<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Posts on Tech Blog</title>
        <link>https://www.dimoulis.net/posts/</link>
        <description>Recent content in Posts on Tech Blog</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en-us</language>
        <copyright>&lt;a href=&#34;https://creativecommons.org/licenses/by-nc/4.0/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;CC BY-NC 4.0&lt;/a&gt;</copyright>
        <lastBuildDate>Wed, 19 Jul 2023 18:57:22 +0300</lastBuildDate>
        <atom:link href="https://www.dimoulis.net/posts/index.xml" rel="self" type="application/rss+xml" />
        
        <item>
            <title>Ubuntu Linux on a Lenovo IdeaPad 3 laptop</title>
            <link>https://www.dimoulis.net/posts/linux-on-lenovo-laptop/</link>
            <pubDate>Wed, 19 Jul 2023 18:57:22 +0300</pubDate>
            
            <guid>https://www.dimoulis.net/posts/linux-on-lenovo-laptop/</guid>
            <description>📷Fré Sonneveld
I bought a Lenovo IdeaPad 3 15ALC6. It features an AMD Ryzen 3 5300U with integrated graphics, 8GB of RAM and 512GB NVMe. These features are adequate for its low price, though an even better deal would have been the slightly more expensive model with AMD Ryzen 5 5500U. The main limitation I can see is the small amount of memory which is shared with the GPU, but this is common in budget laptops.</description>
            <content type="html"><![CDATA[
<figure class="post-cover"><picture>
        <source srcset="fre-sonneveld-q6n8nIrDQHE-unsplash.webp 1x, fre-sonneveld-q6n8nIrDQHE-unsplash-2x.webp 2x" type="image/webp">
        <source srcset="fre-sonneveld-q6n8nIrDQHE-unsplash.jpg 1x, fre-sonneveld-q6n8nIrDQHE-unsplash-2x.jpg 2x" type="image/jpeg">
        <img src="fre-sonneveld-q6n8nIrDQHE-unsplash.jpg"
             alt="Electricity towers"/>
    </picture><figcaption class="left small">
            <p>
                    <a href="https://unsplash.com/photos/q6n8nIrDQHE?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditShareLink">📷Fré Sonneveld</a></p>
        </figcaption>
</figure>

<p>I bought a Lenovo IdeaPad 3 15ALC6. It features an AMD Ryzen 3 5300U with integrated graphics, 8GB of RAM and 512GB NVMe. These features are adequate for its low price, though an even better deal would have been the slightly more expensive model with AMD Ryzen 5 5500U. The main limitation I can see is the small amount of memory which is shared with the GPU, but this is common in budget laptops.</p>
<p>Its compatibility with Linux turned out to be excellent, because Lenovo has apparently made some effort to ensure Fedora and Ubuntu work smoothly on their laptops. The model I bought didn&rsquo;t come with an OS preinstalled, and I decided to install Ubuntu 22.04.</p>
<h2 id="installation">Installation</h2>
<p>The first thing to do before the installation was to enter the BIOS by pressing F2. You can select the boot media by pressing F12.</p>
<p>I disabled Secure Boot because it usually gets in the way of Linux distributions, although it&rsquo;s not strictly necessary for Ubuntu.</p>
<p>Another option of interest was <strong>Set UMA Frame Buffer size</strong>. It lets you choose how much of the shared memory is allocated to the GPU: 512MB, 1GB or the default 2GB. If you leave it at the default setting, you will find that only 6GB out of 8GB are available to the main system, and that&rsquo;s not much.</p>
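<p>You can see the effect of this setting from Linux, since the memory carved out for the GPU is simply invisible to the kernel. A quick check (a sketch using the standard procfs interface):</p>

```shell
# MemTotal excludes the UMA frame buffer reserved for the GPU, so with
# the default 2GB setting this reports roughly 6GB on the 8GB model.
awk '/^MemTotal/ {printf "%.1f GiB\n", $2/1048576}' /proc/meminfo
```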
<p>After completing the installation, I wanted to try certain things that could improve the battery life. I took some hints from the <a href="https://wiki.archlinux.org/title/Lenovo_IdeaPad_3_15alc6_(AMD)">Arch wiki</a>. I configured the system to use the newer AMD P-State driver instead of ACPI CPUFreq. Phoronix has some <a href="https://www.phoronix.com/review/amd-pstate-linux517">benchmarks</a> that show it to be marginally more energy efficient.</p>
<p>I modified <code>/etc/default/grub</code> to look like:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">GRUB_CMDLINE_LINUX_DEFAULT</span><span style="color:#ff79c6">=</span><span style="color:#f1fa8c">&#34;quiet splash amd_pstate=passive&#34;</span>
</span></span></code></pre></div><p>then ran <code>sudo update-grub</code>.</p>
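<p>After rebooting, you can check whether the kernel actually switched drivers. A small sanity check (the sysfs path is the standard cpufreq location; the fallback message is mine):</p>

```shell
# Should print "amd_pstate" once the kernel parameter has taken effect;
# falls back gracefully on systems without cpufreq sysfs.
grep -s . /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver \
  || echo "cpufreq sysfs not available"
```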
<p>Phoronix has also <a href="https://www.phoronix.com/news/KDE-Plasma-Wayland-Power">benchmarked</a> the power consumption of Gnome and KDE under Wayland and X.Org, and found that Wayland is more power efficient in both.</p>
<h2 id="mostly-ac-power">Mostly AC power</h2>
<p>If you mostly use the laptop connected to a power supply, Lenovo has a feature called Conservation Mode which can prolong the life of the battery. It keeps the battery charged at 50-60% instead of going up to 100% and staying there continuously, which can reduce its capacity over time. Here is a good source of <a href="https://batteryuniversity.com/article/bu-808-how-to-prolong-lithium-based-batteries">information</a> on the issue.</p>
<p>To enable Conservation Mode:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">echo</span> <span style="color:#bd93f9">1</span> | sudo tee /sys/bus/platform/drivers/ideapad_acpi/VPC2004:00/conservation_mode
</span></span></code></pre></div><p>To disable Conservation Mode:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">echo</span> <span style="color:#bd93f9">0</span> | sudo tee /sys/bus/platform/drivers/ideapad_acpi/VPC2004:00/conservation_mode
</span></span></code></pre></div><p>This setting is sticky: you only have to set it once and it survives across reboots.</p>
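<p>To check the current state without changing it (the same sysfs node; the guard is there because the node only exists when the <code>ideapad_acpi</code> driver is loaded):</p>

```shell
# Prints 1 when Conservation Mode is enabled, 0 when disabled.
f=/sys/bus/platform/drivers/ideapad_acpi/VPC2004:00/conservation_mode
[ -r "$f" ] && cat "$f" || echo "ideapad_acpi not loaded"
```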
<h2 id="mostly-battery">Mostly battery</h2>
<p>I plan to use this laptop only now and then, mostly on battery. This means it will spend a lot of time in <em>suspend</em> mode. <em>Suspend</em> does use up a small amount of energy over time, so a better idea is to either <em>Power Off</em> or to enable <em>Hibernate</em>.</p>
<p>I enabled <em>Suspend-Then-Hibernate</em>. This mode combines the best of both options: the system is suspended when I close the lid and starts again quickly when I open it. If I don&rsquo;t use it for 3 hours, an RTC trigger wakes it up and hibernates it.</p>
<p>Here is how to enable Hibernate on Ubuntu 22.04:</p>
<ul>
<li>Boot a live USB.</li>
<li>Start <code>gparted</code>.</li>
<li>Shrink the main partition. The laptop has 8GB of RAM, so I went with a 10GB swap partition, following <a href="https://help.ubuntu.com/community/SwapFaq">this advice</a>.</li>
<li>Create a swap partition with type <code>linux-swap</code>.</li>
<li>Reboot into the main installation.</li>
</ul>
<p>Modify <code>/etc/fstab</code> to include the new swap partition:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span><span style="color:#6272a4"># Get the UUID of the swap partition</span>
</span></span><span style="display:flex;"><span>lsblk --fs
</span></span><span style="display:flex;"><span>sudo swapoff -a
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span><span style="color:#6272a4">## /etc/fstab</span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4">#/swapfile  none    swap    defaults        0       0 </span>
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">UUID</span><span style="color:#ff79c6">=</span>&lt;YOUR_SWAP_UUID&gt;  none    swap    defaults        <span style="color:#bd93f9">0</span>       <span style="color:#bd93f9">0</span>
</span></span></code></pre></div><p>Comment out the line about the swapfile, since we are using a swap partition now. <strong>You can later remove /swapfile</strong> to reclaim its space.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>sudo swapon -a
</span></span></code></pre></div><p>Modify <code>/etc/default/grub</code> again to look like:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">GRUB_CMDLINE_LINUX_DEFAULT</span><span style="color:#ff79c6">=</span><span style="color:#f1fa8c">&#34;quiet splash resume=UUID=&lt;YOUR_SWAP_UUID&gt;&#34;</span>
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>sudo update-grub
</span></span></code></pre></div><p>Try it:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>sudo systemctl hibernate
</span></span></code></pre></div><p>This also enabled the option to Hibernate in Gnome&rsquo;s Power Off/Log out menu. I didn&rsquo;t need to install any package, since it&rsquo;s all handled by <code>systemd</code>.</p>
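<p>Two quick sanity checks are also worth running at this point (a sketch using standard procfs/sysfs paths):</p>

```shell
# The new swap partition should be listed here after `swapon -a`:
cat /proc/swaps
# ...and "disk" must appear among the supported sleep states for
# hibernation to be possible at all:
grep -s . /sys/power/state || echo "sleep states not exposed"
```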
<p>And now we can enable <em>Suspend-Then-Hibernate</em>:</p>
<p>Modify <code>/etc/systemd/logind.conf</code>. Change</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span><span style="color:#6272a4">#HandleLidSwitch=suspend</span>
</span></span></code></pre></div><p>to</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">HandleLidSwitch</span><span style="color:#ff79c6">=</span>suspend-then-hibernate
</span></span></code></pre></div><p>You can also use <code>hybrid-sleep</code> if you prefer, which writes the hibernation image to disk and then suspends instead of powering off, so nothing is lost if the battery runs out.</p>
<p>Also have a look at <code>/etc/systemd/sleep.conf</code>, in particular <code>HibernateDelaySec=180min</code>: the time the system stays suspended before it wakes up to hibernate.</p>
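<p>For reference, the relevant fragment looks like this (the 3-hour delay matches the behaviour described above; treat the value as illustrative):</p>

```ini
## /etc/systemd/sleep.conf
[Sleep]
HibernateDelaySec=180min
```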
<p>Reboot.</p>
<blockquote>
<p>Note that this method only handles suspension triggered by closing the lid. If suspension is triggered by other means, e.g. by inactivity, the system will go into normal suspend. That&rsquo;s good enough for me, but there are other options that may work better for you.</p>
</blockquote>
<p>Another tip that will prevent the battery from losing capacity over time: Don&rsquo;t fully charge it and don&rsquo;t let it fully discharge. <a href="https://batteryuniversity.com/article/bu-706-summary-of-dos-and-donts">Keep the battery levels at 30%-80%</a>.</p>
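<p>Relatedly, you can read the current charge level straight from sysfs; note that the battery node name (<code>BAT0</code> below) varies between machines:</p>

```shell
# Current charge as a percentage; guard against machines without
# a battery or with a differently named node.
b=/sys/class/power_supply/BAT0/capacity
[ -r "$b" ] && cat "$b" || echo "no battery found"
```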
<h2 id="other-tweaks">Other tweaks</h2>
<p>Although I prefer Firefox, I installed Google Chrome. I didn&rsquo;t use <code>snap</code> but went instead with <code>apt</code> and the <code>.deb</code> straight from Google. The reason I picked Google Chrome, and not even Chromium, was that I wanted to make sure video decoding was hardware accelerated (and thus more power efficient). You can verify that this is the case by visiting <code>chrome://gpu</code>.</p>
<p>I also tried <a href="https://github.com/linrunner/TLP">TLP</a> and <a href="https://github.com/AdnanHodzic/auto-cpufreq">auto-cpufreq</a>. These are often recommended to optimize battery life further. However, I am not sure they do anything useful. Gnome has <code>power-profiles-daemon</code> and KDE has the equivalent <code>powerdevil</code>. Bluetooth, which is power hungry, is turned off by default by Gnome. So perhaps <code>TLP</code> isn&rsquo;t as useful these days as it used to be. As for <code>auto-cpufreq</code>, what it does is toggle the CPU frequency governor between <code>powersave</code> and <code>performance</code>, as well as turn CPU boost on and off, instead of letting <code>schedutil</code> handle it. It seems a bit messy to me and possibly hurts performance, so I didn&rsquo;t keep it after all.</p>
<p>So what is the battery life on this Linux laptop compared to Windows? I haven&rsquo;t run a benchmark, but compared to a Dell Latitude 3520, a laptop with similar specifications running Windows, my impression is that they last about the same. Linux, however, uses fewer resources, especially memory, and it generally feels snappier.</p>
]]></content>
        </item>
        
        <item>
            <title>TP-LINK Archer T4U Plus for Linux</title>
            <link>https://www.dimoulis.net/posts/tp-link-archer-t4u/</link>
            <pubDate>Sat, 04 Mar 2023 12:42:37 +0200</pubDate>
            
            <guid>https://www.dimoulis.net/posts/tp-link-archer-t4u/</guid>
            <description>📷by me
TP-LINK Archer T4U Plus AC1300 is a WiFi 5 USB adapter. It features 2 antennas which can be rotated in any direction and a USB 3.0 connection (which also works in USB 2). The price is ok. I thought it was a good choice for a spare computer that I don&amp;rsquo;t use often.
Of course the reason that I don&amp;rsquo;t use it often is because the previous adapter, TP-LINK TL-WN722N, a 2.</description>
            <content type="html"><![CDATA[
<figure class="post-cover"><picture>
        <source srcset="tp-link-archer-t4u-plus.webp 1x, tp-link-archer-t4u-plus-2x.webp 2x" type="image/webp">
        <source srcset="tp-link-archer-t4u-plus.jpg 1x, tp-link-archer-t4u-plus-2x.jpg 2x" type="image/jpeg">
        <img src="tp-link-archer-t4u-plus.jpg"
             alt="TP-LINK Archer T4U Plus AC1300"/>
    </picture><figcaption class="left small">
            <p>
                    <a href="https://dimoulis.net">📷by me</a></p>
        </figcaption>
</figure>

<p>TP-LINK Archer T4U Plus AC1300 is a WiFi 5 USB adapter. It features 2 antennas which can be rotated in any direction and a USB 3.0 connection (which also works in USB 2). The price is ok. I thought it was a good choice for a spare computer that I don&rsquo;t use often.</p>
<p>Of course, part of the reason I don&rsquo;t use that computer often was the previous adapter, a TP-LINK TL-WN722N: a 2.4GHz 802.11b/g/n model that performed badly in a congested environment. It was old and it had to go.</p>
<h2 id="before-you-buy">Before you buy</h2>
<p>To avoid any bad surprises, I tried to find out whether it was compatible with Linux before buying it. TP-LINK doesn&rsquo;t officially support Linux (only Windows). Nevertheless, I found that many people had it working without problems, so it was worth buying.</p>
<p>The chipset used by the device is either Realtek RTL8812AU or RTL8812BU depending on the version V1, V2 or V3 and <strong>you need a specific driver for each chipset!</strong></p>
<p><code>lsusb</code> gave me this information:</p>
<p><code>Bus 001 Device 002: ID 2357:010e TP-Link Archer T4UH v2 [Realtek RTL8812AU]</code></p>
<p>So mine is a V2 with a Realtek RTL8812AU.</p>
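<p>If you want to narrow the <code>lsusb</code> output down to TP-Link hardware, filtering by the vendor ID works; this is a sketch that degrades gracefully when <code>usbutils</code> isn&rsquo;t installed:</p>

```shell
# 2357 is TP-Link's USB vendor ID; the device ID (010e above)
# is what tells the hardware revisions apart.
lsusb -d 2357: 2>/dev/null | grep . \
  || echo "lsusb not available or no TP-Link device attached"
```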
<h2 id="realtek-rtl8812au">Realtek RTL8812AU</h2>
<p>The driver that I ended up using is <a href="https://github.com/aircrack-ng/rtl8812au">https://github.com/aircrack-ng/rtl8812au</a> which can be installed on Ubuntu and other distributions. It is also packaged for <a href="https://aur.archlinux.org/packages/aircrack-ng-git">Arch Linux</a>.</p>
<p>To install on Ubuntu, you first have to install some requirements for building modules e.g.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>sudo apt install build-essential dkms git iw
</span></span></code></pre></div><p>and on Debian and others, also make sure you have the Linux headers installed. Then follow the instructions on the driver&rsquo;s site to install it with DKMS.</p>
<p>Actually, Ubuntu and Debian do have a packaged driver for RTL8812AU. You can try it with</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>sudo apt install rtl8812au-dkms
</span></span></code></pre></div><p>I can&rsquo;t comment on how well this one works. I didn&rsquo;t try it much because I had aircrack-ng already working. I believe aircrack-ng is actively maintained and performs better.</p>
<h2 id="realtek-rtl8812bu">Realtek RTL8812BU</h2>
<p>On the other hand, if you have an RTL8812BU you can try this driver:</p>
<p><a href="https://github.com/morrownr/88x2bu-20210702">https://github.com/morrownr/88x2bu-20210702</a></p>
<p>There are other drivers too, and Linux 6.2+ probably includes a driver of its own. The above is still recommended for the time being because it is better tested.</p>
<h2 id="what-about-wifi-6">What about WiFi 6</h2>
<p>Maybe you noticed that I bought a WiFi 5 adapter while WiFi 6 is available. One reason is that I thought WiFi 5 would be enough for these specific needs: I only need to saturate the Internet connection and I don&rsquo;t use that computer very often. So I thought I could keep the cost down and buy a WiFi 5 extender with the money I saved, to improve the signal.</p>
<p>There is also another important reason. The TP-LINK TX20U Plus is another model that I considered: it&rsquo;s a bit more expensive but supports WiFi 6. It uses the Realtek RTL8852AU chipset, and a driver for it is at <a href="https://github.com/lwfinger/rtl8852au">https://github.com/lwfinger/rtl8852au</a>; however, the driver does not perform well and may have other problems. The very useful <a href="https://github.com/morrownr/USB-WiFi/blob/main/home/USB_WiFi_Chipsets.md">USB WiFi Chipset reference</a> specifically says to <em>avoid</em> it.</p>
<p>From what I gathered, if you are looking for hassle-free, plug-and-play WiFi 6, try to buy a PCI Express WiFi card with an Intel chipset. Intel fully supports Linux and its drivers are in the kernel: worst case, you have to upgrade the kernel. Of course that&rsquo;s not the only option, other chipsets may have good support too, but an Intel chipset is a &ldquo;fail-safe&rdquo; option for Linux. So do try to find out the chipset used and the maturity of its Linux support before buying any WiFi adapter.</p>
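<p>To that end, a quick way to list the network chipsets on a running system (PCI first, then USB; both commands fall back gracefully if <code>pciutils</code>/<code>usbutils</code> are missing):</p>

```shell
# PCI network devices (e.g. Intel WiFi cards) show up here...
lspci 2>/dev/null | grep -iE 'network|wireless' \
  || echo "no PCI network device listed"
# ...and USB adapters here; the chipset is often named in brackets.
lsusb 2>/dev/null | grep -iE 'wireless|802\.11|wlan' \
  || echo "no USB wireless device listed"
```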
]]></content>
        </item>
        
        <item>
            <title>Replacing Pulseaudio With Pipewire&#43;Wireplumber</title>
            <link>https://www.dimoulis.net/posts/replacing-pulseaudio-with-pipewire/</link>
            <pubDate>Sun, 22 Jan 2023 12:07:17 +0200</pubDate>
            
            <guid>https://www.dimoulis.net/posts/replacing-pulseaudio-with-pipewire/</guid>
            <description>📷C D-X
The latest update of Arch Linux informed me that pipewire-media-session, a dependency of kwin, is deprecated and would soon be removed from the repositories. I was advised to replace it with wireplumber. Now, I did try wireplumber in the past, but it didn&amp;rsquo;t go so well so I had to revert it. It&amp;rsquo;s finally time to move on, I guess.
The problem is that this had some cascading effects and as a result I had to let go of the (old trusted) pulseaudio as well.</description>
            <content type="html"><![CDATA[
<figure class="post-cover"><picture>
        <source srcset="c-d-x-dBwadhWa-lI-unsplash.webp 1x, c-d-x-dBwadhWa-lI-unsplash-2x.webp 2x" type="image/webp">
        <source srcset="c-d-x-dBwadhWa-lI-unsplash.jpg 1x, c-d-x-dBwadhWa-lI-unsplash-2x.jpg 2x" type="image/jpeg">
        <img src="c-d-x-dBwadhWa-lI-unsplash.jpg"
             alt="Something should come out of these"/>
    </picture><figcaption class="left small">
            <p>
                    <a href="https://unsplash.com/photos/dBwadhWa-lI?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditShareLink">📷C D-X</a></p>
        </figcaption>
</figure>

<p>The latest update of Arch Linux informed me that <code>pipewire-media-session</code>, a dependency of <code>kwin</code>, is deprecated and would soon be removed from the repositories. I was advised to replace it with <code>wireplumber</code>. Now, I did try <code>wireplumber</code> in the past, but it didn&rsquo;t go so well, so I had to revert it. It&rsquo;s finally time to move on, I guess.</p>
<p>The problem is that this had some cascading effects and as a result I had to let go of the (old trusted) <code>pulseaudio</code> as well.</p>
<h2 id="not-what-i-planned-for-sunday-morning">Not what I planned for Sunday morning</h2>
<p>The first step was to replace <code>pipewire-media-session</code> with <code>wireplumber</code> as instructed:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>$ sudo pacman --asdeps -S wireplumber
</span></span></code></pre></div><p>which removed <code>pipewire-media-session</code>. Rebooting to check how well things went:</p>
<ul>
<li>System sound (back lineout) works</li>
<li>System sound (front headphones) works</li>
<li>System microphone (front) does <em>not</em> work</li>
<li>HDMI sound through the video card does <em>not</em> work</li>
<li>Webcam, didn&rsquo;t test</li>
</ul>
<p>Obviously, not good enough.</p>
<p>It turns out that <code>pipewire</code> was missing its plugins (backends), but that meant removing <code>pulseaudio</code> since they are in conflict.</p>
<pre tabindex="0"><code>$ sudo pacman --asdeps -S pipewire-alsa pipewire-pulse pipewire-jack
resolving dependencies...
looking for conflicting packages...
:: pipewire-alsa and pulseaudio-alsa are in conflict. Remove pulseaudio-alsa? [y/N] y
:: pipewire-pulse and pulseaudio are in conflict. Remove pulseaudio? [y/N] y
:: pipewire-jack and jack2 are in conflict (jack). Remove jack2? [y/N] y

Packages (6) jack2-1.9.21-3 [removal]  pulseaudio-16.1-3 [removal]  pulseaudio-alsa-1:1.2.7.1-1 [removal]
             pipewire-alsa-1:0.3.64-1  pipewire-jack-1:0.3.64-1  pipewire-pulse-1:0.3.64-1

Total Download Size:    0.31 MiB
Total Installed Size:   1.11 MiB
Net Upgrade Size:      -6.70 MiB

:: Proceed with installation? [Y/n]
</code></pre><p>So we are replacing distinct packages with a monolith to rule-them-all. I am getting <code>systemd</code> vibes here.</p>
<h2 id="progress-and-neverending-bug-squashing">Progress (and neverending bug squashing)</h2>
<p>After rebooting and testing everything, things now appear to work, including the webcam with its own microphone.
Since I have <code>multilib</code> enabled, I also installed the lib32 versions:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>$ sudo pacman --asdeps -S lib32-pipewire lib32-pipewire-jack
</span></span></code></pre></div><p>For video, some packages to consider are (according to the wiki) <code>gst-plugin-pipewire</code> (for gstreamer) and/or <code>pipewire-v4l2</code> but I didn&rsquo;t have a need for them so far&hellip; Webex, at least, works with the current configuration.</p>
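<p>One more sanity check: applications talk to <code>pipewire-pulse</code> through the PulseAudio protocol, so the advertised server should mention PipeWire (the fallback covers machines where <code>pactl</code> isn&rsquo;t available or no server is running):</p>

```shell
# Expected to show something like "Server Name: PulseAudio (on PipeWire 0.3.64)"
pactl info 2>/dev/null | grep -i 'server name' \
  || echo "pactl not available or no sound server running"
```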
<p>Another thing to consider: I had some options in <code>/etc/pulse/default.pa</code> to avoid a crackling noise in HDMI output. I am not sure if they are still needed, but now that <code>pulseaudio</code> is gone, the place for <code>pulseaudio</code>-related options is <code>/usr/share/pipewire/pipewire-pulse.conf</code>, after copying it somewhere under <code>/etc/pipewire</code>.</p>
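<p>For illustration, here is the shape of such an override; the <code>pulse.properties</code> block and the <code>pulse.min.quantum</code> key exist in the shipped default config, but the value below is purely illustrative, not a recommendation:</p>

```shell
## /etc/pipewire/pipewire-pulse.conf (copied from /usr/share/pipewire/)
pulse.properties = {
    # A larger minimum quantum means bigger buffers, which can
    # help against crackling at the cost of latency.
    pulse.min.quantum = 1024/48000
}
```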
]]></content>
        </item>
        
        <item>
            <title>WebDAV Behind A Nginx Reverse Proxy</title>
            <link>https://www.dimoulis.net/posts/webdav-behind-reverse-proxy/</link>
            <pubDate>Sun, 10 Apr 2022 11:37:24 +0300</pubDate>
            
            <guid>https://www.dimoulis.net/posts/webdav-behind-reverse-proxy/</guid>
            <description>📷Mor THIAM
I wanted a file share somewhere on the Internet and WebDAV is one of the most widely supported protocols. Before it, I considered the alternatives:
Give the files to a &amp;ldquo;cloud&amp;rdquo; provider to do with them as they please. No. Even if they are just groceries lists. SSHFS works nicely without any effort, except that it&amp;rsquo;s slow and not supported by Windows Explorer. Nextcloud would also work, but I think it&amp;rsquo;s an overkill since it&amp;rsquo;s a fairly complex PHP application that does many things, most of which I don&amp;rsquo;t care about, and yet it doesn&amp;rsquo;t do other things I care about, such as being compatible with rsync.</description>
            <content type="html"><![CDATA[
<figure class="post-cover"><picture>
        <source srcset="mor-thiam-uqtyr_EZ_As-unsplash.webp 1x, mor-thiam-uqtyr_EZ_As-unsplash-2x.webp 2x" type="image/webp">
        <source srcset="mor-thiam-uqtyr_EZ_As-unsplash.jpg 1x, mor-thiam-uqtyr_EZ_As-unsplash-2x.jpg 2x" type="image/jpeg">
        <img src="mor-thiam-uqtyr_EZ_As-unsplash.jpg"
             alt="Putting the final pieces to the puzzle"/>
    </picture><figcaption class="left small">
            <p>
                    <a href="https://unsplash.com/photos/uqtyr_EZ_As?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditShareLink">📷Mor THIAM</a></p>
        </figcaption>
</figure>

<p>I wanted a file share somewhere on the Internet, and WebDAV is one of the most widely supported protocols. Before settling on it, I considered the alternatives:</p>
<ul>
<li>Give the files to a &ldquo;cloud&rdquo; provider to do with them as they please. <em>No</em>. Even if they are just groceries lists.</li>
<li>SSHFS works nicely without any effort, except that it&rsquo;s slow and not supported by Windows Explorer.</li>
<li>Nextcloud would also work, but I think it&rsquo;s overkill, since it&rsquo;s a fairly complex PHP application that does many things, most of which I don&rsquo;t care about, and yet it doesn&rsquo;t do other things I care about, such as being compatible with <code>rsync</code>.</li>
</ul>
<blockquote>
<p>I wanted to do it my way and <em>KISS</em>. I finally managed to do it my way but <em>KISS</em> is in the eye of the beholder.</p>
</blockquote>
<p>There are several guides on setting up Nginx or Apache for file sharing with WebDAV. Except&hellip; none mention all the small bugs and incompatibilities I found along the way. You see, I wanted to access the share from several clients&hellip;</p>
<ul>
<li>Windows Explorer</li>
<li>KDE Dolphin</li>
<li>Gnome and MATE</li>
<li>Android FolderSync</li>
</ul>
<p>And many of them had their own idiosyncratic way of interpreting the RFCs that describe WebDAV.</p>
<p>Here are some of the bugs, problems, things I couldn&rsquo;t get the server and clients to agree on:</p>
<ul>
<li>Sensitive to trailing slashes.</li>
<li>Windows Explorer sends an OPTIONS / even when the share is in a subfolder.</li>
<li>Must support LOCK.</li>
<li>MOVE and COPY expect a Destination: header to match the Host: scheme (which isn&rsquo;t necessarily true if you use a reverse proxy).</li>
<li>KDE Dolphin couldn&rsquo;t interpret the MIME type when used with a Go based client. This seems to have been fixed in the latest version of Dolphin.</li>
</ul>
<p>I had to try a few solutions until I found something that fully worked.</p>
<h2 id="nginx-on-its-own">Nginx on its own</h2>
<p>Nginx has rudimentary support for WebDAV and needs a separately maintained extension to support LOCK and other methods. It suffered from incompatibilities with missing trailing slashes. There are some guides that attempt to add all the extra hacks needed to make it work, but I gave up on this for the time being. If you are interested in trying it anyway, be informed that the version of the extension included in Ubuntu 20.04 LTS is not recent enough.</p>
<p>The next thing to try was a separate WebDAV server behind an Nginx reverse proxy, which is what this blog post is about. I already used Nginx for some web hosting and simply wanted to add a WebDAV file share.</p>
<h2 id="dave">Dave</h2>
<p>It&rsquo;s a standalone WebDAV server with a simple <code>.yaml</code> configuration. It supports multiple users, each with their own folder and password protection. It is written in Go, using the same semi-standard library that Caddy uses for its WebDAV extension. What it doesn&rsquo;t do is provide &ldquo;autoindex&rdquo;, i.e. the ability to browse the files with a web browser, but we can add this functionality with our reverse proxy.</p>
<p>Do <a href="https://github.com/micromata/dave">give it a try</a>, if you just want something to quickly set up and be done with it. However I should let you know that its development <em>has finished successfully</em>, all planned features have been added and all bugs have been squashed, which means it won&rsquo;t be updated further except to keep its dependencies up to date. Here is a fork I made with <a href="https://github.com/ddmls/dave">updated dependencies</a> and minor cleanup.</p>
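<p>For orientation, here is roughly what a <code>config.yml</code> for dave looks like. This is reconstructed from memory of the project&rsquo;s README, so verify the key names against the repo before relying on it; the user, paths and the truncated bcrypt hash are placeholders:</p>

```yaml
address: "127.0.0.1"   # listen only on localhost, behind the proxy
port: "8000"
dir: "/srv/webdav"
users:
  alice:                      # hypothetical user
    password: "$2a$10$..."    # bcrypt hash, truncated here
    subdir: "/alice"
```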
<h2 id="apache">Apache</h2>
<p>Apache is probably the oldest and most widely used implementation of a WebDAV server, except for the fact that I already use Nginx so why would I want to install Apache as well? Just for its WebDAV functionality, behind Nginx.</p>
<p>I won&rsquo;t give full installation instructions here; there are enough guides for that. However, just for completeness, and because everybody loves copy-paste, here is an example of how it looks:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>DavLockDB /usr/local/apache/var/DavLock
</span></span><span style="display:flex;"><span>&lt;VirtualHost 127.0.0.1:8080&gt;
</span></span><span style="display:flex;"><span>    ServerAdmin admin@domain.tld
</span></span><span style="display:flex;"><span>        ServerName domain.tld
</span></span><span style="display:flex;"><span>        ServerAlias domain.tld
</span></span><span style="display:flex;"><span>        DocumentRoot /path/to/webdav
</span></span><span style="display:flex;"><span>        ErrorLog <span style="color:#f1fa8c">${</span><span style="color:#8be9fd;font-style:italic">APACHE_LOG_DIR</span><span style="color:#f1fa8c">}</span>/error.log
</span></span><span style="display:flex;"><span>        CustomLog <span style="color:#f1fa8c">${</span><span style="color:#8be9fd;font-style:italic">APACHE_LOG_DIR</span><span style="color:#f1fa8c">}</span>/access.log combined
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>        <span style="color:#6272a4">#Alias /webdav /path/to/webdav</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>        &lt;Directory /path/to/webdav&gt;
</span></span><span style="display:flex;"><span>            DAV On
</span></span><span style="display:flex;"><span>        &lt;/Directory&gt;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>        <span style="color:#6272a4"># See https://github.com/BytemarkHosting/docker-webdav/blob/master/2.4/conf/conf-available/dav.conf</span>
</span></span><span style="display:flex;"><span>        <span style="color:#6272a4"># We turn it on for all User Agents here because Windows Explorer and KDE Dolphin appear affected</span>
</span></span><span style="display:flex;"><span>        SetEnv redirect-carefully
</span></span><span style="display:flex;"><span>&lt;/VirtualHost&gt;
</span></span></code></pre></div><p>If you use /srv, take a look at apache2.conf to make sure the proper lines are uncommented. You also have to create /usr/local/apache/var/DavLock, owned and writable by www-data:www-data (on Ubuntu/Debian).</p>
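<p>A sketch of that last step, assuming the Debian/Ubuntu layout. The <code>PREFIX</code> dry-run variable is my own addition so the commands can be tried without root; on the real server drop it and run as root:</p>

```shell
# Create the directory that holds Apache's DAV lock database; the path
# matches the DavLockDB directive in the vhost above.
# PREFIX lets you dry-run under /tmp; on the server use PREFIX="" and sudo.
PREFIX=/tmp/dav-demo
mkdir -p "$PREFIX/usr/local/apache/var"
# chown www-data:www-data /usr/local/apache/var   # real server only
ls -d "$PREFIX/usr/local/apache/var"
```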
<h2 id="nginx-reverse-proxy">Nginx Reverse Proxy</h2>
<p>Here is the fun part! (not really).</p>
<p>After putting Dave or Apache behind the reverse proxy, it worked as intended <em>except</em> for renaming and copying files. Investigating further and <a href="https://mailman.nginx.org/pipermail/nginx/2007-January/000504.html">standing on the shoulders of <del>previous sufferers</del> giants</a>, I finally figured out the cause: the MOVE and COPY methods use a Destination header whose scheme must match that of the Host, e.g. https for https and not http. Behind a reverse proxy that terminates TLS and talks plain http to the backend, this no longer holds.</p>
<p>Add to this a <a href="https://trac.nginx.org/nginx/ticket/348">long-standing bug</a> in Nginx and its suggested workaround, and you get this <em>Magic Copypasta</em>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>    <span style="color:#8be9fd;font-style:italic">set</span> <span style="color:#8be9fd;font-style:italic">$dest</span> <span style="color:#8be9fd;font-style:italic">$http_destination</span>;
</span></span><span style="display:flex;"><span>    <span style="color:#ff79c6">if</span> <span style="color:#ff79c6">(</span><span style="color:#8be9fd;font-style:italic">$http_destination</span> ~ <span style="color:#f1fa8c">&#34;^https://(?&lt;myvar&gt;(.+))&#34;</span><span style="color:#ff79c6">)</span> <span style="color:#ff79c6">{</span>
</span></span><span style="display:flex;"><span>       <span style="color:#8be9fd;font-style:italic">set</span> <span style="color:#8be9fd;font-style:italic">$dest</span> http://<span style="color:#8be9fd;font-style:italic">$myvar</span>;
</span></span><span style="display:flex;"><span>    <span style="color:#ff79c6">}</span>
</span></span><span style="display:flex;"><span>    proxy_set_header Destination <span style="color:#8be9fd;font-style:italic">$dest</span>;
</span></span></code></pre></div><p>It looks ugly, but it&rsquo;s a workaround for a bug after all.</p>
<p>Here&rsquo;s how it ended up looking after several iterations. Nginx handles the <code>Basic Authentication</code>. There are some optimizations to use <code>keepalive</code> and reduce the number of localhost connections, which may or may not matter in practice.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>upstream webdav <span style="color:#ff79c6">{</span>
</span></span><span style="display:flex;"><span>    server 127.0.0.1:8080;
</span></span><span style="display:flex;"><span>    keepalive 32;
</span></span><span style="display:flex;"><span><span style="color:#ff79c6">}</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>server <span style="color:#ff79c6">{</span>
</span></span><span style="display:flex;"><span>    server_name domain.tld;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    root /var/www/html;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    access_log /var/log/nginx/domain.tld;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    client_max_body_size 0;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    location / <span style="color:#ff79c6">{</span>
</span></span><span style="display:flex;"><span>        auth_basic           <span style="color:#f1fa8c">&#34;Restricted area&#34;</span>;
</span></span><span style="display:flex;"><span>        auth_basic_user_file /etc/nginx/.htpasswd;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#6272a4"># https://mailman.nginx.org/pipermail/nginx/2007-January/000504.html - fix Destination: header</span>
</span></span><span style="display:flex;"><span>    <span style="color:#6272a4"># https://trac.nginx.org/nginx/ticket/348 - bug, workaround with named capture</span>
</span></span><span style="display:flex;"><span>    <span style="color:#8be9fd;font-style:italic">set</span> <span style="color:#8be9fd;font-style:italic">$dest</span> <span style="color:#8be9fd;font-style:italic">$http_destination</span>;
</span></span><span style="display:flex;"><span>    <span style="color:#ff79c6">if</span> <span style="color:#ff79c6">(</span><span style="color:#8be9fd;font-style:italic">$http_destination</span> ~ <span style="color:#f1fa8c">&#34;^https://(?&lt;myvar&gt;(.+))&#34;</span><span style="color:#ff79c6">)</span> <span style="color:#ff79c6">{</span>
</span></span><span style="display:flex;"><span>       <span style="color:#8be9fd;font-style:italic">set</span> <span style="color:#8be9fd;font-style:italic">$dest</span> http://<span style="color:#8be9fd;font-style:italic">$myvar</span>;
</span></span><span style="display:flex;"><span>    <span style="color:#ff79c6">}</span>
</span></span><span style="display:flex;"><span>    proxy_set_header Destination       <span style="color:#8be9fd;font-style:italic">$dest</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#6272a4">#rewrite /webdav/(.*) /$1 break;</span>
</span></span><span style="display:flex;"><span>    proxy_pass http://webdav;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    proxy_buffering off;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#6272a4"># Keep-alive</span>
</span></span><span style="display:flex;"><span>    proxy_http_version                 1.1;
</span></span><span style="display:flex;"><span>    proxy_set_header Connection        <span style="color:#f1fa8c">&#34;&#34;</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#6272a4"># Proxy headers</span>
</span></span><span style="display:flex;"><span>    proxy_set_header Host              <span style="color:#8be9fd;font-style:italic">$host</span>;
</span></span><span style="display:flex;"><span>    proxy_set_header X-Real-IP         <span style="color:#8be9fd;font-style:italic">$remote_addr</span>;
</span></span><span style="display:flex;"><span>    proxy_set_header X-Forwarded-For   <span style="color:#8be9fd;font-style:italic">$proxy_add_x_forwarded_for</span>;
</span></span><span style="display:flex;"><span>    proxy_set_header X-Forwarded-Proto <span style="color:#8be9fd;font-style:italic">$scheme</span>;
</span></span><span style="display:flex;"><span>    proxy_set_header X-Forwarded-Host  <span style="color:#8be9fd;font-style:italic">$host</span>;
</span></span><span style="display:flex;"><span>    proxy_set_header X-Forwarded-Port  <span style="color:#8be9fd;font-style:italic">$server_port</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#6272a4"># Proxy timeouts between successive read/write operations, not the whole request.</span>
</span></span><span style="display:flex;"><span>    proxy_connect_timeout              300s;
</span></span><span style="display:flex;"><span>    proxy_send_timeout                 300s;
</span></span><span style="display:flex;"><span>    proxy_read_timeout                 300s;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    listen <span style="color:#ff79c6">[</span>::<span style="color:#ff79c6">]</span>:443 ssl http2;
</span></span><span style="display:flex;"><span>    listen <span style="color:#bd93f9">443</span> ssl http2;
</span></span><span style="display:flex;"><span>    <span style="color:#6272a4"># ... SSL stuff ...</span>
</span></span><span style="display:flex;"><span><span style="color:#ff79c6">}</span>
</span></span></code></pre></div><p>Left as an exercise for the reader:</p>
<ul>
<li>Create the .htpasswd file (I used Apache&rsquo;s htpasswd).</li>
<li>Use 308 redirects instead of 301 to redirect <code>http</code> to <code>https</code> (because 301 doesn&rsquo;t support all the request methods of WebDAV).</li>
<li>If needed, include an extra location with &ldquo;autoindex&rdquo;.</li>
</ul>
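<p>For the second exercise, here is a minimal sketch of such a redirect server block (the domain is the placeholder used throughout this post; certificate directives are omitted). Nginx supports <code>return 308</code> since 1.13.0:</p>

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name domain.tld;

    # 301 may be rewritten to GET by clients; 308 preserves the method and
    # body, which WebDAV verbs like PROPFIND, MOVE and COPY need.
    return 308 https://$host$request_uri;
}
```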
]]></content>
        </item>
        
        <item>
            <title>How To Use SSH Over An HTTP Proxy</title>
            <link>https://www.dimoulis.net/posts/ssh-over-proxy/</link>
            <pubDate>Tue, 09 Nov 2021 16:38:09 +0200</pubDate>
            
            <guid>https://www.dimoulis.net/posts/ssh-over-proxy/</guid>
            <description>📷Modestas Urbonas
ssh is a very versatile tool that many of us depend on for everyday tasks. Other than the basic connection to a remote host and multiplexing the terminal with tmux or screen, it is also used
together with rsync to execute remote commands in scripts for port forwarding for an adhoc VPN of sorts That&amp;rsquo;s a lot of functionality. Yet in some places the connection is restricted behind an HTTP proxy that won&amp;rsquo;t let ssh do its magic.</description>
            <content type="html"><![CDATA[
<figure class="post-cover"><picture>
        <source srcset="modestas-urbonas-vj_9l20fzj0-unsplash.webp 1x, modestas-urbonas-vj_9l20fzj0-unsplash-2x.webp 2x" type="image/webp">
        <source srcset="modestas-urbonas-vj_9l20fzj0-unsplash.jpg 1x, modestas-urbonas-vj_9l20fzj0-unsplash-2x.jpg 2x" type="image/jpeg">
        <img src="modestas-urbonas-vj_9l20fzj0-unsplash.jpg"
             alt="Connect two remote places"/>
    </picture><figcaption class="left small">
            <p>
                    <a href="https://unsplash.com/photos/vj_9l20fzj0?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditShareLink">📷Modestas Urbonas</a></p>
        </figcaption>
</figure>

<p><code>ssh</code> is a very versatile tool that many of us depend on for everyday tasks. Other than the basic connection to a remote host and multiplexing the terminal with tmux or screen, it is also used</p>
<ul>
<li>together with <code>rsync</code></li>
<li>to execute remote commands in scripts</li>
<li>for port forwarding</li>
<li>for an ad hoc VPN of sorts</li>
</ul>
<p>That&rsquo;s a lot of functionality. Yet in some places the connection is restricted behind an HTTP proxy that won&rsquo;t let <code>ssh</code> do its magic. Fortunately, it&rsquo;s still possible to configure <code>ssh</code> for these cases, and here we&rsquo;ll cover such a scenario. HTTP proxies usually only allow connections to specific ports such as 80 and 443, but with the CONNECT method they can tunnel arbitrary TCP streams to those ports.</p>
<h2 id="linux-macos-and-windows-with-wsl">Linux, MacOS and Windows with WSL</h2>
<p>We will be using <code>netcat-openbsd</code>, as it&rsquo;s called in Ubuntu and Debian. Apparently there are two implementations of netcat, and we want the one that supports the -x &ldquo;connect to proxy&rdquo; parameter. Here is what <code>~/.ssh/config</code> would look like:</p>
<pre tabindex="0"><code>Host otherside
    HostName example.com
    User torvalds
    Port 443
    IdentityFile ~/.ssh/id_ed25519
    ProxyCommand nc -X connect -x 10.20.30.40:8080 %h %p
    LocalForward 9999 127.0.0.1:5050
</code></pre><p>We are connecting to <code>torvalds@example.com</code> over an HTTP proxy at 10.20.30.40:8080. netcat also supports SOCKS proxies and authentication if you need them, but you&rsquo;ll have to <code>man nc</code> for more information on these topics. As a bonus, we forward the local port 9999 to port 5050 on the remote server.</p>
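<p>Under the hood, <code>nc -X connect</code> issues an HTTP CONNECT request to the proxy and then relays the raw TCP stream. Roughly, the handshake looks like this (sketched here with <code>printf</code>; the exact headers a given client sends may differ):</p>

```shell
# Roughly the request a CONNECT-capable client sends to the proxy:
request='CONNECT example.com:443 HTTP/1.1'
printf '%s\r\nHost: example.com:443\r\n\r\n' "$request"
# The proxy answers with "200 Connection established" and from then on the
# TCP stream passes through verbatim, carrying the ssh protocol.
```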
<p>The tricky part is that the <code>ssh</code> server has to listen on port 443, which is normally used by HTTPS. Don&rsquo;t worry about that, we&rsquo;ll fix it later.</p>
<h2 id="windows-native">Windows native</h2>
<p>Windows users are probably used to tools like <code>PuTTY</code> and <code>WinSCP</code> to handle their <code>ssh</code> and <code>sftp</code> connections. These programs do support proxy connections, forwarding ports and the like. One thing to keep in mind is that <code>PuTTY</code> uses its own file format for <code>ssh</code> key files; however, it&rsquo;s possible to import an existing <code>openssh</code> key into it. We will not be covering their configuration here, though. One limitation is that they cannot be used with <code>Visual Studio Code</code> for remote development over <code>ssh</code>, and you don&rsquo;t get the handy <code>rsync</code> either.</p>
<p>Windows comes with its own version of OpenSSH, which can be enabled as an optional feature. Its configuration files can be found in <code>C:\Users\username\.ssh</code>. We also need to install <code>Nmap</code>, which comes with its own netcat-like program called <code>ncat</code>.</p>
<p><code>C:\Users\username\.ssh\config</code>:</p>
<pre tabindex="0"><code>Host otherside
    HostName example.com
    User torvalds
    Port 443
    IdentityFile C:\Users\torvalds\.ssh\id_ed25519
    ProxyCommand "C:\Program Files (x86)\Nmap\ncat.exe" --proxy 10.20.30.40:8080 %h %p
    LocalForward 9999 127.0.0.1:5050
</code></pre><p>It works the same as described in the previous section.</p>
<p>Some gotchas&hellip;</p>
<ul>
<li><code>Nmap</code> needs Administrator rights in order to be installed and used, but <code>ncat</code> doesn&rsquo;t. If you are unable to install <code>Nmap</code>, then I suggest that you install it on a computer where you do have Administrator rights, then copy <code>ncat.exe</code>, all DLLs and <code>ca-bundle.crt</code>. These are all the files you need.</li>
<li>Do not use the old, so-called portable version linked from ncat&rsquo;s site.</li>
<li>Nmap 7.93 has a <a href="https://github.com/openssl/openssl/issues/19191">bug</a>; use 7.92 or a release newer than 7.93 instead.</li>
</ul>
<h2 id="dnat-the-incoming-connection-on-the-server">DNAT the incoming connection on the server</h2>
<p>As we mentioned before, the <code>ssh</code> client has to connect to port 443 in order to pass through an HTTP proxy. That port may already be in use by a web server. We can work around this requirement with a DNAT rule on the server.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>sysctl net.ipv4.conf.<span style="color:#8be9fd;font-style:italic">$dev</span>.forwarding<span style="color:#ff79c6">=</span><span style="color:#bd93f9">1</span>
</span></span><span style="display:flex;"><span>iptables -t nat -A PREROUTING -p tcp -i <span style="color:#8be9fd;font-style:italic">$dev</span> --src <span style="color:#8be9fd;font-style:italic">$proxy_ip</span> --dport <span style="color:#bd93f9">443</span> -j DNAT --to-destination <span style="color:#8be9fd;font-style:italic">$my_ip</span>:<span style="color:#8be9fd;font-style:italic">$ssh_port</span>
</span></span></code></pre></div><p>$proxy_ip is the outgoing IP of the HTTP proxy, e.g. the address duckduckgo.com gives you when you search for &ldquo;my ip&rdquo;. $my_ip is the server&rsquo;s IP and $ssh_port the port <code>ssh</code> is normally listening on.</p>
<p><strong>Note that both port 443 and $ssh_port must be open in the firewall rules</strong> or at least accept connections coming from $proxy_ip.</p>
<h2 id="final-remarks">Final remarks</h2>
<p>You may want to encrypt the connection with TLS. This mainly serves to obfuscate the headers sent by <code>ssh</code> and make it look more like a common HTTPS connection. This will not be covered here; for more information, have a look <a href="https://nmap.org/ncat/guide/ncat-ssl.html">at the program&rsquo;s guide</a>.</p>
<p>That&rsquo;s it. Have fun.</p>
]]></content>
        </item>
        
        <item>
            <title>Search Console Couldn&#39;t Fetch Sitemap</title>
            <link>https://www.dimoulis.net/posts/couldnt-fetch-sitemap/</link>
            <pubDate>Sat, 18 Sep 2021 12:27:07 +0300</pubDate>
            
            <guid>https://www.dimoulis.net/posts/couldnt-fetch-sitemap/</guid>
            <description>The infamous &amp;ldquo;Couldn&amp;rsquo;t fetch sitemap&amp;rdquo; I was having this problem for some time. After submitting this site&amp;rsquo;s sitemap.xml to Google Search Console, I was getting a red warning that it couldn&amp;rsquo;t fetch the sitemap, because of some unspecified HTTP error. Not very helpful. Plus there wasn&amp;rsquo;t any sign in the server logs that it was attempting to fetch it.
Apparently this is a fairly common problem and there can be many causes for it:</description>
            <content type="html"><![CDATA[<h2 id="the-infamous-couldnt-fetch-sitemap">The infamous &ldquo;Couldn&rsquo;t fetch sitemap&rdquo;</h2>
<p>I was having this problem for some time. After submitting this site&rsquo;s <code>sitemap.xml</code> to Google Search Console, I was getting a red warning that it couldn&rsquo;t fetch the sitemap, because of some unspecified HTTP error. Not very helpful. Plus there wasn&rsquo;t any sign in the server logs that it was attempting to fetch it.</p>
<p>Apparently this is a fairly common problem and there can be many causes for it:</p>
<ul>
<li>A malformed <code>sitemap.xml</code></li>
<li><code>robots.txt</code> or <code>X-Robots-Tag</code> blocking access to crawlers</li>
<li>Connectivity problems, such as IPv6</li>
</ul>
<p>I <a href="https://www.xml-sitemaps.com/validate-xml-sitemap.html">double checked</a> that the sitemap was correct and it could be fetched by others. Bing Webmaster Tools had no problem accessing and parsing it either.</p>
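<p>Well-formedness, at least, can be checked locally before blaming the crawler. A quick sketch, using Python&rsquo;s standard library from the shell (the sample file and path are made up for illustration):</p>

```shell
# Write a tiny sitemap and verify that it parses as XML.
cat > /tmp/sitemap-check.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.example.com/posts/</loc></url>
</urlset>
EOF
# minidom raises (and the command exits non-zero) on malformed XML.
python3 -c 'import sys, xml.dom.minidom; xml.dom.minidom.parse(sys.argv[1])' /tmp/sitemap-check.xml \
  && echo "well-formed"
```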
<p>Some people advise just waiting it out until it solves itself, but after waiting for a few weeks things hadn&rsquo;t changed. The cause had to be something more obscure. A hint was in the <code>nginx</code> <code>error.log</code>:</p>
<pre tabindex="0"><code>2021/09/11 15:15:33 [crit] 35499#35499: *136 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 123.123.123.123, server: 0.0.0.0:443
</code></pre><p><code>geoiplookup</code> told me that the client&rsquo;s IP resolved to a domain that belongs to Google.</p>
<p>After some more investigating using <code>tshark</code> and <a href="https://osqa-ask.wireshark.org/questions/62098/how-to-find-out-which-ssl-cipher-suite-is-being-used/">wireshark</a>, I suspected that the cause might be some TLS cipher incompatibility. So, I tried changing this setting in <code>nginx</code>:</p>
<pre tabindex="0"><code>ssl_prefer_server_ciphers on;
</code></pre><p>which Let&rsquo;s Encrypt sets to off. And it worked. It worked well enough to allow me to submit the sitemap. Although the initial response was that it still couldn&rsquo;t fetch the file, I could see in the logs that it was getting it and next time I checked I saw the green success sign.</p>
<p>However, this is more of a workaround. I still get such TLS error messages in the logs from Google&rsquo;s domains, although the crawler works properly. I am convinced that it isn&rsquo;t so much a problem in my configuration, since I had it with both vanilla Let&rsquo;s Encrypt and Caddy&rsquo;s automatic HTTPS configuration.</p>
<p>The real cause is probably that Googlebot, having to index the whole Internet, supports a large number of old and insecure ciphers. Some of them may be causing a problem in new configurations. I could possibly identify exactly which cipher causes the problem and disable it but I would rather have them take a look at this issue, which may be affecting others as well.</p>
<h2 id="redirecting-a-domain-and-bing">Redirecting a domain and Bing</h2>
<p>Bing does things a bit differently. For example, it doesn&rsquo;t distinguish between <em>http://</em> and <em>https://</em> domains and, even more confusingly, it doesn&rsquo;t distinguish between <em>www</em> and <em>non-www</em> domains. At least that&rsquo;s my understanding. I wonder how it handles multiple subdomains.</p>
<p>It also seems to be getting confused by redirections of files that really belong to a single domain, such as <code>robots.txt</code> and <code>sitemap.xml</code>. If I understand correctly, Googlebot ignores such redirections (at least it does for <code>robots.txt</code>) while Bingbot follows them and&hellip;</p>
<p>I was trying to redirect this site from its old domain to its new domain. The first approach was to simply 301 redirect everything. Using Change Of Address in Google Search Console was sufficient to redirect the old pages to their new location and to add new content as well. Bing, on the other hand, has removed its equivalent Site Move Tool. The old pages weren&rsquo;t being recrawled to discover the redirections, and new pages on the new domain were ignored.</p>
<p>After further investigation/hair pulling/trying various things, I came to the conclusion that this was caused by the redirection of <code>sitemap.xml</code> and <code>robots.txt</code> (which includes the URL of <code>sitemap.xml</code>). So I tried redirecting everything except for these files, which were served as they were in the old domain. Here&rsquo;s an example for <code>nginx</code>:</p>
<pre tabindex="0"><code># Redirect old domain: https://www.olddomain.com -&gt; https://www.newdomain.com (canonical)
# except for robots.txt and sitemap.xml
server {
    server_name www.olddomain.com;

    access_log /var/log/nginx/olddomain.log;

    location / {
        return 301 https://www.newdomain.com$request_uri;
    }

    location /robots.txt {
        root /var/www/www.olddomain.com;
    }

    location /sitemap.xml {
        root /var/www/www.olddomain.com;
    }

    # ...
}
</code></pre><p>It gets more complicated because of <em>http://</em>, <em>https://</em>, <em>non-www</em> and <em>www</em> redirections, but I decided to redirect everything old to the old canonical and from there to the new canonical, with the exceptions that were mentioned.</p>
<p>So we have two versions of <code>sitemap.xml</code>: one served from the old domain with the old URLs, and the new one. After reindexing some pages and waiting, it seems that it got unstuck. The old pages have been recrawled, Bingbot found the redirections and dropped the old URLs, and the new pages are slowly getting indexed as well. I will have to wait a bit more to be entirely sure because, apparently, Bingbot crawls <em>slowly</em> and <em>selectively</em>.</p>
]]></content>
        </item>
        
        <item>
            <title>SEO for Hugo Static Site Generator</title>
            <link>https://www.dimoulis.net/posts/seo-for-hugo-static-site-generator/</link>
            <pubDate>Sun, 12 Sep 2021 13:54:12 +0300</pubDate>
            
            <guid>https://www.dimoulis.net/posts/seo-for-hugo-static-site-generator/</guid>
            <description>📷Austin Chan
I used to not think much about SEO. I thought it was about marketing tricks and misleading users. But putting all that aside, in the end, you do want people to find your site and this means making your site crawlable by the search engine bots such as Googlebot and Bingbot. You have to help them a little do their part.
robots.txt introduces your site to search engine bots Hugo does create a basic robots.</description>
            <content type="html"><![CDATA[
<figure class="post-cover"><picture>
        <source srcset="austin-chan-ukzHlkoz1IE-unsplash.webp 1x, austin-chan-ukzHlkoz1IE-unsplash-2x.webp 2x" type="image/webp">
        <source srcset="austin-chan-ukzHlkoz1IE-unsplash.jpg 1x, austin-chan-ukzHlkoz1IE-unsplash-2x.jpg 2x" type="image/jpeg">
        <img src="austin-chan-ukzHlkoz1IE-unsplash.jpg"
             alt="Sign that gets the attention"/>
    </picture><figcaption class="left small">
            <p>
                    <a href="https://unsplash.com/photos/ukzHlkoz1IE?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditShareLink">📷Austin Chan</a></p>
        </figcaption>
</figure>

<p>I used to not think much about SEO. I thought it was about marketing tricks and misleading users. But putting all that aside, in the end, you do want people to find your site and this means making your site crawlable by the search engine bots such as Googlebot and Bingbot. You have to help them a little do their part.</p>
<h2 id="robotstxt-introduces-your-site-to-search-engine-bots">robots.txt introduces your site to search engine bots</h2>
<p>Hugo does create a basic <code>robots.txt</code> by default which allows all crawlers access to everything. It is suboptimal though and we can provide some more information to help search engine bots find their way around our site.</p>
<p>Your Hugo theme likely creates some taxonomy index pages for tags and categories. You probably don&rsquo;t mean to include these pages in the search results, but rather the pages they link to. Another problem these automatically generated pages may cause is slowing down the crawling of your site, because search engines allocate a crawling budget to you; you had better make the best use of it. Finally, search engines just don&rsquo;t like duplicate pages and may deduce that your site is of poor quality. I assume you have a /posts/ or similar page anyway, with links of interest listed there.</p>
<p>Another recommended piece of information to put in <code>robots.txt</code> is the sitemap of your site. A sitemap is the list of URLs of your site, so that a crawler doesn&rsquo;t have to discover them by itself. Hugo also generates a <code>sitemap.xml</code> for you, but this isn&rsquo;t mentioned in the <code>robots.txt</code> file it creates.</p>
<p>The easiest way to create a custom <code>robots.txt</code> file is to disable Hugo&rsquo;s automatic generation. Change this setting in <code>config.toml</code>:</p>
<pre tabindex="0"><code>enableRobotsTXT = false
</code></pre><p>And then we can put our custom <code>robots.txt</code> in the <code>/static</code> folder:</p>
<pre tabindex="0"><code>User-agent: *
Disallow: /tags/
Disallow: /categories/

Sitemap: https://www.example.com/sitemap.xml
</code></pre><p>Next time you run <code>hugo</code>, this file will be copied as-is in the <code>/public</code> folder.</p>
<h2 id="descriptions-because-first-impressions-matter">Descriptions, because first impressions matter</h2>
<p>After the title of your page and its URL, the next most important thing to consider from a SEO perspective is its description. Although it will not itself affect the ranking of your page, it will affect the click-through rate, since a better written description can attract more clicks. Keep in mind that search engines may or may not actually use your description if it&rsquo;s too short or misleading. The descriptions are created by your theme, and the generated HTML looks like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-html" data-lang="html"><span style="display:flex;"><span>&lt;<span style="color:#ff79c6">meta</span> <span style="color:#50fa7b">name</span><span style="color:#ff79c6">=</span><span style="color:#f1fa8c">&#34;description&#34;</span> <span style="color:#50fa7b">content</span><span style="color:#ff79c6">=</span><span style="color:#f1fa8c">&#34;I am a description.&#34;</span>&gt;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>&lt;<span style="color:#ff79c6">meta</span> <span style="color:#50fa7b">itemprop</span><span style="color:#ff79c6">=</span><span style="color:#f1fa8c">&#34;description&#34;</span> <span style="color:#50fa7b">content</span><span style="color:#ff79c6">=</span><span style="color:#f1fa8c">&#34;I am a description.&#34;</span>&gt;
</span></span><span style="display:flex;"><span>&lt;<span style="color:#ff79c6">meta</span> <span style="color:#50fa7b">name</span><span style="color:#ff79c6">=</span><span style="color:#f1fa8c">&#34;twitter:description&#34;</span> <span style="color:#50fa7b">content</span><span style="color:#ff79c6">=</span><span style="color:#f1fa8c">&#34;I am a description.&#34;</span>/&gt;
</span></span><span style="display:flex;"><span>&lt;<span style="color:#ff79c6">meta</span> <span style="color:#50fa7b">property</span><span style="color:#ff79c6">=</span><span style="color:#f1fa8c">&#34;og:description&#34;</span> <span style="color:#50fa7b">content</span><span style="color:#ff79c6">=</span><span style="color:#f1fa8c">&#34;I am a description.&#34;</span> /&gt;
</span></span><span style="display:flex;"><span>...
</span></span></code></pre></div><p>The first line is the &ldquo;proper&rdquo; description. The next lines are generated by <a href="https://gohugo.io/templates/internal/">Hugo&rsquo;s internal templates</a> and in particular <code>&lt;meta itemprop=&quot;description&quot; ...&gt;</code>, defined in <code>_internal/schema.html</code>, seems to be preferred over <code>&lt;meta name=&quot;description&quot; ...&gt;</code> by Bing and the other engines it shares data with <em>(more on that later)</em>.</p>
<p>What ends up in the description is up to the theme. If you didn&rsquo;t specify a description in the front matter of your post, it is likely that it will try to generate a summary and use it as a description. The problem here is that autogenerated summaries aren&rsquo;t very good, since they are usually just the first lines of your post; this may not be what you intended. So I suggest you have a look at the HTML your theme generates and, if it&rsquo;s not satisfactory, specify a description of your own in the front matter. <a href="https://www.contentkingapp.com/academy/meta-tags/#meta-description">It is suggested</a> that the description be 70–155 characters so that it fits in the search engine results.</p>
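<p>That length window is easy to check from the shell before committing a post. A small sketch (the sample description is made up for illustration):</p>

```shell
# Check a front-matter description against the suggested 70-155 character window.
desc="A hands-on guide to serving WebDAV behind an Nginx reverse proxy, with Apache as the backend."
len=${#desc}   # character count of the description
if [ "$len" -ge 70 ] && [ "$len" -le 155 ]; then
  echo "OK: $len characters"
else
  echo "out of range: $len characters"
fi
```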
<h2 id="dont-forget-about-bing">Don&rsquo;t forget about Bing</h2>
<p>After putting the URL of your sitemap in <code>robots.txt</code> it is still a good idea to submit it yourself to Google Search Console. This way, you can monitor the crawling of your page by Googlebot and catch any errors. You can also keep an eye on which searches drive traffic to your site.</p>
<p>When it comes to search engines, Bing is a distant #2. But it&rsquo;s still important, because its crawler Bingbot also provides data to <a href="https://duckduckgo.com/">Duckduckgo</a> and possibly other lesser known engines such as <a href="https://www.qwant.com">Qwant</a> and <a href="https://www.ecosia.org">Ecosia</a>. Therefore it makes sense to make sure that Bingbot can crawl your site correctly.</p>
<p>Bing Webmaster Tools acknowledges that you probably used Google Search Console before coming to it, so it makes it easy to log in with your Google account (along with other options, such as a Microsoft account). After that, you are given the option to import the domains and sitemaps you have set up in Google Search Console. I must say it all worked flawlessly and effortlessly. <del>Plus I didn&rsquo;t have to deal with certain bugs of Google Search Console.</del> You also get options similar to the ones Google Search Console gives you, such as URL Inspection and Performance metrics.</p>
<h2 id="inform-search-engines-about-site-changes">Inform search engines about site changes</h2>
<p>After making changes to your site and posting something new, you can let the search engines know that the sitemap has changed. An automatic way of doing this is pinging them with the sitemap URL.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span><span style="color:#6272a4"># Ping Google about changes in the sitemap</span>
</span></span><span style="display:flex;"><span>curl <span style="color:#f1fa8c">&#34;https://www.google.com/ping?sitemap=https://www.example.com/sitemap.xml&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4"># Ping Bing about changes in the sitemap</span>
</span></span><span style="display:flex;"><span>curl <span style="color:#f1fa8c">&#34;https://www.bing.com/ping?sitemap=https://www.example.com/sitemap.xml&#34;</span>
</span></span></code></pre></div><p>You can include it in your deployment method; left as an exercise to the reader.</p>
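<p>As a sketch of what such a deployment hook could look like (the sitemap URL is the example one from above; replace it with your own, and uncomment the <code>curl</code> line to actually ping):</p>

```shell
#!/bin/sh
# Hypothetical post-deploy hook: ping search engines with the sitemap URL.
# The sitemap address below is an example; replace it with your own.
SITEMAP="https://www.example.com/sitemap.xml"

# Build the ping URL for a given endpoint.
ping_url() {
    printf '%s?sitemap=%s' "$1" "$SITEMAP"
}

# After publishing, ping each engine (the curl call is left commented as a sketch):
for endpoint in https://www.google.com/ping https://www.bing.com/ping; do
    # curl -fsS "$(ping_url "$endpoint")" > /dev/null
    echo "Would ping: $(ping_url "$endpoint")"
done
```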
<h2 id="patience">Patience</h2>
<p>Congratulations, you&rsquo;ve come this far and now&hellip; you wait. It can take from a few days to a few weeks until the search engines have crawled your site, and that is assuming nothing unexpected has gone wrong and there are no problems to fix. In the meantime, I suggest that you take the opportunity to practice your Zen Buddhism exercises and take your mind away from all that pointless worrying.</p>
]]></content>
        </item>
        
        <item>
            <title>Tmux Copy Paste With Alacritty</title>
            <link>https://www.dimoulis.net/posts/tmux-copy-paste-with-alacritty/</link>
            <pubDate>Mon, 23 Aug 2021 18:48:13 +0300</pubDate>
            
            <guid>https://www.dimoulis.net/posts/tmux-copy-paste-with-alacritty/</guid>
<description>If you want copy-paste to work between tmux and other apps, you are given instructions for a rather complicated setup: plugins, forwarding X11 over ssh (what about Wayland?).
A simpler way is to use a terminal that supports OSC 52 control sequences. The more common terminals such as konsole and gnome-terminal do not support them, because they can be a security concern (an application running in the terminal &amp;ldquo;stealing&amp;rdquo; your clipboard).</description>
<content type="html"><![CDATA[<p>If you want copy-paste to work between tmux and other apps, you are given instructions for a <a href="https://www.seanh.cc/2020/12/27/copy-and-paste-in-tmux/">rather complicated setup</a>: <a href="https://github.com/tmux-plugins/tmux-yank">plugins</a>, forwarding X11 over ssh (what about Wayland?).</p>
<p>A simpler way is to use a terminal that supports <a href="https://github.com/tmux/tmux/wiki/Clipboard">OSC 52</a> control sequences. The more common terminals such as <code>konsole</code> and <code>gnome-terminal</code> do not support them, because they can be a security concern (an application running in the terminal &ldquo;stealing&rdquo; your clipboard). On the other hand, they are too convenient to pass up&hellip;</p>
<p>Some terminals that do support OSC 52 are <code>xterm</code> (with a configuration option) and <a href="https://github.com/alacritty/alacritty">alacritty</a>.</p>
<p>Alacritty is an OpenGL-accelerated terminal emulator written in Rust. It doesn&rsquo;t support tabs and I think it is intended to be used either with a terminal multiplexer such as <code>tmux</code> or <code>screen</code>, or with a tiling window manager such as <code>i3</code>. It is cross-platform, supporting all major operating systems. On Linux, you can get it from your distribution&rsquo;s package manager. On Windows and macOS, you can <a href="https://github.com/alacritty/alacritty/releases/">grab the binaries</a>.</p>
<p>In any case, all you need is these options in <code>~/.tmux.conf</code>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">set</span> -g default-terminal <span style="color:#f1fa8c">&#34;tmux-256color&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">set</span> -g mouse on
</span></span></code></pre></div><p>And then use the mouse to select a region. It will be copied to clipboard.</p>
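<p>If the selection still doesn&rsquo;t reach the system clipboard, it may be worth checking tmux&rsquo;s <code>set-clipboard</code> option, which controls whether tmux forwards OSC 52 sequences to the outer terminal. Recent tmux versions default it to <code>external</code>, so this line is only needed if the default has been changed:</p>

```shell
# ~/.tmux.conf -- let tmux (and programs running inside it) set the
# clipboard of the outer terminal via OSC 52
set -s set-clipboard on
```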
<h2 id="special-considerations-for-ssh-connections">Special considerations for ssh connections</h2>
<ul>
<li>If you are using <strong>Debian</strong> on the remote, make sure <code>ncurses-term</code> is installed, to provide the extra terminal definitions that include <code>alacritty</code>. In <strong>Ubuntu</strong>, it should be already installed.</li>
<li>Some older distributions such as <strong>Debian buster</strong> did not yet include terminfo entries for <code>alacritty</code>. In that case you can <a href="https://wiki.archlinux.org/title/Alacritty#Terminal_functionality_unavailable_in_remote_shells">configure alacritty</a> to use <code>TERM=xterm-256color</code> instead of <code>TERM=alacritty</code>, by adding these lines to <code>alacritty</code>&rsquo;s configuration file <code>~/.config/alacritty/alacritty.yml</code>:</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#ff79c6">env</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#ff79c6">TERM</span>: xterm-256color
</span></span></code></pre></div><h2 id="making-it-more-comfortable">Making it more comfortable</h2>
<p>You can choose a color scheme <a href="https://github.com/alacritty/alacritty/wiki/Color-schemes">here</a>. In my case, I picked Breeze. I clicked on its arrow and copy-pasted the scheme into a file called <code>~/.config/alacritty/breeze.yml</code>.</p>
<p>I also like F11 to toggle full screen. Putting it all together, append these lines to <code>~/.config/alacritty/alacritty.yml</code>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#ff79c6">import</span>:
</span></span><span style="display:flex;"><span> - ~/.config/alacritty/breeze.yml
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#ff79c6">key_bindings</span>:
</span></span><span style="display:flex;"><span>  - { <span style="color:#ff79c6">key: F11, action</span>: ToggleFullscreen }
</span></span></code></pre></div>]]></content>
        </item>
        
        <item>
            <title>Install Arch Linux On OVH VPS</title>
            <link>https://www.dimoulis.net/posts/install-arch-linux-on-ovh-vps/</link>
            <pubDate>Sun, 22 Aug 2021 17:10:40 +0300</pubDate>
            
            <guid>https://www.dimoulis.net/posts/install-arch-linux-on-ovh-vps/</guid>
            <description>📷Steve Johnson
OVH recently announced that it will not offer the option of Arch Linux and FreeBSD for new installations. Although they recommend that you upload your own image if you are using its Public Cloud, this option is not available to VPS users.
Not a problem though. There are a few ways to install any OS you want and they seem ok with it.
Use a cloud image I have tested this method with Arch and Debian bullseye images and everything worked fine, including IPv6 configured out of the box.</description>
            <content type="html"><![CDATA[
<figure class="post-cover"><picture>
        <source srcset="steve-johnson-lH-UZuoG-aY-unsplash.webp 1x, steve-johnson-lH-UZuoG-aY-unsplash-2x.webp 2x" type="image/webp">
        <source srcset="steve-johnson-lH-UZuoG-aY-unsplash.jpg 1x, steve-johnson-lH-UZuoG-aY-unsplash-2x.jpg 2x" type="image/jpeg">
        <img src="steve-johnson-lH-UZuoG-aY-unsplash.jpg"
             alt="Installation tools"/>
    </picture><figcaption class="left small">
            <p>
                    <a href="https://unsplash.com/photos/lH-UZuoG-aY?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditShareLink">📷Steve Johnson</a></p>
        </figcaption>
</figure>

<p>OVH recently announced that it will not offer the option of Arch Linux and FreeBSD for new installations. Although OVH recommends that you <a href="https://docs.ovh.com/gb/en/public-cloud/uploading-your-own-image/">upload your own image</a> if you are using its Public Cloud, this option is not available to VPS users.</p>
<p>Not a problem though. There are a few ways to install any OS you want and they seem ok with it.</p>
<h2 id="use-a-cloud-image">Use a cloud image</h2>
<p>I have tested this method with Arch and Debian bullseye images and everything worked fine, including IPv6 configured out of the box. The following steps describe an Arch installation but, with small adjustments, they should work for other distributions as well.</p>
<p>Install any available distribution, e.g. Debian.</p>
<p>Log in to the manager and <strong>reboot in rescue mode</strong>. The rescue system is Debian-based and includes backports and ZFS packages if you ever need them.</p>
<p>Use <code>lsblk</code> to orient yourself. The Debian-based rescue system should occupy <code>/dev/sda</code>, while your target disk is <code>/dev/sdb</code>.</p>
<p>Make some extra room for the image:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>mkdir /tmp/mnt
</span></span><span style="display:flex;"><span>mount -t tmpfs tmpfs /tmp/mnt
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">cd</span> /tmp/mnt
</span></span></code></pre></div><p>Download the image with wget:</p>
<ul>
<li>Arch Linux <a href="https://mirror.pkgbuild.com/images/latest/">https://mirror.pkgbuild.com/images/latest/</a>, use <code>cloudimg</code></li>
<li>Debian <a href="https://cloud.debian.org/images/cloud/bullseye/daily/latest/">https://cloud.debian.org/images/cloud/bullseye/daily/latest/</a>, use <code>genericcloud-amd64</code></li>
</ul>
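<p>For example, fetching and verifying the Arch image might look like this (the filename is illustrative; releases change, so check the directory listing for the current one):</p>

```shell
# Download the cloud image and its checksum, then verify before
# writing the image to disk
wget https://mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2
wget https://mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2.SHA256
sha256sum -c Arch-Linux-x86_64-cloudimg.qcow2.SHA256
```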
<p>We need <code>qemu-img</code> from <code>qemu-utils</code>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>apt install qemu-utils
</span></span></code></pre></div><p>Overwrite your target disk /dev/sdb with the image as shown in the <a href="https://wiki.archlinux.org/title/Arch_Linux_on_a_VPS">arch wiki</a>. Notice we are overwriting the whole disk, not just a partition:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>qemu-img convert -f qcow2 -O raw Arch-Linux-x86_64-cloudimg-20210315.17387.qcow2 /dev/sdb
</span></span></code></pre></div><p>The Arch Linux image uses a <code>GPT</code> partitioning scheme, a btrfs partition compressed with zstd and a BIOS compatibility partition.</p>
<p>To make sure you are able to log in after the reboot, it&rsquo;s a good idea to set up a user with <code>sudo</code> abilities and a known password. You may also copy your ssh public key manually.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>sfdisk -l /dev/sdb
</span></span><span style="display:flex;"><span>mount -t btrfs /dev/sdbX /mnt
</span></span><span style="display:flex;"><span>chroot /mnt
</span></span><span style="display:flex;"><span>useradd -m -s /bin/bash -G wheel arch
</span></span><span style="display:flex;"><span>passwd arch
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">echo</span> <span style="color:#f1fa8c">&#34;arch ALL=(ALL) NOPASSWD:ALL&#34;</span> &gt; /etc/sudoers.d/arch
</span></span></code></pre></div><p>Finally <strong>use the manager to reboot</strong> the virtual machine <strong>in normal mode</strong>.</p>
<p>Cloud images contain <code>cloud-guest-utils</code>, which should take care of resizing the partition to fill the disk, setting up networking, etc. If for some reason you can&rsquo;t connect with ssh, you can still try logging in through the KVM console.</p>
<p>If you got so far, then it&rsquo;s time to secure the installation:</p>
<ul>
<li>update</li>
<li>configure ssh to only use <a href="https://wiki.archlinux.org/title/OpenSSH#Force_public_key_authentication">public key authentication</a> rather than password authentication</li>
<li>setup a firewall</li>
<li>etc.</li>
</ul>
<h2 id="final-tuning">Final tuning</h2>
<p>OVH recommends that <code>cloud-guest-utils</code> and <code>qemu-guest-agent</code> are installed when using a custom image. <code>cloud-guest-utils</code> is already installed, so:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>sudo pacman -S qemu-guest-agent
</span></span><span style="display:flex;"><span>sudo systemctl <span style="color:#8be9fd;font-style:italic">enable</span> --now qemu-guest-agent.service
</span></span></code></pre></div><p>Have a look at <code>/etc/default/grub</code>.</p>
<ul>
<li>To make the KVM console work, we need to add <code>console=ttyS0</code> to the boot parameters.</li>
<li>To improve performance, you may be interested in <a href="https://wiki.archlinux.org/title/Btrfs#Compression">disabling compression</a> by removing <code>rootflags=compress-force=zstd</code>.</li>
<li>Provide some entropy to the kernel&rsquo;s random number generator. You may decide to trust the CPU manufacturer&rsquo;s random number generator rather than get warnings in the boot messages about using an uninitialized <code>urandom</code>.</li>
</ul>
<p><em>These are just some recommendations.</em> So, let&rsquo;s modify <code>/etc/default/grub</code> to look like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">GRUB_CMDLINE_LINUX_DEFAULT</span><span style="color:#ff79c6">=</span><span style="color:#f1fa8c">&#34;random.trust_cpu=on&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">GRUB_CMDLINE_LINUX</span><span style="color:#ff79c6">=</span><span style="color:#f1fa8c">&#34;net.ifnames=0 console=ttyS0&#34;</span>
</span></span></code></pre></div><p>And then we can apply the changes.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>sudo grub-mkconfig -o /boot/grub/grub.cfg
</span></span><span style="display:flex;"><span>sudo reboot
</span></span><span style="display:flex;"><span>sudo btrfs filesystem defragment -r /
</span></span></code></pre></div><p>Perhaps you are interested in converting the system to use a flat hierarchy of subvolumes by using steps similar to the ones described <a href="https://fedoramagazine.org/convert-your-filesystem-to-btrfs">here</a>.</p>
<p>Here is a <a href="https://www.dimoulis.net/posts/benchmark-of-postgresql-with-ext4-xfs-btrfs-zfs/">benchmark</a> comparing the performance of Postgresql on btrfs and other filesystems.</p>
]]></content>
        </item>
        
        <item>
            <title>Systemd Service Hardening</title>
            <link>https://www.dimoulis.net/posts/systemd-service-hardening/</link>
            <pubDate>Sat, 21 Aug 2021 15:25:23 +0300</pubDate>
            
            <guid>https://www.dimoulis.net/posts/systemd-service-hardening/</guid>
            <description>Linux provides certain security mechanisms that are used by containers such as Docker, LXD and systemd-nspawn. We can use the same mechanisms to sandbox systemd services shipped by the distribution or the ones we write ourselves. The purpose is to protect the system even if the service is compromised.
Arch Linux maintainers use several of these options in the system unit files that they ship, while Debian and Ubuntu maintainers generally only use the options that the upstream developer has used, if any.</description>
            <content type="html"><![CDATA[<p>Linux provides certain security mechanisms that are used by containers such as Docker, LXD and systemd-nspawn. We can use the same mechanisms to sandbox systemd services shipped by the distribution or the ones we write ourselves. The purpose is to protect the system even if the service is compromised.</p>
<p>Arch Linux maintainers use several of these options in the system unit files that they ship, while Debian and Ubuntu maintainers generally only use the options that the upstream developer has used, if any. For examples, have a look at the systemd unit files for <em>memcached</em> and <em>mariadb</em> in <code>/usr/lib/systemd/system</code>.</p>
<p>These options generally fall in these categories:</p>
<ul>
<li>Filesystem namespace</li>
<li>Other namespaces such as user</li>
<li>Capabilities</li>
<li>Seccomp and system call filters</li>
</ul>
<p>Some of these options have a performance cost, in particular seccomp:</p>
<ul>
<li><a href="https://github.com/systemd/systemd/issues/18370">https://github.com/systemd/systemd/issues/18370</a></li>
<li><a href="https://lore.kernel.org/linux-security-module/c22a6c3cefc2412cad00ae14c1371711@huawei.com/T/">https://lore.kernel.org/linux-security-module/c22a6c3cefc2412cad00ae14c1371711@huawei.com/T/</a>, see Lennart Poettering&rsquo;s comments</li>
</ul>
<p>When security is critical above all else, you could simply enable all available options and take the performance hit. In other cases, it makes sense to balance security and performance, and carefully pick the options that provide the most security benefit for an acceptable performance cost.</p>
<h2 id="get-a-report-on-a-services-security-score">Get a report on a service&rsquo;s security score</h2>
<p>The first step is to <a href="https://wiki.debian.org/ServiceSandboxing">generate a report</a> on the service:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>sudo systemd-analyze security mydaemon.service --no-pager
</span></span></code></pre></div><p>This will give us a total score. Notice that each available option carries a weight, according to its estimated impact on security.</p>
<p>Then we can <a href="https://wiki.archlinux.org/title/Systemd#Drop-in_files">add a snippet</a> with extra options using:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>sudo systemctl edit mydaemon.service
</span></span></code></pre></div><h2 id="list-of-options">List of options</h2>
<p>I had a look at the source of <a href="https://github.com/systemd/systemd-stable/archive/refs/tags/v248.3.tar.gz">systemd v248.3</a>.</p>
<h3 id="filesystem-namespace-low-cost-high-impact">Filesystem namespace: low cost, high impact</h3>
<p><code>src/core/namespace.h</code></p>
<ul>
<li>ProtectHome</li>
<li>ProtectSystem</li>
<li>ProtectProc</li>
<li>ProcSubset</li>
<li>ProtectKernelTunables</li>
<li>ProtectKernelModules</li>
<li>BindMount</li>
<li>MountImage</li>
</ul>
<h3 id="capabilities-low-cost-high-impact-man-prctlhttpsman7orglinuxman-pagesman2prctl2html">Capabilities: low cost, high impact (<a href="https://man7.org/linux/man-pages/man2/prctl.2.html">man prctl</a>)</h3>
<ul>
<li>CapabilityBoundingSet</li>
<li>AmbientCapabilities</li>
<li>NoNewPrivileges</li>
</ul>
<h3 id="other-options-to-consider-for-a-chroot">Other options to consider for a chroot:</h3>
<p>These are also namespace-based, thus low cost.</p>
<ul>
<li>RootDirectory</li>
<li>MountAPIVFS</li>
<li>PrivateUsers</li>
<li>DynamicUser</li>
</ul>
<h3 id="seccomp-high-cost-varying-impact">Seccomp (high cost, varying impact)</h3>
<p><code>src/core/execute.c</code>
<code>#if HAVE_SECCOMP ... #endif</code></p>
<ul>
<li>SystemCallFilter=</li>
<li>SystemCallLog=</li>
<li>SystemCallArchitectures=</li>
<li>RestrictAddressFamilies=</li>
<li>MemoryDenyWriteExecute=</li>
<li>RestrictRealtime=</li>
<li>RestrictSUIDSGID=</li>
<li>ProtectKernelTunables=</li>
<li>ProtectKernelModules=</li>
<li>ProtectKernelLogs=</li>
<li>ProtectClock=</li>
<li>PrivateDevices=</li>
<li>RestrictNamespaces=</li>
<li>LockPersonality=</li>
</ul>
<p>I think it makes sense to use most of the filesystem sandboxing options and capabilities, and then pick the seccomp options that will have the most impact on the security of the system, using the report of <code>systemd-analyze security</code> as a guide.</p>
<p>See <a href="https://www.freedesktop.org/software/systemd/man/systemd.exec.html">systemd service configuration options</a> for a full list of currently available options. Some options may not be supported by older systemd versions. See <code>analyze/analyze-security.c</code> for the weight of options or the report of <code>systemd-analyze security</code>.</p>
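<p>The whole workflow then becomes an iterate-and-measure loop; as a sketch (the service name is hypothetical):</p>

```shell
# 1. Check the current exposure score of the service
sudo systemd-analyze security mydaemon.service --no-pager

# 2. Add sandboxing options in a drop-in (opens an editor)
sudo systemctl edit mydaemon.service

# 3. Apply the new options and verify the service still works
sudo systemctl restart mydaemon.service
sudo systemctl status mydaemon.service

# 4. Re-run the analysis and compare the score
sudo systemd-analyze security mydaemon.service --no-pager
```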
<h2 id="example-systemd-unit">Example systemd unit</h2>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span><span style="color:#6272a4"># Tuned after:</span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4"># sudo systemd-analyze security caddy.service --no-pager</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4">## Filesystem namespace options (cheap)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4"># Mount most things read-only and set read-write paths</span>
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">ProtectSystem</span><span style="color:#ff79c6">=</span>strict
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">ReadWritePaths</span><span style="color:#ff79c6">=</span>/var/lib/caddy /var/log/caddy
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">InaccessiblePaths</span><span style="color:#ff79c6">=</span>...
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">ProtectHome</span><span style="color:#ff79c6">=</span><span style="color:#8be9fd;font-style:italic">true</span>
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">PrivateTmp</span><span style="color:#ff79c6">=</span><span style="color:#8be9fd;font-style:italic">true</span>
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">ProtectProc</span><span style="color:#ff79c6">=</span>invisible
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">ProtectKernelTunables</span><span style="color:#ff79c6">=</span><span style="color:#8be9fd;font-style:italic">true</span>
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">ProtectControlGroups</span><span style="color:#ff79c6">=</span><span style="color:#8be9fd;font-style:italic">true</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4">## Capabilities (man prctl) (cheap)</span>
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">NoNewPrivileges</span><span style="color:#ff79c6">=</span><span style="color:#8be9fd;font-style:italic">true</span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4">#CapabilityBoundingSet=</span>
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">CapabilityBoundingSet</span><span style="color:#ff79c6">=</span>CAP_NET_BIND_SERVICE
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4">## Seccomp (expensive)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4"># High impact</span>
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">RestrictAddressFamilies</span><span style="color:#ff79c6">=</span>AF_UNIX AF_INET AF_INET6
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">RestrictNamespaces</span><span style="color:#ff79c6">=</span><span style="color:#8be9fd;font-style:italic">true</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4"># Misc recommended</span>
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">PrivateDevices</span><span style="color:#ff79c6">=</span><span style="color:#8be9fd;font-style:italic">true</span>
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">ProtectKernelModules</span><span style="color:#ff79c6">=</span><span style="color:#8be9fd;font-style:italic">true</span>
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">ProtectKernelLogs</span><span style="color:#ff79c6">=</span><span style="color:#8be9fd;font-style:italic">true</span>
</span></span><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">ProtectClock</span><span style="color:#ff79c6">=</span><span style="color:#8be9fd;font-style:italic">true</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4"># Other</span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4">#RestrictSUIDSGID=true</span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4">#RestrictRealtime=true</span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4">#MemoryDenyWriteExecute=true</span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4">#LockPersonality=true</span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4">#SystemCallArchitectures=native</span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4">#SystemCallFilter=@system-service</span>
</span></span><span style="display:flex;"><span><span style="color:#6272a4">#SystemCallErrorNumber=EPERM</span>
</span></span></code></pre></div>]]></content>
        </item>
        
        <item>
            <title>Benchmark of Ext4, XFS, Btrfs, ZFS With PostgreSQL</title>
            <link>https://www.dimoulis.net/posts/benchmark-of-postgresql-with-ext4-xfs-btrfs-zfs/</link>
            <pubDate>Thu, 19 Aug 2021 10:46:55 +0300</pubDate>
            
            <guid>https://www.dimoulis.net/posts/benchmark-of-postgresql-with-ext4-xfs-btrfs-zfs/</guid>
            <description>Most VPS hosting providers use Ext4 or XFS out of the box. There are other options of course and interestingly, the official Arch Linux cloud image uses Btrfs, with zstd compression. I like Btrfs and I think it&amp;rsquo;s a good match for a rolling release distribution targeted at developers such as Arch (and OpenSuse and recently Fedora).
Pros Snapshots to easily rollback when/if an update goes wrong Easy backups Container friendly Cons Weak random read/write performance when used for a database or a virtual machine host For development, the pros outweigh the cons and I wouldn&amp;rsquo;t mind the extra features on a web server.</description>
            <content type="html"><![CDATA[<p>Most VPS hosting providers use Ext4 or XFS out of the box. There are other options of course and interestingly, the <a href="https://mirror.pkgbuild.com/images/latest/">official Arch Linux cloud image</a> uses Btrfs, with zstd compression. I like Btrfs and I think it&rsquo;s a good match for a rolling release distribution targeted at developers such as Arch (and OpenSuse and recently Fedora).</p>
<ul>
<li>Pros
<ul>
<li>Snapshots to easily rollback when/if an update goes wrong</li>
<li>Easy backups</li>
<li>Container friendly</li>
</ul>
</li>
<li>Cons
<ul>
<li>Weak random read/write performance when used for a database or a virtual machine host</li>
</ul>
</li>
</ul>
<p>For development, the pros outweigh the cons, and I wouldn&rsquo;t mind the extra features on a web server. However, in that case, database performance is often the limiting factor. So I was curious not whether Btrfs would outperform Ext4 and XFS, but whether it is a viable choice.</p>
<p>Then there is ZFS, which is used by FreeBSD for good reasons (its previous filesystem UFS is about as good as Ext2) and is also promoted by Ubuntu these days.</p>
<p>Here are some benchmarks. First, the tests were run locally in a VM configured close to the specs of the VPS I was planning to use. Then some benchmarks on the VPS itself.</p>
<h2 id="tests-on-a-vm">Tests on a VM</h2>
<p>I used these <a href="https://severalnines.com/blog/benchmarking-postgresql-performance">instructions</a> to benchmark Postgresql; it&rsquo;s a basic benchmark with 10 and 100 clients. However, Btrfs and ZFS were tuned for a database as one would normally do.</p>
<ul>
<li>Specs
<ul>
<li>KVM, 2 CPUs, 4 GB RAM, 1 GB disk</li>
</ul>
</li>
<li>Linux tuning
<ul>
<li>noatime</li>
<li>chattr +C for btrfs</li>
<li>ZFS tuned for Postgresql according to the <a href="https://wiki.archlinux.org/title/ZFS#Databases">arch wiki</a></li>
<li>No other Postgresql tuning</li>
</ul>
</li>
<li>FreeBSD tuning
<ul>
<li>Filesystem tuned similarly to Linux, but without changing the primarycache setting, according to
<ul>
<li><a href="https://redbyte.eu/en/blog/postgresql-benchmark-freebsd-centos-ubuntu-debian-opensuse/">https://redbyte.eu/en/blog/postgresql-benchmark-freebsd-centos-ubuntu-debian-opensuse/</a></li>
<li><a href="https://news.ycombinator.com/item?id=16022258">https://news.ycombinator.com/item?id=16022258</a></li>
</ul>
</li>
<li>ZFS uses lz4 by default</li>
</ul>
</li>
</ul>
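<p>The benchmark itself boils down to <code>pgbench</code> runs along these lines (the scale factor, thread count, and duration here are illustrative, not necessarily the exact values from the linked instructions):</p>

```shell
# Initialize the pgbench tables in a test database,
# then run the benchmark with 10 and 100 clients
pgbench -i -s 100 benchdb
pgbench -c 10 -j 2 -T 600 benchdb
pgbench -c 100 -j 2 -T 600 benchdb
```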
<p>Tests were run using a raw preallocated image on an SSD Btrfs filesystem.</p>
<ul>
<li><span   style="color:Purple">
    TPS
</span> include connection establishing, rounded, <em>higher is better</em>.</li>
<li><span   style="color:SlateBlue">
    Latency
</span> in ms, <em>lower is better</em>.</li>
</ul>
<p><strong>10 clients</strong></p>
<table>
<thead>
<tr>
<th>Filesystem</th>
<th>Latency (ms)</th>
<th>TPS</th>
</tr>
</thead>
<tbody>
<tr>
<td>Linux ext4</td>
<td>26</td>
<td>375</td>
</tr>
<tr>
<td>Linux xfs</td>
<td>25-28</td>
<td>360-393</td>
</tr>
<tr>
<td>Linux btrfs</td>
<td>50-52</td>
<td>192-197</td>
</tr>
<tr>
<td>Linux zfs</td>
<td>30-31</td>
<td>321-333</td>
</tr>
<tr>
<td>FreeBSD zfs</td>
<td>36-37</td>
<td>272-278</td>
</tr>
</tbody>
</table>
<figure class="center"><img src="benchmark-vm-10-clients.svg"
         alt="Benchmark VM 10 clients"/>
</figure>

<p><strong>100 clients</strong></p>
<table>
<thead>
<tr>
<th>Filesystem</th>
<th>Latency (ms)</th>
<th>TPS</th>
</tr>
</thead>
<tbody>
<tr>
<td>Linux ext4</td>
<td>296-303</td>
<td>329-336</td>
</tr>
<tr>
<td>Linux xfs</td>
<td>294-312</td>
<td>319-339</td>
</tr>
<tr>
<td>Linux btrfs</td>
<td>539-581</td>
<td>172-185</td>
</tr>
<tr>
<td>Linux zfs</td>
<td>368</td>
<td>271</td>
</tr>
<tr>
<td>FreeBSD zfs</td>
<td>404</td>
<td>247</td>
</tr>
</tbody>
</table>
<figure class="center"><img src="benchmark-vm-100-clients.svg"
         alt="Benchmark VM 100 clients"/>
</figure>

<h2 id="tests-on-a-vps">Tests on a VPS</h2>
<p>A variety of tests were run:</p>
<ul>
<li>Arch Linux using its official cloud image (Btrfs, no compression, metadata and system profiles set to DUP, noatime, autodefrag, <a href="https://www.qemu.org/2021/01/19/virtio-blk-scsi-configuration/">virtio-block</a>)</li>
<li>Ubuntu as set up by the VPS provider (<a href="https://www.qemu.org/2021/01/19/virtio-blk-scsi-configuration/">virtio-scsi</a>)</li>
<li>Arch Linux using a custom LVM + Btrfs layout with Ext4 for the database</li>
<li>Arch Linux with the -lts kernel (<em>no preemption</em>) and Ext4, XFS, Btrfs filesystems</li>
</ul>
<p><strong>Why Btrfs over LVM?</strong> It&rsquo;s not as crazy as it sounds and it&rsquo;s mentioned in the <a href="https://btrfs.wiki.kernel.org/index.php/FAQ#Btrfs_has_subvolumes.2C_does_this_mean_I_don.27t_need_a_logical_volume_manager_and_I_can_create_a_big_Btrfs_filesystem_on_a_raw_partition.3F">btrfs wiki</a> as a possibility. The idea is to combine the best parts of a COW and a traditional filesystem at the cost of extra complexity.</p>
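<p>As a sketch, the layout was along these lines (volume group name and sizes are illustrative):</p>

```shell
# One volume group; Btrfs for the root filesystem (snapshots,
# easy backups) and a dedicated ext4 volume for the database.
pvcreate /dev/vda2
vgcreate vg0 /dev/vda2
lvcreate -L 20G -n root vg0
lvcreate -L 10G -n pgdata vg0
mkfs.btrfs /dev/vg0/root
mkfs.ext4 /dev/vg0/pgdata
```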
<p><strong>10 clients</strong></p>
<table>
<thead>
<tr>
<th>Setup</th>
<th>virtio</th>
<th>Latency (ms)</th>
<th>TPS</th>
</tr>
</thead>
<tbody>
<tr>
<td>Arch btrfs</td>
<td>virtio-block</td>
<td>23-24</td>
<td>421-439</td>
</tr>
<tr>
<td>Ubuntu ext4</td>
<td>virtio-scsi</td>
<td>13</td>
<td>756-787</td>
</tr>
<tr>
<td>Arch LVM + btrfs + ext4</td>
<td>virtio-scsi</td>
<td>13-15</td>
<td>665-745</td>
</tr>
<tr>
<td>Arch lts ext4</td>
<td>virtio-scsi</td>
<td>17</td>
<td>568-583</td>
</tr>
<tr>
<td>Arch lts xfs</td>
<td>virtio-scsi</td>
<td>15</td>
<td>648-659</td>
</tr>
<tr>
<td>Arch lts btrfs</td>
<td>virtio-scsi</td>
<td>21</td>
<td>460-465</td>
</tr>
<tr>
<td>Arch btrfs (single metadata)</td>
<td>virtio-scsi</td>
<td>20</td>
<td>505</td>
</tr>
</tbody>
</table>
<figure class="center"><img src="benchmark-vps-10-clients.svg"
         alt="Benchmark VPS 10 clients"/>
</figure>

<p><strong>100 clients</strong></p>
<table>
<thead>
<tr>
<th>Setup</th>
<th>virtio</th>
<th>Latency (ms)</th>
<th>TPS</th>
</tr>
</thead>
<tbody>
<tr>
<td>Arch btrfs</td>
<td>virtio-block</td>
<td>247-256</td>
<td>390-403</td>
</tr>
<tr>
<td>Ubuntu ext4</td>
<td>virtio-scsi</td>
<td>180-203</td>
<td>492-554</td>
</tr>
<tr>
<td>Arch LVM + btrfs + ext4</td>
<td>virtio-scsi</td>
<td>178-179</td>
<td>556-561</td>
</tr>
<tr>
<td>Arch lts ext4</td>
<td>virtio-scsi</td>
<td>182-183</td>
<td>543-548</td>
</tr>
<tr>
<td>Arch lts xfs</td>
<td>virtio-scsi</td>
<td>184-190</td>
<td>523-542</td>
</tr>
<tr>
<td>Arch lts btrfs</td>
<td>virtio-scsi</td>
<td>240-254</td>
<td>392-416</td>
</tr>
<tr>
<td>Arch btrfs (single metadata)</td>
<td>virtio-scsi</td>
<td>243</td>
<td>410</td>
</tr>
</tbody>
</table>
<figure class="center"><img src="benchmark-vps-100-clients.svg"
         alt="Benchmark VPS 100 clients"/>
</figure>

<h2 id="conclusion">Conclusion</h2>
<p>That is a lot of numbers to absorb, but first a warning: this is a small synthetic test designed to measure one specific aspect of performance, <em>database performance</em>. The real performance of an application will depend on whether it is CPU-bound, network-bound, serving many files, or limited by the database. The only realistic benchmark is one done on a real application under real conditions.</p>
<p>Having said that:</p>
<ul>
<li>Ext4 and XFS are the fastest, as expected, but they offer the fewest features compared to the newer filesystems.</li>
<li>Btrfs trails the other options <em>for a database</em> in both latency and throughput, but it is reasonably easy to set up and comes with useful features such as snapshots and easy backups.</li>
<li>ZFS performs OK, at least on a fresh installation. However, I would expect it to become heavily fragmented over time, as all COW filesystems do. It also uses a lot of memory, which is scarce on a VPS. And it&rsquo;s <a href="https://openzfs.github.io/openzfs-docs/Getting%20Started/Arch%20Linux/index.html">complicated to set up</a> if you do it yourself.</li>
<li>The LVM+Btrfs+Ext4 combination performs well, but it is not supported by <code>cloud-utils</code>, so I had to do the resizing manually.</li>
</ul>
<p>In other words, every choice is a compromise and one has to pick according to their priorities.</p>
]]></content>
        </item>
        
        <item>
            <title>Static Compilation of Go Programs</title>
            <link>https://www.dimoulis.net/posts/static-compilation-of-go-programs/</link>
            <pubDate>Wed, 18 Aug 2021 18:48:45 +0300</pubDate>
            
            <guid>https://www.dimoulis.net/posts/static-compilation-of-go-programs/</guid>
            <description>One of the advertised features of Go is its ability to build self-contained static executables. This gives us the ability to develop on Arch and deploy on Debian, or even Windows and ARM platforms.
As it turns out, in some cases Go does in fact create dynamically linked executables that expect specific versions of libraries. These are not portable. This happens because Go may use the C compiler even when we do not expect it.</description>
            <content type="html"><![CDATA[<p>One of the advertised features of Go is its ability to build self-contained static executables. This gives us the ability to develop on Arch and deploy on Debian, or even Windows and ARM platforms.</p>
<p>As it turns out, in some cases Go does in fact create dynamically linked executables that expect specific versions of libraries. These are not portable. This happens because Go may use the C compiler even when we do not expect it.</p>
<h2 id="why-does-this-happen">Why does this happen?</h2>
<pre tabindex="0"><code>$ ldd web
        linux-vdso.so.1 (0x00007fffce3d2000)
        libpthread.so.0 =&gt; /usr/lib/libpthread.so.0 (0x00007f0d8f33d000)
        libc.so.6 =&gt; /usr/lib/libc.so.6 (0x00007f0d8f171000)
        /lib64/ld-linux-x86-64.so.2 =&gt; /usr/lib64/ld-linux-x86-64.so.2 (0x00007f0d8f385000)
</code></pre><p>The developer of <a href="https://www.goatcounter.com/">GoatCounter</a> <a href="https://www.arp242.net/static-go.html">pointed out</a> that certain Go standard library packages use C bindings to provide extra functionality:</p>
<ul>
<li>net</li>
<li>os/user</li>
</ul>
<p>When these are used, directly or indirectly, cgo is pulled in and the executable is dynamically linked by default. However, we can ask these packages to stick to their pure Go implementations.</p>
<h2 id="static-compilation-recipe">Static compilation recipe</h2>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>go build -tags osusergo
</span></span><span style="display:flex;"><span>go build -tags netgo
</span></span><span style="display:flex;"><span>go build -tags osusergo,netgo
</span></span></code></pre></div><p>These build tags instruct the corresponding package to use its pure Go implementation, giving us the static executable we expect.</p>
<pre tabindex="0"><code>$ ldd web 
        not a dynamic executable
</code></pre><p>I think that&rsquo;s good enough for me, but there are other options to consider:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span><span style="color:#8be9fd;font-style:italic">CGO_ENABLED</span><span style="color:#ff79c6">=</span><span style="color:#bd93f9">0</span> go build
</span></span></code></pre></div><p>This should have the same result. Finally, if we actually do intend to use the C library, we can pass <code>-static</code> through to the external linker:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#282a36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-shell" data-lang="shell"><span style="display:flex;"><span>go build -ldflags<span style="color:#ff79c6">=</span><span style="color:#f1fa8c">&#34;-extldflags=-static&#34;</span>
</span></span></code></pre></div><p>But in that case the executable will not be able to pick up any security updates of the linked libraries. We still have the option of building a dynamically linked executable in a container.</p>
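<p>A minimal sketch of the container route, assuming Docker and illustrative image tags: build and run on the same base image, so the dynamically linked binary finds matching library versions.</p>

```shell
# Multi-stage build: builder and runtime share the same Debian release.
cat > Dockerfile <<'EOF'
FROM golang:1.17-bullseye AS build
WORKDIR /src
COPY . .
RUN go build -o /web .

FROM debian:bullseye-slim
COPY --from=build /web /usr/local/bin/web
CMD ["web"]
EOF
docker build -t web .
```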
]]></content>
        </item>
        
    </channel>
</rss>
