<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Storage on Eric Daly&#39;s Blog</title>
    <link>https://blog.dalydays.com/tags/storage/</link>
    <description>Recent content in Storage on Eric Daly&#39;s Blog</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Tue, 21 Jan 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://blog.dalydays.com/tags/storage/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Kubernetes Storage - OpenEBS Replicated Storage Mayastor</title>
      <link>https://blog.dalydays.com/post/kubernetes-storage-with-openebs/</link>
      <pubDate>Tue, 21 Jan 2025 00:00:00 +0000</pubDate>
      
      <guid>https://blog.dalydays.com/post/kubernetes-storage-with-openebs/</guid>
      <description>Let&amp;rsquo;s walk through deploying OpenEBS Replicated Storage with the Mayastor engine on Talos Linux!</description>
      <content:encoded><![CDATA[<h1 id="intro-and-prerequisites">Intro and Prerequisites</h1>
<p>In a previous post, I mentioned that I struggled to get OpenEBS working on Talos and went with democratic-csi instead. In recent weeks, I decided to revisit this and get OpenEBS replicated storage working so I could evaluate it in my cluster. I now have multiple disks I can dedicate to my Kubernetes cluster, and I wanted to avoid the single point of failure of running democratic-csi against a TrueNAS VM.</p>
<p>If you are following along, I will assume you are familiar with deploying Talos Linux itself and have talosctl installed with an existing cluster running. If you need more details on how to do that, check out <a href="https://blog.dalydays.com/post/kubernetes-homelab-series-part-1-talos-linux-proxmox/">part 1 of my Kubernetes homelab series</a>.</p>
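<p>Before starting, a quick sanity check that talosctl and kubectl can both reach the cluster (this assumes your talosconfig and kubeconfig are already set up from the earlier posts):</p>
<ul>
<li><code>talosctl version</code></li>
<li><code>kubectl get nodes</code></li>
</ul>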
<h1 id="dedicated-storage-node">Dedicated Storage Node</h1>
<p>It&rsquo;s not absolutely necessary to use a dedicated storage node. I&rsquo;m using one because I want to pass a disk on each physical host directly to a storage VM, I want to keep storage somewhat isolated from the other worker nodes, and I can spare the few extra resources for this purpose. If you want to use existing worker nodes, just follow this process on your existing nodes instead of creating new ones.</p>
<h2 id="create-new-talos-nodes">Create New Talos Node(s)</h2>
<ul>
<li>Create a VM in Proxmox with 4GB RAM and 4 vCPU cores. 2GB RAM is not enough: the hugepages you will enable reserve 2GiB on their own (1024 &times; 2MiB pages), so you would see oom-kills. You also need a dedicated CPU core just for the io-engine on top of everything else that runs; when I tried 2 vCPUs, the io-engine pod wouldn&rsquo;t schedule due to insufficient resources. I named my first one talos-storage-1
<ul>
<li>Attach a Talos ISO to the CD ROM and boot from it</li>
<li>Get the IP address from the node</li>
</ul>
</li>
<li>Install Talos using the worker.yaml template used for other worker nodes (you may want to get a current or updated version of Talos from the image factory):
<ul>
<li><code>talosctl apply-config --insecure -n 10.0.50.135 --file _out/worker.yaml</code></li>
</ul>
</li>
<li>Apply a patch to set a static IP and node label, e.g.</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c"># ./patches/storage1.patch</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">machine</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">network</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">hostname</span><span class="p">:</span><span class="w"> </span><span class="l">talos-storage-1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">interfaces</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">deviceSelector</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">busPath</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;0*&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">addresses</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span>- <span class="m">10.0.50.31</span><span class="l">/24</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">routes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span>- <span class="nt">network</span><span class="p">:</span><span class="w"> </span><span class="m">0.0.0.0</span><span class="l">/0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">gateway</span><span class="p">:</span><span class="w"> </span><span class="m">10.0.50.1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">nameservers</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="m">192.168.1.22</span><span class="w">
</span></span></span></code></pre></div><ul>
<li><code>talosctl patch mc -n 10.0.50.135 --patch @patches/storage1.patch</code>
<ul>
<li>I&rsquo;m having trouble here with the node name changing, and I have to manually delete the random name from the cluster:</li>
<li>e.g. <code>kubectl delete node talos-lry-si8</code></li>
<li>Also, you might need to fix the <code>openebs.io/nodename=</code> label if you already have openebs running and are adding/changing nodes
<ul>
<li><code>kubectl edit node talos-storage-1</code> and change the label&rsquo;s value to the current node name</li>
</ul>
</li>
</ul>
</li>
<li>Apply a patch with the machine config OpenEBS needs: hugepages, a nodeLabel marking where the Mayastor io-engine should run, and the <code>/var/local</code> bind mount:</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c"># ./patches/openebs.patch</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">machine</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">sysctls</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">vm.nr_hugepages</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;1024&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">nodeLabels</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">openebs.io/engine</span><span class="p">:</span><span class="w"> </span><span class="l">mayastor</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">extraMounts</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">destination</span><span class="p">:</span><span class="w"> </span><span class="l">/var/local</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l">bind</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">source</span><span class="p">:</span><span class="w"> </span><span class="l">/var/local</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">options</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="l">rbind</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="l">rshared</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="l">rw</span><span class="w">
</span></span></span></code></pre></div><ul>
<li><code>talosctl patch mc -n 10.0.50.31 --patch @patches/openebs.patch</code></li>
<li>If you have an additional disk to use with OpenEBS, you&rsquo;ll need to pass it directly to the Talos node VM. I&rsquo;m using Proxmox
<ul>
<li>SSH into the Proxmox host and find the ID of the disk to pass through. I just run <code>ls -lh /dev/disk/by-id/</code> and grab the whole-disk entry (the one not containing any &quot;_1&quot; or &quot;_part*&quot; suffix), for example <code>/dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_2TB_S59CNM0W635077P</code></li>
<li>Pass the disk directly to the Talos VM, where 511 is the VM ID, assuming you only have one disk already on scsi0: <code>qm set 511 -scsi1 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_2TB_S59CNM0W635077P</code></li>
<li>Checking the hardware tab on VM 511 in Proxmox, you should see this new disk. Double click it and make sure to check &ldquo;Advanced&rdquo;, &ldquo;Discard&rdquo;, and &ldquo;SSD emulation&rdquo;
<ul>
<li>If these settings show in orange, you will need to shut the VM down, then power it back on for the changes to apply. Rebooting won&rsquo;t do it.</li>
</ul>
</li>
<li>Now that the disk has been added, look for it with talosctl: <code>talosctl get disks -n 10.0.50.31</code>
<ul>
<li>In my case I see a disk named <code>sdb</code> which is 2.0TB with model &ldquo;QEMU HARDDISK&rdquo;</li>
</ul>
</li>
<li>Add the disk to the machine config so it is mounted and can be passed to containers with the appropriate privileges. This is required for openebs-io-engine to access the extra disk.</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c"># ./patches/mount-sdb.patch</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">machine</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">disks</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">device</span><span class="p">:</span><span class="w"> </span><span class="l">/dev/sdb</span><span class="w">
</span></span></span></code></pre></div><ul>
<li>Apply: <code>talosctl patch mc -n 10.0.50.31 --patch @patches/mount-sdb.patch</code> - At this point the Talos node will reboot and should come back up healthy in a minute.</li>
<li>View the console or check the dashboard with <code>talosctl dashboard -n 10.0.50.31</code></li>
<li>If you see an error about being unable to mount the disk, the partition being the wrong type, etc., you will need to wipe the disk and create a fresh GPT partition table. As of Talos 1.9.0 this can be done with <code>talosctl wipe disk sdb -n 10.0.50.31</code>; otherwise you would need to do this outside of Talos.
<ul>
<li><code>talosctl wipe disk sdb -n 10.0.50.31</code>, where <code>sdb</code> is the device; double-check it with <code>talosctl get disks -n 10.0.50.31</code> before wiping</li>
<li>Otherwise, shut down the VM and do it from the Proxmox host: <code>wipefs -a /dev/yourdev</code> (without <code>-a</code>, wipefs only lists signatures), then <code>fdisk /dev/yourdev</code> &gt; <code>g</code> &gt; <code>w</code> (<code>g</code> creates a new GPT table, <code>w</code> writes it to disk). Now power Talos back on and it should do its thing.</li>
<li>Yet another option is to boot the VM from a different Linux ISO and use a tool like GParted. Whatever you like best.</li>
</ul>
</li>
</ul>
</li>
</ul>
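<p>Before moving on, you can also confirm the hugepages reservation took effect after the patch and reboot. Given the <code>vm.nr_hugepages: &quot;1024&quot;</code> sysctl above, <code>HugePages_Total</code> should read 1024:</p>
<ul>
<li><code>talosctl read /proc/meminfo -n 10.0.50.31 | grep -i hugepages</code></li>
</ul>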
<h3 id="lets-verify-our-disk-mount">Let&rsquo;s Verify Our Disk Mount</h3>
<p>When Talos successfully mounts the extra disk, we should see it listed by <code>lsblk</code> with no partitions; we want to hand the raw disk to OpenEBS. To check, run a debug pod on your storage node and inspect the block devices.</p>
<ul>
<li><code>kubectl debug node/talos-storage-1 -it --image=alpine -- /bin/sh</code></li>
<li><code>apk add lsblk</code></li>
<li><code>lsblk</code></li>
<li>Look for your disk, listed at its full capacity with no partitions.</li>
</ul>
<p>Now repeat this whole process for any other Talos nodes you need. I have 3, so I&rsquo;m doing <code>talos-storage-1</code>, <code>talos-storage-2</code> and <code>talos-storage-3</code>.</p>
<h2 id="worker-nodes-also-need-varlocal-mounted">Worker Nodes Also Need /var/local Mounted</h2>
<p>Certain OpenEBS components can be scheduled on any node, so every worker node needs the <code>/var/local</code> bind mount as well.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c"># ./patches/mount-var-local.patch</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">machine</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">kubelet</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">extraMounts</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">destination</span><span class="p">:</span><span class="w"> </span><span class="l">/var/local</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l">bind</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">source</span><span class="p">:</span><span class="w"> </span><span class="l">/var/local</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">options</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span>- <span class="l">rbind</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span>- <span class="l">rshared</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span>- <span class="l">rw</span><span class="w">
</span></span></span></code></pre></div><p>In my case, I applied this to my 3 worker nodes. I don&rsquo;t think a reboot is required, but you can reboot if you want to be safe:</p>
<ul>
<li><code>talosctl patch mc -n 10.0.50.21 --patch @patches/mount-var-local.patch</code></li>
<li><code>talosctl patch mc -n 10.0.50.22 --patch @patches/mount-var-local.patch</code></li>
<li><code>talosctl patch mc -n 10.0.50.23 --patch @patches/mount-var-local.patch</code></li>
</ul>
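<p>With more than a few nodes, repeated patch commands like these can be collapsed into a quick shell loop (the IPs are my worker nodes; substitute your own):</p>
<pre tabindex="0"><code>for node in 10.0.50.21 10.0.50.22 10.0.50.23; do
  talosctl patch mc -n $node --patch @patches/mount-var-local.patch
done
</code></pre>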
<p>To check the other bind mount for <code>/var/local</code>, we have to wait until after deploying OpenEBS, because the mount isn&rsquo;t exercised until a pod is deployed with a hostPath volume at or below this path. Specifically, the <code>openebs-io-engine-*</code> DaemonSet maps to this path.</p>
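<p>Once OpenEBS is installed (next section), one way to see exactly which host paths get mapped is to list the DaemonSet&rsquo;s volumes. This assumes the DaemonSet is named <code>openebs-io-engine</code>, matching the pod names in the listing further down:</p>
<ul>
<li><code>kubectl -n openebs get ds openebs-io-engine -o jsonpath='{range .spec.template.spec.volumes[*]}{.name}{"\t"}{.hostPath.path}{"\n"}{end}'</code></li>
</ul>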
<h1 id="installing-openebs">Installing OpenEBS</h1>
<p>This was a pain to figure out. Documentation from OpenEBS is lacking, and so is documentation from Talos on the same topic. Here&rsquo;s what I found to work. You need a privileged namespace, bind mounts on all worker nodes, then DiskPools before you can start testing PVCs.</p>
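<p>For orientation, a DiskPool is a small custom resource that binds a node to a raw device. Here&rsquo;s a minimal sketch of what one looks like; the pool name is made up, and the API version is the one I found with the Mayastor chart I installed, so check yours with <code>kubectl api-resources | grep -i diskpool</code>:</p>
<pre tabindex="0"><code># diskpool-storage-1.yaml (illustrative)
apiVersion: openebs.io/v1beta2
kind: DiskPool
metadata:
  name: pool-talos-storage-1
  namespace: openebs
spec:
  node: talos-storage-1
  disks: ["/dev/sdb"]
</code></pre>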
<h2 id="privileged-namespace">Privileged Namespace</h2>
<p>OpenEBS requires privileges, and the easiest way to handle that is by making the namespace privileged (rather than messing with machine configs).</p>
<ul>
<li>Add a new privileged namespace. The Helm chart expects it to be named <code>openebs</code>, so do this:</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c"># namespace.yaml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">apiVersion</span><span class="p">:</span><span class="w"> </span><span class="l">v1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">Namespace</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">metadata</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">openebs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">labels</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">pod-security.kubernetes.io/enforce</span><span class="p">:</span><span class="w"> </span><span class="l">privileged</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">pod-security.kubernetes.io/warn</span><span class="p">:</span><span class="w"> </span><span class="l">privileged</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">pod-security.kubernetes.io/audit</span><span class="p">:</span><span class="w"> </span><span class="l">privileged</span><span class="w">
</span></span></span></code></pre></div><ul>
<li><code>kubectl apply -f namespace.yaml</code></li>
</ul>
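<p>You can confirm the pod-security labels landed by printing the namespace with its labels:</p>
<ul>
<li><code>kubectl get namespace openebs --show-labels</code></li>
</ul>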
<h2 id="helm-installation">Helm Installation</h2>
<ul>
<li><code>helm repo add openebs https://openebs.github.io/openebs</code></li>
<li><code>helm repo update</code></li>
<li>Grab the default values from the Helm chart (<code>helm show values openebs/openebs &gt; values.yaml</code>), or use mine below. I have already modified the config to disable the CSI node initContainers, a known issue on Talos, and to disable the local provisioners I&rsquo;m not interested in.</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c"># values.yaml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">openebs-crds</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">csi</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">volumeSnapshots</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">enabled</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">keep</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># Refer to https://github.com/openebs/dynamic-localpv-provisioner/blob/v4.1.2/deploy/helm/charts/values.yaml for complete set of values.</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">localpv-provisioner</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">rbac</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">create</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># Refer to https://github.com/openebs/zfs-localpv/blob/v2.6.2/deploy/helm/charts/values.yaml for complete set of values.</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">zfs-localpv</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">crds</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">zfsLocalPv</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">enabled</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">csi</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">volumeSnapshots</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">enabled</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># Refer to https://github.com/openebs/lvm-localpv/blob/lvm-localpv-1.6.2/deploy/helm/charts/values.yaml for complete set of values.</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">lvm-localpv</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">crds</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">lvmLocalPv</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">enabled</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">csi</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">volumeSnapshots</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">enabled</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># Refer to https://github.com/openebs/mayastor-extensions/blob/v2.7.2/chart/values.yaml for complete set of values.</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">mayastor</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">csi</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">node</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">initContainers</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">enabled</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">etcd</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="c"># -- Kubernetes Cluster Domain</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">clusterDomain</span><span class="p">:</span><span class="w"> </span><span class="l">cluster.local</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">localpv-provisioner</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">enabled</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">crds</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">enabled</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># -- Configuration options for pre-upgrade helm hook job.</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">preUpgradeHook</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">image</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="c"># -- The container image registry URL for the hook job</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">registry</span><span class="p">:</span><span class="w"> </span><span class="l">docker.io</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="c"># -- The container repository for the hook job</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">repo</span><span class="p">:</span><span class="w"> </span><span class="l">bitnami/kubectl</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="c"># -- The container image tag for the hook job</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">tag</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;1.25.15&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="c"># -- The imagePullPolicy for the container</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">pullPolicy</span><span class="p">:</span><span class="w"> </span><span class="l">IfNotPresent</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">engines</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">local</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">lvm</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">enabled</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">zfs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">enabled</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">replicated</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">mayastor</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">enabled</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span></code></pre></div><ul>
<li><code>helm install openebs -n openebs openebs/openebs -f values.yaml</code></li>
<li>Verify: <code>kubectl get po -n openebs</code> and it should look something like this:</li>
</ul>
<pre tabindex="0"><code>NAME                                          READY   STATUS    RESTARTS      AGE
openebs-agent-core-74d4ddc7c5-hjnxl           2/2     Running   0             9m23s
openebs-agent-ha-node-f9bsb                   1/1     Running   0             9m23s
openebs-agent-ha-node-gjdbt                   1/1     Running   0             9m23s
openebs-agent-ha-node-mwjq9                   1/1     Running   0             9m23s
openebs-agent-ha-node-rfjrw                   1/1     Running   0             93s
openebs-api-rest-757d87d4bd-zd2ms             1/1     Running   0             9m23s
openebs-csi-controller-58c7dfcd5b-6jtcq       6/6     Running   0             9m23s
openebs-csi-node-fmmwg                        2/2     Running   0             9m23s
openebs-csi-node-j95f5                        2/2     Running   2 (60s ago)   93s
openebs-csi-node-jxkvq                        2/2     Running   0             9m23s
openebs-csi-node-xtsnt                        2/2     Running   0             9m23s
openebs-etcd-0                                1/1     Running   0             9m23s
openebs-etcd-1                                1/1     Running   0             9m23s
openebs-etcd-2                                1/1     Running   0             9m23s
openebs-io-engine-lb8zr                       2/2     Running   0             9m23s
openebs-localpv-provisioner-657c44878-wjmwr   1/1     Running   0             9m23s
openebs-loki-0                                1/1     Running   0             9m23s
openebs-nats-0                                3/3     Running   0             9m23s
openebs-nats-1                                3/3     Running   0             9m23s
openebs-nats-2                                3/3     Running   0             9m23s
openebs-obs-callhome-8665bb8f6f-4ntrd         2/2     Running   0             9m23s
openebs-operator-diskpool-6d44884f8f-52rrx    1/1     Running   0             9m23s
openebs-promtail-2g6kz                        1/1     Running   0             9m23s
openebs-promtail-d72cx                        1/1     Running   0             93s
openebs-promtail-hsxlc                        1/1     Running   0             9m23s
openebs-promtail-npw7f                        1/1     Running   0             9m23s
</code></pre><ul>
<li>If pods are stuck initializing after a few minutes, start by checking the logs of <code>openebs-etcd-0</code>, since many components depend on etcd being up before they will initialize.
<ul>
<li>If the <code>/var/local</code> bind mounts were not added to the worker nodes, the <code>openebs-etcd-*</code> pods will show Warning events of type &ldquo;FailedMount&rdquo; stating that a PVC doesn&rsquo;t exist. Look closely: the path starts with <code>/var/local/...</code>, which means the <code>/var/local</code> bind mount is missing on one or more Talos worker/storage nodes.</li>
<li>The fix is to correct the bind mount on the affected worker node (you can identify it from the pod having issues), and a reboot of that node may help. There&rsquo;s no need to change the OpenEBS deployment.</li>
<li>After fixing it, you might see stale pods in an error state. You can delete those; the Deployment or DaemonSet responsible will recreate any that are needed.</li>
</ul>
</li>
</ul>
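<p>For reference, the kind of Talos machine config patch that provides that bind mount looks roughly like this. This is a sketch based on my setup; double-check the destination path against the OpenEBS docs for your version:</p>

```yaml
machine:
  kubelet:
    extraMounts:
      - destination: /var/local
        type: bind
        source: /var/local
        options:
          - rbind
          - rshared
          - rw
```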
<h2 id="add-diskpools">Add DiskPool(s)</h2>
<p>I was excited at this point to test with a PVC, but was confused about why nothing would provision. The Talos-specific docs feel a bit sparse and seem to imply that now is the time to test with a PVC, but if you pay close attention, they only mention testing the local provisioner, which added to my confusion. It turns out you need to add DiskPools first, which makes sense in hindsight: the engine has to know what disk capacity it has to work with. If you&rsquo;ve ever used Longhorn, it needs similar configuration after the initial install.</p>
<ul>
<li>Earlier, we mounted that 2TB disk in the talos-storage-1 node. Now we&rsquo;ll use it for our first DiskPool.</li>
<li>Get the disk ID by exec-ing into the openebs-io-engine pod
<ul>
<li>Identify one of the io-engine pods: <code>kubectl get po -l app=io-engine -n openebs</code></li>
<li>Exec into the pod: <code>kubectl exec -it openebs-io-engine-jpnrh -c io-engine -n openebs -- /bin/sh</code></li>
<li><code>ls -lh /dev/disk/by-id/</code> - grab the one pointing to <code>/dev/sdb</code> in our case, which for me is <code>scsi-0QEMU_QEMU_HARDDISK_drive-scsi1</code></li>
</ul>
</li>
<li>FYI I went with <code>uring</code> (io_uring) instead of <code>aio</code> since it&rsquo;s the new kid on the block</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c"># diskpool-1.yaml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">apiVersion</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;openebs.io/v1beta2&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">DiskPool</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">metadata</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">pool-1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">namespace</span><span class="p">:</span><span class="w"> </span><span class="l">openebs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">node</span><span class="p">:</span><span class="w"> </span><span class="l">talos-storage-1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">disks</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">&#34;uring:///dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1&#34;</span><span class="p">]</span><span class="w">
</span></span></span></code></pre></div><ul>
<li><code>kubectl apply -f diskpool-1.yaml</code></li>
<li>Verify: <code>kubectl get dsp -n openebs</code> - the pool should quickly move to the Created state with a POOL_STATUS of Online</li>
</ul>
<p>Repeat this process for any/all storage nodes you have. Since I&rsquo;m virtualizing Talos, the disk path is exactly the same on all 3 nodes so I can reuse the config, just updating the pool name and the node name.</p>
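<p>Since the disk path is identical on my three virtualized nodes, the per-node manifests can be stamped out with a small loop. This is a sketch; the node names and by-id disk path are from my homelab, so adjust them for yours:</p>

```shell
# Generate one DiskPool manifest per storage node.
for i in 1 2 3; do
  cat > "diskpool-${i}.yaml" <<EOF
apiVersion: "openebs.io/v1beta2"
kind: DiskPool
metadata:
  name: pool-${i}
  namespace: openebs
spec:
  node: talos-storage-${i}
  disks: ["uring:///dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1"]
EOF
done

# Then apply them all:
# kubectl apply -f diskpool-1.yaml -f diskpool-2.yaml -f diskpool-3.yaml
```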
<h3 id="troubleshooting">Troubleshooting</h3>
<p>Sorry I can&rsquo;t be a ton of help here since I&rsquo;ve only done limited troubleshooting myself. If you run into diskpools stuck in the Creating state, describe the resource (<code>kubectl describe dsp &lt;pool-name&gt; -n openebs</code>) and check the io-engine logs.</p>
<p>What I can say is that I&rsquo;m running a homelab, which means my disks come from several different manufacturers. Most of them worked in this configuration, but one did not. The disk is fine and fairly new; I tried wiping it multiple times, in multiple ways, but it just would not work with the diskpool. I tried it as a bind mount using <code>/mnt/local/nvme2tb</code> (which requires adding the volume to the io-engine DaemonSet), mounting <code>/dev/sdb</code>, mounting <code>/dev/sdb1</code>, everything I could think of, but the pool would not create. I rebuilt the Talos node and got the same results. I switched to a different disk, changed nothing else, and it works totally fine. For posterity, these are the disks I have and whether they worked in this configuration.</p>
<ul>
<li>Samsung 970 EVO Plus 2TB - no problems</li>
<li>Samsung 990 EVO 2TB - no problems</li>
<li>WD BLACK SN770 2TB - no problems</li>
<li>Crucial P3 Plus 2TB (CT2000P3PSSD8) - COULD NOT GET THIS WORKING :(</li>
</ul>
<p>If you are reading this and you know or think you might know why this didn&rsquo;t work, please reach out! I&rsquo;m interested in understanding why this wouldn&rsquo;t work and how I could troubleshoot better.</p>
<h2 id="testing-a-replicated-pvc">Testing A Replicated PVC</h2>
<p>If you are here, you have at least one working diskpool and are ready to verify that a PVC can be provisioned and attached to a running pod. Let&rsquo;s test that.</p>
<ul>
<li>Verify diskpools: <code>kubectl get dsp -n openebs</code> - for my 3 storage nodes with 2TB volumes, I&rsquo;m seeing this</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-sh" data-lang="sh"><span class="line"><span class="cl">NAME     NODE              STATE     POOL_STATUS   CAPACITY        USED   AVAILABLE
</span></span><span class="line"><span class="cl">pool-1   talos-storage-1   Created   Online        <span class="m">1998443249664</span>   <span class="m">0</span>      <span class="m">1998443249664</span>
</span></span><span class="line"><span class="cl">pool-2   talos-storage-2   Created   Online        <span class="m">1998443249664</span>   <span class="m">0</span>      <span class="m">1998443249664</span>
</span></span><span class="line"><span class="cl">pool-3   talos-storage-3   Created   Online        <span class="m">1998443249664</span>   <span class="m">0</span>      <span class="m">1998443249664</span>
</span></span></code></pre></div><ul>
<li>Check what storage classes are available: <code>kubectl get sc</code></li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-sh" data-lang="sh"><span class="line"><span class="cl">NAME                     PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
</span></span><span class="line"><span class="cl">mayastor-etcd-localpv    openebs.io/local          Delete          WaitForFirstConsumer   <span class="nb">false</span>                  6h15m
</span></span><span class="line"><span class="cl">mayastor-loki-localpv    openebs.io/local          Delete          WaitForFirstConsumer   <span class="nb">false</span>                  6h15m
</span></span><span class="line"><span class="cl">openebs-hostpath         openebs.io/local          Delete          WaitForFirstConsumer   <span class="nb">false</span>                  6h15m
</span></span><span class="line"><span class="cl">openebs-single-replica   io.openebs.csi-mayastor   Delete          Immediate              <span class="nb">true</span>                   6h15m
</span></span></code></pre></div><ul>
<li>For now, we&rsquo;re interested in testing the <code>openebs-single-replica</code> SC that uses Mayastor, so create this file. Note that it uses the default namespace:</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c"># test-pvc-openebs-single-replica.yaml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nn">---</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">apiVersion</span><span class="p">:</span><span class="w"> </span><span class="l">v1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">PersistentVolumeClaim</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">metadata</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">openebs-testpvc</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">storageClassName</span><span class="p">:</span><span class="w"> </span><span class="l">openebs-single-replica</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">accessModes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="l">ReadWriteOnce</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">resources</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">requests</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">storage</span><span class="p">:</span><span class="w"> </span><span class="l">10Gi</span><span class="w">
</span></span></span></code></pre></div><ul>
<li>Apply it: <code>kubectl apply -f test-pvc-openebs-single-replica.yaml</code></li>
<li>Check PV and PVC:
<ul>
<li><code>kubectl get pv</code></li>
<li><code>kubectl get pvc</code> - you should see a PVC named <code>openebs-testpvc</code> with status Bound and storage class <code>openebs-single-replica</code></li>
</ul>
</li>
<li>Deploy a test pod to attach the PVC to - this pod is also in the default namespace:</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c"># pod-using-testpvc.yaml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nn">---</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">apiVersion</span><span class="p">:</span><span class="w"> </span><span class="l">v1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">Pod</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">metadata</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">testlogger</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">containers</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">testlogger</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">image</span><span class="p">:</span><span class="w"> </span><span class="l">alpine:3.20</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">command</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">&#34;/bin/ash&#34;</span><span class="p">]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">args</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">&#34;-c&#34;</span><span class="p">,</span><span class="w"> </span><span class="s2">&#34;while true; do echo \&#34;$(date) - test log\&#34; &gt;&gt; /mnt/test.log &amp;&amp; sleep 1; done&#34;</span><span class="p">]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">volumeMounts</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">testvol</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">mountPath</span><span class="p">:</span><span class="w"> </span><span class="l">/mnt</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">volumes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">testvol</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">persistentVolumeClaim</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">claimName</span><span class="p">:</span><span class="w"> </span><span class="l">openebs-testpvc</span><span class="w">
</span></span></span></code></pre></div><ul>
<li><code>kubectl apply -f pod-using-testpvc.yaml</code></li>
<li>Verify it&rsquo;s running: <code>kubectl get po testlogger</code></li>
<li>Exec into the test pod: <code>kubectl exec -it testlogger -- /bin/sh</code></li>
<li>Look at your mounts with <code>df -h /mnt</code>. Since OpenEBS Mayastor uses NVMe-oF, you should see the mount path <code>/mnt</code> backed by what looks like an NVMe block device.</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-sh" data-lang="sh"><span class="line"><span class="cl">Filesystem                Size      Used Available Use% Mounted on
</span></span><span class="line"><span class="cl">/dev/nvme0n1              9.7G     28.0K      9.2G   0% /mnt
</span></span></code></pre></div><ul>
<li>Hmm. The size doesn&rsquo;t quite add up: 9.7G is close to 10Gi, but 28.0K used plus 9.2G available doesn&rsquo;t come back to 9.7G&hellip;</li>
<li>Cleanup:
<ul>
<li><code>kubectl delete -f pod-using-testpvc.yaml</code></li>
<li><code>kubectl delete -f test-pvc-openebs-single-replica.yaml</code></li>
</ul>
</li>
</ul>
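<p>A likely explanation for those <code>df</code> numbers not adding up is that <code>mkfs.ext4</code> reserves about 5% of blocks for root by default. That&rsquo;s my assumption here, not something I verified on this volume, but quick arithmetic on the numbers fits the theory:</p>

```shell
# df showed Size=9.7G, Used=28.0K, Available=9.2G. How big is the gap?
awk 'BEGIN {
  size = 9.7; used = 0.000028; avail = 9.2
  gap = size - used - avail
  printf "unaccounted: %.1fG (%.0f%% of the filesystem)\n", gap, gap / size * 100
}'
# prints: unaccounted: 0.5G (5% of the filesystem)
```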
<h2 id="what-is-a-single-replica-anyway">What Is A &ldquo;Single&rdquo; Replica Anyway?</h2>
<blockquote>
<p>A replica is an exact reproduction of something&hellip;</p>
</blockquote>
<p>A replica by definition cannot exist without copying something that already exists, which inherently implies at least two copies. In the context of OpenEBS, though, a &ldquo;single replica&rdquo; just means you have ONE copy of the data. It gets placed on one of the available diskpools, and if that disk fails, that data is gone. But we are using OpenEBS for the purpose of replication, so how do we get more replicas??? Follow me.</p>
<ul>
<li>Create a new storage class with 2 replicas (feel free to do 3 or any value at this point):</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c"># openebs-2-replicas-sc.yaml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nn">---</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">apiVersion</span><span class="p">:</span><span class="w"> </span><span class="l">storage.k8s.io/v1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">StorageClass</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">metadata</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">openebs-2-replicas</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">protocol</span><span class="p">:</span><span class="w"> </span><span class="l">nvmf</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">repl</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;2&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">provisioner</span><span class="p">:</span><span class="w"> </span><span class="l">io.openebs.csi-mayastor</span><span class="w">
</span></span></span></code></pre></div><ul>
<li><code>kubectl apply -f openebs-2-replicas-sc.yaml</code></li>
<li>Verify: <code>kubectl get sc</code></li>
<li>Test - deploy another test-pvc using the new SC:</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c"># test-pvc-openebs-2-replicas.yaml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nn">---</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">apiVersion</span><span class="p">:</span><span class="w"> </span><span class="l">v1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">PersistentVolumeClaim</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">metadata</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">openebs-testpvc</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">storageClassName</span><span class="p">:</span><span class="w"> </span><span class="l">openebs-2-replicas</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">accessModes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="l">ReadWriteOnce</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">resources</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">requests</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">storage</span><span class="p">:</span><span class="w"> </span><span class="l">10Gi</span><span class="w">
</span></span></span></code></pre></div><ul>
<li><code>kubectl apply -f test-pvc-openebs-2-replicas.yaml</code></li>
<li>Test with a pod, using the same pod from earlier - <code>kubectl apply -f pod-using-testpvc.yaml</code></li>
<li>Exec into the test pod: <code>kubectl exec -it testlogger -- /bin/sh</code></li>
<li>Check your mounted disk: <code>df -h /mnt</code></li>
</ul>
<h1 id="conclusion">Conclusion</h1>
<p>I&rsquo;m going to stop here. I didn&rsquo;t go into detail about how to check which diskpools hold the replica(s) for a PV, but I assume there is a way to do that. I also did not look at how to recover from a diskpool or storage node failure. Assuming you had 3 replicas and one went down, there should be no data loss.</p>
<p>I didn&rsquo;t cover performance, monitoring, recovery, or anything else that you probably care about long term. That could be a future post, but my next stop is actually evaluating Longhorn with the v2 engine. As of today, they have released 1.8.0-rc5, which enables support for their v2 engine on Talos (meaning they now support NVMe-oF). If Longhorn now works with NVMe-oF on Talos, it is, to me, a more mature and feature-rich product with more community support than OpenEBS. I believe it also supports snapshots and other features that OpenEBS currently does not.</p>
<p>My next post will be all about blowing this setup away and doing it all over with Longhorn. Hopefully by then the stable 1.8.0 will have been released.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Kubernetes Homelab Series Part 6 - Storage With democratic-csi</title>
      <link>https://blog.dalydays.com/post/kubernetes-homelab-series-part-6-storage-with-democratic-csi/</link>
      <pubDate>Mon, 25 Nov 2024 00:00:00 +0000</pubDate>
      
      <guid>https://blog.dalydays.com/post/kubernetes-homelab-series-part-6-storage-with-democratic-csi/</guid>
      <description>Diving into the depths of Kubernetes storage, then walking through using democratic-csi for iSCSI and NFS with Talos Linux.</description>
      <content:encoded><![CDATA[<h1 id="whats-so-hard-about-storage">What&rsquo;s So Hard About Storage?</h1>
<p>There are several questions to answer when deciding how to handle storage for Kubernetes.</p>
<ul>
<li>How many disks do you have?</li>
<li>Do you want/need replicated storage?</li>
<li>What are your storage capacity requirements?</li>
<li>What are your performance requirements?</li>
<li>Do you need dynamically provisioned storage or will you be doing it manually?</li>
<li>NFS or iSCSI or NVMe-oF?</li>
<li>How will you back up your data?</li>
<li>Do you need snapshots?</li>
<li>Does your storage need to be highly available?</li>
<li>Does it need to be accessible from any node in the cluster, or are you good with node local storage?</li>
</ul>
<p>Ultimately it&rsquo;s not actually hard; it&rsquo;s just complex if you want to achieve anything like what you get with EBS volumes in AWS, but in your homelab. Here&rsquo;s how I always try to approach complex problems: start as simple as possible, make it work, then add complexity only as needed.</p>
<h2 id="my-requirements">My Requirements</h2>
<ul>
<li>To have dynamically provisioned persistent volumes</li>
<li>To have that persistent volume be accessible from any node in my cluster</li>
<li>Keep it as simple as reasonably possible</li>
</ul>
<h2 id="how-am-i-doing-it">How Am I Doing It?</h2>
<p>Currently I have a single 2TB NVMe disk and I want to use it for dynamically provisioned storage. I&rsquo;m not worried about replication right now since I have backups in place and this is just for my homelab. If I wanted replicated storage, I might consider a Ceph cluster, but that realistically requires a decent amount of hardware and fast network interconnects (greater than 1Gb/s; ideally 10Gb/s minimum for replication to keep up).</p>
<p>In order to manage the disk I&rsquo;m using TrueNAS Scale, which is basically ZFS on Linux with a nice web GUI to manage things. This actually provides the option of doing a zpool with maybe 2 disks mirrored, or even a RAIDZ6 as your storage target to easily solve for replication across disks.</p>
<p>In Proxmox, I&rsquo;m passing the disk itself directly to the TrueNAS VM. You <strong>should</strong> pass an entire disk controller (HBA) if you need to pool multiple drives together in ZFS, but if you&rsquo;re dealing with a single disk, it&rsquo;s OK to do it this way.</p>
<p>Spin up the VM, set the disk up with ZFS, and you&rsquo;re ready to go. In my case, the VM is on the Kubernetes VLAN and assigned a static IP. Originally I had a second NIC attached to the primary VLAN, but that caused some weird web UI performance issues so I removed it.</p>
<h1 id="truenas-setup-in-proxmox">TrueNAS Setup In Proxmox</h1>
<p>This isn&rsquo;t intended to be a TrueNAS tutorial, so I&rsquo;ll just list the steps at a high level. Basically, get a TrueNAS server running and then proceed.</p>
<ul>
<li>Pass a disk (or HBA controller) to a VM in Proxmox</li>
<li>Install TrueNAS Scale</li>
<li>Create a zpool</li>
<li>Generate an API Key - in the top right corner go to Admin &gt; API Keys</li>
<li>Make sure the network is accessible from your Kubernetes cluster</li>
</ul>
<h1 id="install-democratic-csi-with-truenas">Install democratic-csi With TrueNAS</h1>
<p><a href="https://github.com/democratic-csi/democratic-csi">https://github.com/democratic-csi/democratic-csi</a></p>
<p>This is a straightforward CSI provider that focuses on dynamically provisioned storage from TrueNAS or generic ZFS on Linux backends. Protocols include NFS, iSCSI, and NVMe-oF. I&rsquo;ll show you how to use the API variation and do NFS and iSCSI shares, plus talk about almost getting NVMe-oF working.</p>
<p>You will install a separate Helm chart for each provisioner, and you can actually run multiple at the same time, which is what I will be doing with both NFS and iSCSI. This is helpful since NFS even supports RWX volumes (if you actually have a use case for that), while iSCSI is a good default for RWO volumes.</p>
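<p>To illustrate that RWX point, here&rsquo;s what a claim against an NFS-backed class could look like. This is a sketch; the storage class name <code>freenas-nfs</code> is a placeholder and depends on the values you give the NFS chart:</p>

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: freenas-nfs  # placeholder - match your NFS chart's values
  accessModes:
    - ReadWriteMany              # RWX only makes sense on the NFS class
  resources:
    requests:
      storage: 5Gi
```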
<h2 id="volumesnapshot-support">VolumeSnapshot Support</h2>
<p>This is optional, but if you want to utilize volume snapshots (which became GA as of Kubernetes 1.20), you will need to install the snapshot CRDs, which aren&rsquo;t included with vanilla Kubernetes, along with the &ldquo;snapshotter&rdquo; controller. This implements Volume Snapshots - <a href="https://kubernetes.io/docs/concepts/storage/volume-snapshots/">https://kubernetes.io/docs/concepts/storage/volume-snapshots/</a></p>
<ul>
<li>Clone repo: <code>git clone https://github.com/kubernetes-csi/external-snapshotter.git</code></li>
<li><code>cd external-snapshotter</code></li>
<li>Apply CRDs: <code>kubectl kustomize client/config/crd | kubectl create -f -</code></li>
<li>Install snapshotter into kube-system: <code>kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -</code></li>
<li>Verify: <code>kubectl get deploy snapshot-controller -n kube-system</code></li>
</ul>
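<p>Once the controller and CRDs are in place, a snapshot request is just another Kubernetes object. A sketch, with placeholder names since the VolumeSnapshotClass depends on the CSI driver configuration you set up next:</p>

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mydata-snap-1
spec:
  volumeSnapshotClassName: my-snapclass  # placeholder - provided by your CSI driver setup
  source:
    persistentVolumeClaimName: mydata    # an existing PVC in the same namespace
```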
<h2 id="dynamic-iscsi-provisioner-with-freenas-api-iscsi">Dynamic iSCSI Provisioner With freenas-api-iscsi</h2>
<p>My single 2TB disk is in a pool named <code>nvme2tb</code>. I created a dataset in TrueNAS named <code>iscsi</code>. Those may vary in your case, so pay attention to the configuration and update those values according to your environment.</p>
<p>Don&rsquo;t forget, your Talos installation needs to include the iscsi extension or the nodes won&rsquo;t be able to connect to TrueNAS.</p>
<ul>
<li>Create a dataset named <code>iscsi</code></li>
<li>Make sure Block (iSCSI) Shares Targets is running, and click Configure</li>
<li>Save the defaults for Target Global Configuration</li>
<li>Add a portal on 0.0.0.0:3260 named <code>k8s-democratic-csi</code></li>
<li>Add an Initiator Group, Allow all initiators, and name it something like <code>k8s-talos</code></li>
<li>Create a Target named <code>donotdelete</code> with alias <code>donotdelete</code>, then add an iSCSI group, selecting the Portal and Initiator Group you just created. This prevents TrueNAS from deleting the Initiator Group if you&rsquo;re testing and you delete the one and only PV.</li>
<li>Make note of the portal ID and the Initiator Group ID and update these values in the file <code>freenas-api-iscsi.yaml</code> if needed
<ul>
<li>During testing, the manually created Initiator Group was getting deleted whenever deleting the last PV. This appears to be a bug in TrueNAS somewhere according to <a href="https://github.com/democratic-csi/democratic-csi/issues/412">https://github.com/democratic-csi/democratic-csi/issues/412</a>. Essentially TrueNAS deletes the Initiator Group automatically if an associated Target is deleted and no others exist. If you followed the instructions and created a manual Target this won&rsquo;t be an issue :)</li>
</ul>
</li>
<li>Create the democratic-csi namespace: <code>kubectl create ns democratic-csi</code></li>
<li>Make that namespace privileged: <code>kubectl label --overwrite namespace democratic-csi pod-security.kubernetes.io/enforce=privileged</code></li>
<li>Create <code>freenas-api-iscsi.yaml</code> and update <code>apiKey</code>, <code>host</code>, <code>targetPortal</code>, <code>datasetParentName</code>, and <code>detachedSnapshotsDatasetParentName</code>. Other common settings in the storage class config are <code>storageClasses.defaultClass</code> (true/false; only one class can be the default) and <code>storageClasses.reclaimPolicy</code> (Delete/Retain). With Retain, data is less likely to be lost if you delete a PVC, but you are also responsible for cleaning up the zvol manually in TrueNAS when you no longer need it.
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">driver</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">config</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">driver</span><span class="p">:</span><span class="w"> </span><span class="l">freenas-api-iscsi</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">httpConnection</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">protocol</span><span class="p">:</span><span class="w"> </span><span class="l">https</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">apiKey</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="l">your-truenas-api-key-here]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">host</span><span class="p">:</span><span class="w"> </span><span class="m">10.0.50.99</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">port</span><span class="p">:</span><span class="w"> </span><span class="m">443</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">allowInsecure</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">zfs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">datasetParentName</span><span class="p">:</span><span class="w"> </span><span class="l">nvme2tb/iscsi/volumes</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">detachedSnapshotsDatasetParentName</span><span class="p">:</span><span class="w"> </span><span class="l">nvme2tb/iscsi/snapshots</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">zvolCompression</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">zvolDedup</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">zvolEnableReservation</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">zvolBlockSize</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">iscsi</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">targetPortal</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;10.0.50.99:3260&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">targetPortals</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">interface</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">namePrefix</span><span class="p">:</span><span class="w"> </span><span class="l">csi-</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">nameSuffix</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;-talos&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">targetGroups</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="nt">targetGroupPortalGroup</span><span class="p">:</span><span class="w"> </span><span class="m">1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">targetGroupInitiatorGroup</span><span class="p">:</span><span class="w"> </span><span class="m">5</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">targetGroupAuthType</span><span class="p">:</span><span class="w"> </span><span class="l">None</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">targetGroupAuthGroup</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">extentInsecureTpc</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">extentXenCompat</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">extentDisablePhysicalBlocksize</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">extentBlocksize</span><span class="p">:</span><span class="w"> </span><span class="m">512</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">extentRpm</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;SSD&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">extentAvailThreshold</span><span class="p">:</span><span class="w"> </span><span class="m">0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">csiDriver</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="c"># should be globally unique for a given cluster</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;org.democratic-csi.freenas-api-iscsi&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">storageClasses</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">truenas-iscsi</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">defaultClass</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">reclaimPolicy</span><span class="p">:</span><span class="w"> </span><span class="l">Delete</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">volumeBindingMode</span><span class="p">:</span><span class="w"> </span><span class="l">Immediate</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">allowVolumeExpansion</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">fsType</span><span class="p">:</span><span class="w"> </span><span class="l">ext4</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">detachedVolumesFromSnapshots</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;false&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">mountOptions</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">secrets</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">provisioner-secret</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">controller-publish-secret</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">node-stage-secret</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">node-publish-secret</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">controller-expand-secret</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">volumeSnapshotClasses</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">truenas-iscsi</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">detachedSnapshots</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;true&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">node</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">hostPID</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">driver</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">extraEnv</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">ISCSIADM_HOST_STRATEGY</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">value</span><span class="p">:</span><span class="w"> </span><span class="l">nsenter</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">ISCSIADM_HOST_PATH</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">value</span><span class="p">:</span><span class="w"> </span><span class="l">/usr/local/sbin/iscsiadm</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">iscsiDirHostPath</span><span class="p">:</span><span class="w"> </span><span class="l">/usr/local/etc/iscsi</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">iscsiDirHostPathType</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;&#34;</span><span class="w">
</span></span></span></code></pre></div></li>
<li>Deploy: <code>helm upgrade --install --namespace democratic-csi --values freenas-api-iscsi.yaml truenas-iscsi democratic-csi/democratic-csi</code></li>
<li>Verify:
<ul>
<li>You&rsquo;re looking to see that everything is fully running. It may take a minute to spin up.</li>
<li><code>kubectl get all -n democratic-csi</code></li>
<li><code>kubectl get storageclasses</code> or <code>kubectl get sc</code></li>
</ul>
</li>
</ul>
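<p>As a sketch of the <code>Retain</code> variant mentioned above (the class name <code>truenas-iscsi-retain</code> is just an example), the relevant portion of the values file would look like:</p>
<pre><code class="language-yaml">storageClasses:
  - name: truenas-iscsi-retain
    defaultClass: false
    reclaimPolicy: Retain   # PV and zvol survive PVC deletion; clean up in TrueNAS manually
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    parameters:
      fsType: ext4
</code></pre>
<p>With <code>Retain</code>, deleting the PVC leaves the PV in a <code>Released</code> state and the zvol intact on TrueNAS.</p>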
<h3 id="test---deploy-a-pvc">Test - Deploy A PVC</h3>
<ul>
<li>Test with a simple PVC, targeting our new <code>truenas-iscsi</code> storage class, <code>test-pvc-truenas-iscsi.yaml</code>:
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">apiVersion</span><span class="p">:</span><span class="w"> </span><span class="l">v1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">PersistentVolumeClaim</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">metadata</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">testpvc-iscsi</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">storageClassName</span><span class="p">:</span><span class="w"> </span><span class="l">truenas-iscsi</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">accessModes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="l">ReadWriteOnce</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">resources</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">requests</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">storage</span><span class="p">:</span><span class="w"> </span><span class="l">5Gi</span><span class="w">
</span></span></span></code></pre></div><ul>
<li><code>kubectl apply -f test-pvc-truenas-iscsi.yaml</code></li>
<li>Be patient; provisioning the new zvol on TrueNAS (it looks something like <code>nvme2tb/iscsi/volumes/pvc-25e70f84-91c7-4e49-a9f1-e324681a3b7d</code>) and getting everything mapped in Kubernetes can take a minute.</li>
<li>Check the Persistent Volume itself: <code>kubectl get pv</code>
<ul>
<li>Looking for a new entry here</li>
</ul>
</li>
<li>Check the Persistent Volume Claim: <code>kubectl get pvc</code>
<ul>
<li>Looking for status Bound to the newly created PV</li>
</ul>
</li>
<li>If you need to investigate, look at <code>kubectl describe pvc</code> and <code>kubectl describe pv</code>, or check the TrueNAS UI to see whether a new zvol has been created</li>
</ul>
</li>
</ul>
<h3 id="test---deploy-a-pod">Test - Deploy A Pod</h3>
<p>At this point there should be a PV and a PVC, but they are not actually connected to a pod yet. The moment a pod claims a PVC, the node that the pod is running on mounts the iSCSI target, and this is where the <code>iscsi-tools</code> extension comes into play in Talos Linux. Let&rsquo;s test to make sure we can actually connect to the PVC from a pod.</p>
<p>This test pod uses a small Alpine image and writes to a log file every second. The two lines commented out at the bottom are there in case you want to target a specific node. If you&rsquo;re not sure all your Talos Linux nodes are configured properly for iSCSI, I recommend targeting each of them and verifying from every pod. You can delete the pod but preserve the PVC; if you reconnect to the PVC from another pod, even one running on another node, it should still contain the same data.</p>
<ul>
<li>Create <code>pod-using-testpvc-iscsi.yaml</code>:
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">apiVersion</span><span class="p">:</span><span class="w"> </span><span class="l">v1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">Pod</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">metadata</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">testlogger-iscsi</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">containers</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">testlogger-iscsi</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">image</span><span class="p">:</span><span class="w"> </span><span class="l">alpine:3.20</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">command</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">&#34;/bin/ash&#34;</span><span class="p">]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">args</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">&#34;-c&#34;</span><span class="p">,</span><span class="w"> </span><span class="s2">&#34;while true; do echo \&#34;$(date) - test log\&#34; &gt;&gt; /mnt/test.log &amp;&amp; sleep 1; done&#34;</span><span class="p">]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">volumeMounts</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">testvol</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">mountPath</span><span class="p">:</span><span class="w"> </span><span class="l">/mnt</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">volumes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">testvol</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">persistentVolumeClaim</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">claimName</span><span class="p">:</span><span class="w"> </span><span class="l">testpvc-iscsi</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c">#    nodeSelector:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c">#      kubernetes.io/hostname: taloswk1</span><span class="w">
</span></span></span></code></pre></div></li>
<li>Deploy: <code>kubectl apply -f pod-using-testpvc-iscsi.yaml</code></li>
<li>Verify <code>kubectl get po</code>
<ul>
<li>Check which node it&rsquo;s on with <code>kubectl get po -o wide</code> or <code>kubectl describe po testlogger-iscsi | grep Node:</code></li>
</ul>
</li>
<li>Validate data is being written to the PVC:
<ul>
<li>Exec into the pod: <code>kubectl exec -it testlogger-iscsi -- /bin/sh</code></li>
<li>Look at the file: <code>cat /mnt/test.log</code></li>
<li>Show line count: <code>wc -l /mnt/test.log</code></li>
</ul>
</li>
</ul>
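<p>To confirm the data survives a pod moving between nodes, you can delete the test pod (the PVC is ReadWriteOnce, so detach it first), then create a second pod that claims the same PVC and counts the lines already written. This is just a sketch; the hostname <code>taloswk2</code> is a placeholder for one of your other nodes:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: testreader-iscsi
spec:
  containers:
  - name: testreader-iscsi
    image: alpine:3.20
    command: ["/bin/ash"]
    # print the line count of the earlier pod's log, then idle
    args: ["-c", "wc -l /mnt/test.log &amp;&amp; sleep 3600"]
    volumeMounts:
    - name: testvol
      mountPath: /mnt
  volumes:
  - name: testvol
    persistentVolumeClaim:
      claimName: testpvc-iscsi
  nodeSelector:
    kubernetes.io/hostname: taloswk2
</code></pre>
<p>Check the output with <code>kubectl logs testreader-iscsi</code>; a non-zero line count means the data persisted across the move.</p>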
<h3 id="test---cleanup">Test - Cleanup</h3>
<ul>
<li>Delete pod: <code>kubectl delete -f pod-using-testpvc-iscsi.yaml</code></li>
<li>Delete PVC: <code>kubectl delete -f test-pvc-truenas-iscsi.yaml</code></li>
</ul>
<h2 id="dynamic-nfs-provisioner-with-freenas-api-nfs">Dynamic NFS Provisioner With freenas-api-nfs</h2>
<p>This one&rsquo;s a little simpler than iSCSI, since NFS support is built into Talos and there&rsquo;s less setup on the TrueNAS side.</p>
<ul>
<li>Create a dataset named <code>nfs</code></li>
<li>Create the democratic-csi namespace: <code>kubectl create ns democratic-csi</code></li>
<li>Make that namespace privileged: <code>kubectl label --overwrite namespace democratic-csi pod-security.kubernetes.io/enforce=privileged</code></li>
<li>Create <code>freenas-api-nfs.yaml</code> and update <code>apiKey</code>, <code>host</code>, <code>shareHost</code>, <code>datasetParentName</code>, and <code>detachedSnapshotsDatasetParentName</code>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">driver</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">config</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">driver</span><span class="p">:</span><span class="w"> </span><span class="l">freenas-api-nfs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">httpConnection</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">protocol</span><span class="p">:</span><span class="w"> </span><span class="l">https</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">apiKey</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="l">api-key-goes-here]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">host</span><span class="p">:</span><span class="w"> </span><span class="m">10.0.50.99</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">port</span><span class="p">:</span><span class="w"> </span><span class="m">443</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">allowInsecure</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">zfs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">datasetParentName</span><span class="p">:</span><span class="w"> </span><span class="l">nvme2tb/nfs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">detachedSnapshotsDatasetParentName</span><span class="p">:</span><span class="w"> </span><span class="l">nvme2tb/nfs/snaps</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">datasetEnableQuotas</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">datasetEnableReservation</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">datasetPermissionsMode</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;0777&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">datasetPermissionsUser</span><span class="p">:</span><span class="w"> </span><span class="m">0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">datasetPermissionsGroup</span><span class="p">:</span><span class="w"> </span><span class="m">0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">nfs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">shareCommentTemplate</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">shareHost</span><span class="p">:</span><span class="w"> </span><span class="m">10.0.50.99</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">shareAlldirs</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">shareAllowedHosts</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">shareAllowedNetworks</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">shareMaprootUser</span><span class="p">:</span><span class="w"> </span><span class="l">root</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">shareMaprootGroup</span><span class="p">:</span><span class="w"> </span><span class="l">root</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">shareMapallUser</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">shareMapallGroup</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">csiDriver</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="c"># should be globally unique for a given cluster</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;org.democratic-csi.freenas-api-nfs&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">storageClasses</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">truenas-nfs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">defaultClass</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">reclaimPolicy</span><span class="p">:</span><span class="w"> </span><span class="l">Delete</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">volumeBindingMode</span><span class="p">:</span><span class="w"> </span><span class="l">Immediate</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">allowVolumeExpansion</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">fsType</span><span class="p">:</span><span class="w"> </span><span class="l">nfs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">mountOptions</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">noatime</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">nfsvers=4</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">volumeSnapshotClasses</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">truenas-nfs</span><span class="w">
</span></span></span></code></pre></div></li>
<li>Since we are using snapshots, you also need to install a snapshot controller such as <a href="https://github.com/democratic-csi/charts/tree/master/stable/snapshot-controller">https://github.com/democratic-csi/charts/tree/master/stable/snapshot-controller</a>
<ul>
<li>You can skip this step if you disable snapshots in your YAML file</li>
<li><code>helm upgrade --install --namespace kube-system --create-namespace snapshot-controller democratic-csi/snapshot-controller</code></li>
<li><code>kubectl -n kube-system logs -f -l app=snapshot-controller</code></li>
</ul>
</li>
<li>Deploy: <code>helm upgrade --install --namespace democratic-csi --values freenas-api-nfs.yaml truenas-nfs democratic-csi/democratic-csi</code></li>
<li>Verify:
<ul>
<li>You&rsquo;re looking to see that everything is fully running. It may take a minute to spin up.</li>
<li><code>kubectl get all -n democratic-csi</code></li>
<li><code>kubectl get storageclasses</code> or <code>kubectl get sc</code></li>
</ul>
</li>
</ul>
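<p>Rather than eyeballing <code>kubectl get all</code>, you can block until the democratic-csi pods report Ready. A minimal sketch (the namespace matches the deploy command above; the timeout is an assumption, adjust to taste):</p>

```shell
# Wait for every pod in the democratic-csi namespace to become Ready;
# exits non-zero after the timeout if anything is still spinning up.
kubectl -n democratic-csi wait pod --all --for=condition=Ready --timeout=180s

# Confirm the new storage class is registered
kubectl get sc truenas-nfs
```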
<h3 id="test---deploy-a-pvc-1">Test - Deploy A PVC</h3>
<ul>
<li>Test with a simple PVC, targeting our new <code>truenas-nfs</code> storage class, <code>test-pvc-truenas-nfs.yaml</code>:
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">apiVersion</span><span class="p">:</span><span class="w"> </span><span class="l">v1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">PersistentVolumeClaim</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">metadata</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">testpvc-nfs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">storageClassName</span><span class="p">:</span><span class="w"> </span><span class="l">truenas-nfs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">accessModes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="l">ReadWriteOnce</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">resources</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">requests</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">storage</span><span class="p">:</span><span class="w"> </span><span class="l">5Gi</span><span class="w">
</span></span></span></code></pre></div><ul>
<li><code>kubectl apply -f test-pvc-truenas-nfs.yaml</code></li>
<li>Be patient, this can take a minute to provision the new dataset and NFS share on TrueNAS and get everything mapped in Kubernetes.</li>
<li>Check the Persistent Volume itself: <code>kubectl get pv</code>
<ul>
<li>Looking for a new entry here</li>
</ul>
</li>
<li>Check the Persistent Volume Claim: <code>kubectl get pvc</code>
<ul>
<li>Looking for status Bound to the newly created PV</li>
</ul>
</li>
<li>If you need to investigate further, look at <code>kubectl describe pvc</code> and <code>kubectl describe pv</code>, or check the TrueNAS UI to see whether a new dataset has been created</li>
</ul>
</li>
</ul>
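<p>If you&rsquo;d rather script the check than poll by hand, <code>kubectl wait</code> can watch for the Bound phase (jsonpath waits need a reasonably recent kubectl; this is a sketch, not part of the original walkthrough):</p>

```shell
# Block until the claim reaches phase Bound, or fail after 2 minutes
kubectl wait pvc/testpvc-nfs --for=jsonpath='{.status.phase}'=Bound --timeout=120s

# Print the name of the PV that was provisioned to back the claim
kubectl get pvc testpvc-nfs -o jsonpath='{.spec.volumeName}{"\n"}'
```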
<h3 id="test---deploy-a-pod-1">Test - Deploy A Pod</h3>
<p>At this point there should be a PV and a PVC, but they are not attached to a pod yet. Only when a pod claims the PVC does the node the pod is scheduled on actually mount the NFS share.</p>
<p>This test pod uses a small Alpine image and writes to a log file every second. The two lines commented out at the bottom let you target a specific node. If you&rsquo;re not sure all your Talos Linux nodes are configured properly for NFS, I recommend targeting each node in turn and verifying from a pod on each one. You can delete the pod while preserving the PVC, and if you reconnect to the PVC from another pod, even one running on a different node, it should still contain the same data.</p>
<ul>
<li>Create <code>pod-using-testpvc-nfs.yaml</code>:
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">apiVersion</span><span class="p">:</span><span class="w"> </span><span class="l">v1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">Pod</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">metadata</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">testlogger-nfs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">containers</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">testlogger-nfs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">image</span><span class="p">:</span><span class="w"> </span><span class="l">alpine:3.20</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">command</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">&#34;/bin/ash&#34;</span><span class="p">]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">args</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">&#34;-c&#34;</span><span class="p">,</span><span class="w"> </span><span class="s2">&#34;while true; do echo \&#34;$(date) - test log\&#34; &gt;&gt; /mnt/test.log &amp;&amp; sleep 1; done&#34;</span><span class="p">]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">volumeMounts</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">testvol</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">mountPath</span><span class="p">:</span><span class="w"> </span><span class="l">/mnt</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">volumes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">testvol</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">persistentVolumeClaim</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">claimName</span><span class="p">:</span><span class="w"> </span><span class="l">testpvc-nfs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c">#    nodeSelector:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c">#      kubernetes.io/hostname: taloswk1</span><span class="w">
</span></span></span></code></pre></div></li>
<li>Deploy: <code>kubectl apply -f pod-using-testpvc-nfs.yaml</code></li>
<li>Verify <code>kubectl get po</code>
<ul>
<li>Check which node it&rsquo;s on with <code>kubectl get po -o wide</code> or <code>kubectl describe po testlogger-nfs | grep Node:</code></li>
</ul>
</li>
<li>Validate data is being written to the PVC:
<ul>
<li>Exec into the pod: <code>kubectl exec -it testlogger-nfs -- /bin/sh</code></li>
<li>Look at the file: <code>cat /mnt/test.log</code></li>
<li>Show line count: <code>wc -l /mnt/test.log</code></li>
</ul>
</li>
</ul>
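<p>To confirm the writer is actually making progress without exec-ing in twice by hand, a quick sketch:</p>

```shell
# Sample the log's line count twice; a growing count means the pod is writing
before=$(kubectl exec testlogger-nfs -- sh -c 'wc -l < /mnt/test.log')
sleep 5
after=$(kubectl exec testlogger-nfs -- sh -c 'wc -l < /mnt/test.log')
echo "lines: $before -> $after"
[ "$after" -gt "$before" ] && echo "log is growing"
```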
<h3 id="test---cleanup-1">Test - Cleanup</h3>
<ul>
<li><code>kubectl delete -f pod-using-testpvc-nfs.yaml</code></li>
<li><code>kubectl delete -f test-pvc-truenas-nfs.yaml</code></li>
</ul>
<h2 id="dynamic-nvme-of-storage-for-kubernetes-i-was-unable-to-make-this-work">Dynamic NVMe-oF Storage For Kubernetes (I was unable to make this work)</h2>
<p>I spent some time trying to get this to work. TrueNAS doesn&rsquo;t currently support NVMe-oF through its UI, but since it&rsquo;s just a Linux box you can simply (almost simply) install the extra packages needed and configure them as root. After doing that, I tested manually by connecting from another Linux machine to validate that I could indeed mount NVMe over TCP from TrueNAS.</p>
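<p>For reference, here&rsquo;s roughly what that manual validation from another Linux machine looks like with <code>nvme-cli</code> (a sketch; the address and port match the democratic-csi config, but the subsystem NQN is a placeholder, so substitute one from the discovery output):</p>

```shell
# Requires the nvme-cli package on the client machine.
# Discover subsystems exported over NVMe/TCP by the TrueNAS box:
nvme discover -t tcp -a 10.0.50.99 -s 4420

# Connect to a discovered subsystem (the NQN below is a placeholder),
# then confirm the new namespace shows up as a local NVMe device:
nvme connect -t tcp -a 10.0.50.99 -s 4420 -n nqn.2003-01.org.linux-nvme:testvol
nvme list

# Disconnect when finished:
nvme disconnect -n nqn.2003-01.org.linux-nvme:testvol
```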
<p>From there, I figured out the configuration needed for the democratic-csi <code>zfs-generic-nvmeof</code> driver and started testing. I got as far as having it provision a new dataset on TrueNAS, create the mount, and create the PV and PVC in the cluster, showing as Bound. However, when I would actually attempt to connect to it from a pod, it would fail. It may have something to do with how democratic-csi performs the mount from the node, or I might have something wrong in my configuration that I haven&rsquo;t been able to pin down.</p>
<p>If I could get this working, I might not even bother running a TrueNAS instance and just run some lightweight Linux server to interface between democratic-csi and the disk(s).</p>
<p>Here&rsquo;s some extra details on exactly what I tried:</p>
<ul>
<li><a href="https://github.com/siderolabs/talos/issues/9255">https://github.com/siderolabs/talos/issues/9255</a></li>
<li><a href="https://github.com/democratic-csi/democratic-csi/issues/418">https://github.com/democratic-csi/democratic-csi/issues/418</a></li>
</ul>
<p>Please help me if you know how to make this work, as I&rsquo;d much rather be using this than iSCSI :)</p>
<p>Here&rsquo;s my almost working config for reference:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">csiDriver</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;org.democratic-csi.nvmeof&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">storageClasses</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">truenas-nvmeof</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">defaultClass</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">reclaimPolicy</span><span class="p">:</span><span class="w"> </span><span class="l">Delete</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">volumeBindingMode</span><span class="p">:</span><span class="w"> </span><span class="l">Immediate</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">allowVolumeExpansion</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">fsType</span><span class="p">:</span><span class="w"> </span><span class="l">ext4</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">mountOptions</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">secrets</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">provisioner-secret</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">controller-publish-secret</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">node-stage-secret</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">node-publish-secret</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">controller-expand-secret</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">volumeSnapshotClasses</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">driver</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">config</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">driver</span><span class="p">:</span><span class="w"> </span><span class="l">zfs-generic-nvmeof</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">sshConnection</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">host</span><span class="p">:</span><span class="w"> </span><span class="m">10.0.50.99</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">port</span><span class="p">:</span><span class="w"> </span><span class="m">22</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">username</span><span class="p">:</span><span class="w"> </span><span class="l">root</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">privateKey</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        -----BEGIN RSA PRIVATE KEY-----
</span></span></span><span class="line"><span class="cl"><span class="sd">        REDACTED!
</span></span></span><span class="line"><span class="cl"><span class="sd">        -----END RSA PRIVATE KEY-----</span><span class="w">        
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">zfs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">datasetParentName</span><span class="p">:</span><span class="w"> </span><span class="l">nvme2tb/k8s/nvmeof</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">detachedSnapshotsDatasetParentName</span><span class="p">:</span><span class="w"> </span><span class="l">nvme2tb/k8s/nvmeof-snapshots</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">zvolCompression</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">zvolDedup</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">zvolEnableReservation</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">zvolBlocksize</span><span class="p">:</span><span class="w"> </span><span class="l">16K</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">nvmeof</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">transports</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="l">tcp://0.0.0.0:4420</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">namePrefix</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">nameSuffix</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">shareStrategy</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;nvmetCli&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">shareStrategyNvmetCli</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">nvmetcliPath</span><span class="p">:</span><span class="w"> </span><span class="l">nvmetcli</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">configIsImportedFilePath</span><span class="p">:</span><span class="w"> </span><span class="l">/var/run/nvmet-config-loaded</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">configPath</span><span class="p">:</span><span class="w"> </span><span class="l">/etc/nvmet/config.json</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">basename</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;nqn.2003-01.org.linux-nvme&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">ports</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span>- <span class="s2">&#34;1&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">subsystem</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">attributes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">allow_any_host</span><span class="p">:</span><span class="w"> </span><span class="m">1</span><span class="w">
</span></span></span></code></pre></div><h1 id="what-about-other-options">What About Other Options?</h1>
<h2 id="what-about-local-storage">What About Local Storage?</h2>
<p>This fails to meet one of my main requirements: being able to mount a persistent volume from any node. The whole point of Kubernetes, at least for me, is the ability to take a node offline with no actual downtime of any services. If a pod is connected to a PVC on a certain node and I need to move that pod to another node, the new node has to reconnect to the existing persistent volume, and local storage doesn&rsquo;t allow for that.</p>
<h2 id="what-about-proxmox-csi-plugin">What About proxmox-csi-plugin?</h2>
<p>I also looked at the <a href="https://github.com/sergelogvinov/proxmox-csi-plugin">proxmox-csi-plugin</a>, but that has the same problem as with node local storage so doesn&rsquo;t fit my requirements.</p>
<h2 id="what-about-rancher-longhorn">What About Rancher Longhorn?</h2>
<p>Longhorn is nice and easy. It uses either NFS or iSCSI. The Talos team recommends against both NFS and iSCSI (although either can be used). I would suggest Longhorn if you need replicated storage in a homelab (say, 3 disks) but don&rsquo;t want to figure out Ceph, etc. It&rsquo;s straightforward and well supported. As for downsides, I don&rsquo;t have firsthand experience; I hear it&rsquo;s great for usability, but some users complain about reliability.</p>
<p>It&rsquo;s also a little more complicated to set up specifically with Talos, but Longhorn has dedicated instructions for installing on Talos, so that shouldn&rsquo;t be a major blocker.</p>
<p>At some point I might try this out, but for now I&rsquo;m sticking with the simple approach of using TrueNAS Scale with any type of zpool you want, and dynamically provisioning NFS or iSCSI using democratic-csi.</p>
<h2 id="what-about-mayastor">What About Mayastor?</h2>
<p>I tried getting this to work because the Talos team recommends it if you don&rsquo;t want to run a full-blown Ceph cluster, etc. I was also intrigued that it uses NVMe-oF, a newer protocol that is essentially a modern replacement for iSCSI. I wasn&rsquo;t able to get it working, but I learned a bit about it and decided to keep things simpler for the following reasons:</p>
<ul>
<li>I only have a single disk currently, so I don&rsquo;t need any replicated storage</li>
<li>It seems to be more resource intensive than democratic-csi and has more components.</li>
<li>The documentation is kind of hectic. It used to be just Mayastor, but now it&rsquo;s OpenEBS Replicated PV Mayastor or something. When deploying, it&rsquo;s hard to tell whether I&rsquo;m installing other OpenEBS components I don&rsquo;t need, or exactly which PV type I do need. I think you need one type of PV to store data for one of the components even if you are ultimately running the replicated PV type (Mayastor) for your primary cluster storage. I don&rsquo;t know, it was confusing.</li>
</ul>
<p>My failure to get this working is 100% a skill issue, but going back to my requirements I really don&rsquo;t need this for my homelab at this point. I may revisit this in the future.</p>
<h2 id="what-about-ceph">What About Ceph?</h2>
<p>I would need enough disks and resources to run Ceph. It&rsquo;s something I really want to test out and potentially use, although probably way overkill for my homelab. I&rsquo;m currently running 2 Minisforum MS-01 servers and planning on getting a third to do a true Proxmox cluster and replace my old 2U power hungry server. At that point, I might actually give Ceph a shot (trying both the Proxmox Ceph installation and also Rook/Ceph on top of Kubernetes). This would solve for truly HA storage, plus meet all other requirements I have, assuming I don&rsquo;t add a requirement for low resource utilization just to run storage.</p>
]]></content:encoded>
    </item>
    
  </channel>
</rss>
