Episode 0x06: Persistent Storage Using NFS
NOTE: Many commands in this post make use of specific constants tied to my own setup. Make sure to tailor these to your own needs. These examples should serve as a guide, not as direct instructions to copy and paste.
NOTE: Check out the final code in the homelab repo on my GitHub account.
Introduction
In our previous episode, we identified a major issue in our cluster: the absence of a persistent volume claim mechanism. Our nodes rely solely on their local storage, which is barely sufficient for the applications running on our cluster. Therefore, our objective in this post is to set up a Network File System (NFS) accessible by our cluster. Then, we’ll use the nfs-subdir-external-provisioner to create a storage class, allowing our cluster to claim storage from the NFS and persist data there.
While the NFS subdir provisioner has some limitations—such as not enforcing storage limits set in the PersistentVolumeClaims (PVCs)—it is easy to set up and effective. Alternatives like setting up a CEPH cluster, though more robust, require significantly more effort and more powerful hardware.
Installation
Reuse Edge VM
To create a network file system, we need a virtual machine. Given that we already have the Edge VM, we can reuse it. Follow these steps:
Log into your Proxmox dashboard and add a second disk to the Edge VM. This disk will serve as the primary storage space for our NFS. I’ve allocated 600GB to this disk, but I recommend allocating as much space as you can. Here are the updated hardware specs for my Edge VM:
| Device | Description |
|---|---|
| cpu | socket = 1, core = 1 (host) |
| ram | 256 Mi |
| disk 0 | 2 Gi |
| disk 1 | 600 Gi |
| net 0 | bridge to vmbr0 |
| net 1 | bridge to vmbr0, tag = 100 |
Restart your machine and ensure the second disk is recognized in the VM by running the following command:
lsblk
You should see output similar to this:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 2G 0 disk
├─sda1 8:1 0 1.9G 0 part /
├─sda14 8:14 0 3M 0 part
└─sda15 8:15 0 124M 0 part /boot/efi
sr0 11:0 1 4M 0 rom
vda 254:0 0 600G 0 disk
Create a partition on the new disk (your device name may differ from mine):
fdisk /dev/vda
Follow the prompts to create a primary partition. After completing the steps, check the disk partitions again with lsblk:
sda 8:0 0 2G 0 disk
├─sda1 8:1 0 1.9G 0 part /
├─sda14 8:14 0 3M 0 part
└─sda15 8:15 0 124M 0 part /boot/efi
sr0 11:0 1 4M 0 rom
vda 254:0 0 600G 0 disk
└─vda1 254:1 0 600G 0 part
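Note that fdisk only writes the partition table; the new partition still needs a filesystem before it can be mounted. As a sketch, assuming ext4 and the same device names as above:
mkfs.ext4 /dev/vda1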
Mount the partition to your filesystem:
mkdir -p /mnt/vda1
mount /dev/vda1 /mnt/vda1
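If you want the mount to persist across reboots, you can also add an entry to /etc/fstab, something like the following (assuming the partition and mount point above, and an ext4 filesystem):
/dev/vda1 /mnt/vda1 ext4 defaults 0 2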
Create a subdirectory called nfs inside your mounted drive and set permissions:
mkdir -p /mnt/vda1/nfs
chmod 777 /mnt/vda1/nfs
Also, make sure to run
chown -R 999:999 /mnt/vda1/nfs/
so that pods can change the ownership. This is particularly important for Postgres, for example, which doesn't only write data to mounted volumes but also tries to take ownership of the directory itself.
NFS
Next, we will configure the NFS server. Install and enable the NFS kernel server:
apt install nfs-kernel-server
systemctl start nfs-kernel-server
systemctl enable nfs-kernel-server
systemctl status nfs-kernel-server
Edit the /etc/exports file and add the following line to it:
/mnt/vda1/nfs 192.168.0.0/24(rw,sync,no_subtree_check) 10.100.0.0/24(rw,sync,no_subtree_check,insecure,no_root_squash)
This configuration makes the NFS share available on two networks: your private LAN (192.168.0.0/24) for desktop access, and the Kubernetes nodes' network (10.100.0.0/24). Note that the Edge VM has two network interfaces, one on each subnet.
Next, export the NFS:
exportfs -a
exportfs -v
To ensure everything is working correctly, try mounting the NFS from your desktop. I’ll leave this as an exercise for the reader, but you should also repeat the process on your cluster nodes to confirm accessibility.
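For reference, a quick test from a Linux client could look like the sketch below (192.168.0.50 is a placeholder for your Edge VM's LAN address, and the client needs the nfs-common package installed):
mkdir -p /mnt/nfs-test
mount -t nfs 192.168.0.50:/mnt/vda1/nfs /mnt/nfs-test
touch /mnt/nfs-test/hello && ls -l /mnt/nfs-test
umount /mnt/nfs-test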
NFS Subdir Provisioner
With the NFS server set up and accessible, we’ll now install the NFS subdir provisioner using Helm. On your desktop, add the Helm repository and install the NFS subdir provisioner:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=[your-nfs-server-address] \
--set nfs.path=/mnt/vda1/nfs/kube
Replace [your-nfs-server-address] with the address of your Edge VM (from the 10.100.0.0/24 subnet) and ensure that the path to the subdirectory (/mnt/vda1/nfs/kube) exists. Also, set appropriate permissions on the kube subdirectory:
mkdir -p /mnt/vda1/nfs/kube
chown nobody:nogroup /mnt/vda1/nfs/kube
chmod 777 /mnt/vda1/nfs/kube
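Before moving on, you can sanity-check that the provisioner pod came up:
kubectl get pods | grep nfs-subdir-external-provisioner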
Next, we need to make sure it is set as the default storage class:
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
and verify the storage class:
kubectl get storageclass
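You should see nfs-client listed and annotated as the default. The exact columns depend on your Kubernetes version, but the output will look roughly like this (the provisioner name below is the chart's default):
NAME                   PROVISIONER                                      RECLAIMPOLICY   VOLUMEBINDINGMODE   AGE
nfs-client (default)   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           1m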
That’s it! We’ve successfully addressed the persistent storage issue for our cluster.
Test the storage class
Let’s test the storage class with a sample deployment:
apiVersion: v1
kind: Namespace
metadata:
  name: test-storage
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: test-storage
  name: nginx-pvc
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: test-storage
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-html
              mountPath: /usr/share/nginx/html
      volumes:
        - name: nginx-html
          persistentVolumeClaim:
            claimName: nginx-pvc
---
apiVersion: v1
kind: Service
metadata:
  namespace: test-storage
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
Apply the above configuration:
kubectl apply -f [your-file-name].yaml
Verify that the PVC is claimed:
kubectl get pvc -n test-storage
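If provisioning works, the PVC should show up as Bound, roughly like this (the volume name is generated and will differ):
NAME        STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pvc-...   1Gi        RWX            nfs-client     10s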
Mount the NFS on your host and navigate to the subdirectory corresponding to the claim. Create an index.html file, forward the port to the nginx service, and visit it in your browser. You should see your index.html file being served.
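As a rough sketch of that test (the subdirectory name follows the provisioner's ${namespace}-${pvcName}-${pvName} naming convention, so adjust the path to whatever you actually see on the share; the server address placeholder is the same as before):
mount -t nfs [your-nfs-server-address]:/mnt/vda1/nfs/kube /mnt/nfs-test
echo "Hello from NFS" > /mnt/nfs-test/test-storage-nginx-pvc-pvc-.../index.html
kubectl port-forward -n test-storage svc/nginx-service 8080:80
# then open http://localhost:8080 in your browser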
Clean up after verifying your setup:
kubectl delete namespace test-storage
And voilà! Stay tuned for the next episode! 🚀