I'm transitioning some of the services hosted on my home server, which previously ran as simple daemons, to workloads on a Kubernetes cluster running on the same machine. Here are some of the workloads I run on this cluster at the moment.
For instance, gallery.wesl.ee points to my deployment of pigallery2 running on this cluster. public-demo.raamen.org resolves to a demo instance of my own barebones file-server which I originally wrote in PHP years ago.
Setup
This is all running on one physical machine, presently sitting on a bookcase in my room; each node in the cluster, along with its resources, is provisioned as a VM under libvirt. I used Arch (btw) for the VM OS, but only because it's simple to set up Kubernetes on and less fuss to get working than NixOS, which is my go-to for my other computers (btw).
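Each VM is created with virt-install; the sketch below is close in spirit to what I run on the hypervisor, but the memory, disk size, and ISO path are illustrative placeholders rather than my real invocation:
# run on the hypervisor; sizes, OS variant, and ISO path are placeholders
virt-install \
    --name kmaster \
    --memory 4096 \
    --vcpus 2 \
    --disk size=40 \
    --os-variant archlinux \
    --network bridge=br0 \
    --network bridge=br1 \
    --cdrom /path/to/archlinux.iso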
There are four different VMs I provisioned under libvirt that carry out the functions of this cluster.
kmaster (10.0.10.5) hosts the control plane
knode-1 (10.0.10.6) is a worker node
db (10.0.10.3) is running an instance of postgres
share (10.0.10.2) is hosting an NFSv4 network share
I run nginx on the hypervisor itself and use this as a bootleg ingress gateway for now. Each of these VMs uses a static IP and is connected to a bridge called br1, which is the same bridge I set up when configuring my Wireguard Point-to-Home-Server Setup. This lets me easily access my cluster directly when I am connected to the Wireguard tunnel. Master and worker nodes are additionally given access to br0, which connects those VMs to the Internet directly, for pulling images from public repositories.
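The "bootleg ingress" is just nginx proxying each hostname to a service exposed on the cluster; a minimal server block might look like the sketch below, where the NodePort 30080 is a placeholder rather than the port I actually expose:
server {
    listen 80;
    server_name gallery.wesl.ee;

    location / {
        # forward to a NodePort service on the worker node (port is illustrative)
        proxy_pass http://10.0.10.6:30080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}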
When actually configuring Kubernetes I used kubeadm to make things easier for myself. In the grand scheme of Kubernetes tooling kubeadm is still quite low-level and affords a lot of control over the cluster. As we'll see, it doesn't even facilitate pod-to-pod communication (we will need a CNI like Flannel for this). Execute the below on the control-plane node kmaster.
root@kmaster ~ > pacman -S kubectl kubeadm kubelet containerd cni-plugins
root@kmaster ~ > echo overlay >> /etc/modules-load.d/k8s.conf
root@kmaster ~ > echo br_netfilter >> /etc/modules-load.d/k8s.conf
root@kmaster ~ > modprobe overlay
root@kmaster ~ > modprobe br_netfilter
root@kmaster ~ > echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.d/k8s.conf
root@kmaster ~ > echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.d/k8s.conf
root@kmaster ~ > echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/k8s.conf
root@kmaster ~ > sysctl -p /etc/sysctl.d/k8s.conf
root@kmaster ~ > systemctl start containerd && systemctl enable containerd
root@kmaster ~ > kubeadm init --cri-socket /run/containerd/containerd.sock \
    --apiserver-advertise-address=10.0.10.5 \
    --pod-network-cidr='10.244.0.0/16'
root@kmaster ~ > systemctl start kubelet && systemctl enable kubelet
Save the command output by kubeadm init, which will need to be run on nodes wishing to join this cluster. Now copy /etc/kubernetes/admin.conf from kmaster to the local machine at ~/.kube/config to administrate this cluster remotely.
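Copying it over and sanity-checking access looks something like this from my local machine (I use scp here, though any means of getting the file across works):
wesl-ee@wonder-pop ~ > scp root@10.0.10.5:/etc/kubernetes/admin.conf ~/.kube/config
wesl-ee@wonder-pop ~ > kubectl get nodes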
I found it necessary to edit /etc/kubernetes/kubelet.env to include the line KUBELET_ARGS=--node-ip=10.0.10.5 for the master and KUBELET_ARGS=--node-ip=10.0.10.6 for the worker node in order for the correct IP address to be advertised; without this I could not open a remote shell to a pod running on a node.
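After editing that file kubelet needs a restart to pick up the flag; the INTERNAL-IP column of kubectl's wide output is an easy way to confirm the right addresses are being advertised:
root@kmaster ~ > systemctl restart kubelet
wesl-ee@wonder-pop ~ > kubectl get nodes -o wide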
On knode-1 I run the same set of commands except for kubeadm init; there I instead run the command printed during kubeadm init to actually join the node to the cluster.
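That join command has roughly the following shape; the token and discovery hash below are placeholders for whatever kubeadm init actually printed:
root@knode-1 ~ > kubeadm join 10.0.10.5:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --cri-socket /run/containerd/containerd.sock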
Finally I use the Flannel CNI to configure a simple VXLAN over which pods will communicate. From the local machine with admin.conf installed to ~/.kube/config run:
wesl-ee@wonder-pop ~ > kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
On the current single-machine setup a VXLAN solution like Flannel is probably not necessary, but I found it to be the simplest to deploy.
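Once Flannel's DaemonSet rolls out the nodes should report Ready; a quick check from the same machine (the manifest above installs into the kube-flannel namespace):
wesl-ee@wonder-pop ~ > kubectl get pods -n kube-flannel
wesl-ee@wonder-pop ~ > kubectl get nodes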
NFS
I make a few NFS shares accessible to the cluster to use as PersistentVolumes.
These are configured on the VM running NFSv4, 10.0.10.2, like so:
# /etc/exports - exports(5) - directories exported to NFS clients
"/srv/samba/public/DCIM Gallery" 10.0.10.0/24(ro,sync,all_squash,anonuid=65534,anongid=65534)
/srv/samba/raamen-demo 10.0.10.0/24(ro,sync,all_squash,anonuid=65534,anongid=65534)
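After editing /etc/exports the entries need to be re-exported, and a quick manual mount from a node (assuming nfs-utils is installed there, and using the same path the PersistentVolume spec below uses) is a decent sanity check:
root@share ~ > exportfs -ra
root@kmaster ~ > mount -t nfs4 -o ro 10.0.10.2:/raamen-demo /mnt
root@kmaster ~ > umount /mnt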
With these NFS shares accessible from the 10.0.10.0/24 subnet which the nodes are networked to, I am able to access this storage in Kubernetes. Here is an example of the spec for one of these shares which I use at public-demo.raamen.org:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: raamen-demo-mnt
  labels:
    type: local
spec:
  storageClassName: raamen-demo-mnt
  capacity:
    storage: 8Gi
  volumeMode: Filesystem
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: "/raamen-demo"
    server: 10.0.10.2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raamen-demo-mnt
spec:
  storageClassName: raamen-demo-mnt
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 8Gi
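The claim then gets mounted into a pod like any other volume; a minimal sketch of a Deployment consuming it read-only might look like this (the name, image, and mount path are illustrative, not my actual spec):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: raamen-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: raamen-demo
  template:
    metadata:
      labels:
        app: raamen-demo
    spec:
      containers:
        - name: raamen-demo
          image: nginx:alpine   # placeholder image
          volumeMounts:
            - name: demo-files
              mountPath: /srv/demo
              readOnly: true
      volumes:
        - name: demo-files
          persistentVolumeClaim:
            claimName: raamen-demo-mnt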
The rest of my Kubernetes spec is stored on Github at wesl-ee/xen-infra. If I ever bother to provision these VMs using Nix or Terraform then those configs will be stored there too; I don't see much reason to complicate the setup like that for now, however, especially when it is all hosted on one machine.
Pitfalls
When recently adding a new node to this relatively old cluster I hit the following error when attempting to join it to the cluster.
this version of kubeadm only supports deploying clusters with the control plane version >= 1.28.0. Current version: v1.27.4
I took it upon myself at 2AM to upgrade the existing cluster instead of simply installing an older version of kubeadm on the new node. I downloaded the v1.28.0 kubeadm and ran this fateful command on my single control-plane node.
[root@kmaster k8s-128]# ./kubeadm upgrade apply v1.28.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.28.0"
[upgrade/versions] Cluster version: v1.27.4
[upgrade/versions] kubeadm version: v1.28.0
[upgrade] Are you sure you want to proceed? [y/N]:
After typing “y” and a few minutes of bated breath I found myself with the pleasant message [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.28.0". Enjoy! Hooray! From here I was able to join my node to the cluster without trouble. After upgrading the other worker node to the latest v1.29.2 as well, I upgraded the control plane from v1.28.0 to v1.29.2 and called it a night! TYBG!
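For anyone following along, the usual kubeadm worker-node upgrade sequence looks roughly like the below; this is the documented flow rather than a reconstruction of my exact 2AM shell history:
wesl-ee@wonder-pop ~ > kubectl drain knode-1 --ignore-daemonsets
root@knode-1 ~ > kubeadm upgrade node
root@knode-1 ~ > pacman -S kubelet kubectl   # pull in matching package versions
root@knode-1 ~ > systemctl restart kubelet
wesl-ee@wonder-pop ~ > kubectl uncordon knode-1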