
Building a home Kubernetes cluster

Learning Kubernetes has always been something I wanted to do, and a recent job opportunity brought it back onto my radar. Hosted Kubernetes is expensive, especially if it’s just for learning. In this post I’ll walk through how I set up a 3 node cluster in Proxmox. By no means is this perfect; I created it to learn. As the setup evolves and I learn more about best practices I’ll write new posts.

The infrastructure

I have an old desktop PC running Proxmox, so I set up some VMs there. There are 3 nodes, all created with Terraform and running the Ubuntu cloud image. Each VM has 2 CPUs and 2 GB of memory.

Installing everything

Add the Kubernetes apt repo and its signing key

curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg \
  | sudo apt-key add - \
  && echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list \
  && sudo apt-get update

Update everything and install Docker and the Kubernetes tools

apt update -y && apt upgrade -y && apt install -y docker.io kubelet kubeadm kubectl kubernetes-cni
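One prerequisite not shown above: kubeadm’s preflight checks fail while swap is enabled, so it needs turning off. A general sketch (double-check the sed pattern against your own /etc/fstab before trusting it):

```shell
# Turn swap off immediately; kubeadm's preflight checks fail with swap on
sudo swapoff -a
# Comment out any swap entries in /etc/fstab so swap stays off after a reboot
sudo sed -i '/\sswap\s/ s/^[^#]/#&/' /etc/fstab
```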

On my first attempt at starting everything up I had issues because of a cgroup driver mismatch. I had to create /etc/docker/daemon.json with the below contents to force Docker to use systemd, then restart Docker so the change takes effect

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

sudo systemctl restart docker

Initialise the master node and copy the join command it prints at the end. The pod network CIDR here is flannel’s default.

sudo kubeadm init --apiserver-advertise-address=<master-ip> --kubernetes-version v1.23.4 --pod-network-cidr=10.244.0.0/16
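kubeadm also prints instructions for pointing kubectl at the new cluster; the kubectl commands below assume this has been done. From kubeadm’s own output, run on the master as a regular user:

```shell
# Copy the admin kubeconfig generated by kubeadm so kubectl can reach the cluster
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```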

Set up some pod networking; I used flannel

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

Add the worker nodes by pasting the join command into each of the other servers

kubeadm join <ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

If the token has expired (they only last 24 hours by default), a fresh join command can be printed on the master with kubeadm token create --print-join-command.

Check if the nodes are showing up now

kubectl get nodes

Setting up a load balancer

At this point I had a working cluster; I just needed to get a load balancer working. Most cloud providers have their own load balancer, but with bare metal clusters we can use metallb.

Install metallb

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml

I wanted my load balancers to use IP addresses within my network. To do this I deployed the below config map.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - <start-ip>-<end-ip>

Running a pod

To test everything was working so far, I created an nginx pod. I created a manifest file describing the deployment and service. The deployment will run an nginx container on port 80 and ensure there are 3 replicas of the pod. To expose the nginx container to the network I’m using a service of type LoadBalancer, which will expose nginx using one of the IP addresses defined before.

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80

If we run kubectl get svc, we can see the IP address of the service.

NAME            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
nginx-service   LoadBalancer   <cluster-ip>   <external-ip>   80:32413/TCP   10m

To test it’s working, visit the external IP in a browser or use curl

curl http://<external-ip>

I’m slowly migrating my containers running on other VMs over to this cluster. I’m keeping everything nice and organised in a git repo, so everything stays versioned and can be deployed automatically. I’m investigating ways to integrate this with a CI/CD solution so everything can be validated and deployed on commit. I also know I need to learn more about the security side of things; next on my list is RBAC and centralised logging.