Building a home Kubernetes cluster

Learning Kubernetes has always been something I wanted to do, and a recent job opportunity brought it back onto my radar. Hosted Kubernetes is expensive, especially if it’s just for learning. In this post I’ll walk through how I set up a 3-node cluster in Proxmox. By no means is this perfect; I created it to learn. As the setup evolves and I learn more about best practices, I’ll write new posts.

The infrastructure

I have an old desktop PC running Proxmox, so I set up some VMs there. There are 3 nodes, all provisioned with Terraform and running Ubuntu cloud images. Each VM has 2 CPUs and 2 GB of memory.

Installing everything

Add the Kubernetes apt repo, along with its signing key so apt can verify the packages:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list \
  && sudo apt-get update

Update everything and install Docker and the Kubernetes packages:

sudo apt update && sudo apt upgrade -y && sudo apt install -y docker.io kubelet kubeadm kubectl kubernetes-cni
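
One gotcha: kubeadm refuses to run while swap is enabled. Ubuntu cloud images usually ship without swap, but if yours has it, turn it off (and remove the swap entry from /etc/fstab so it stays off across reboots):

sudo swapoff -a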

On my first attempt at starting everything up, I hit a cgroup driver mismatch between Docker and the kubelet. I had to create /etc/docker/daemon.json with the contents below to force Docker to use the systemd cgroup driver:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
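
Docker only picks up daemon.json on a restart, so after creating the file:

sudo systemctl restart docker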

Initialise the master node. kubeadm prints a join command at the end of its output; copy it, as the workers will need it shortly:

sudo kubeadm init --apiserver-advertise-address=192.168.20.151 --kubernetes-version v1.23.4 --pod-network-cidr=10.244.0.0/16
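
kubeadm also prints instructions for pointing kubectl at the new cluster. On the master they boil down to:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config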

Set up pod networking; I used Flannel. The --pod-network-cidr passed to kubeadm init above matches Flannel's default pod network of 10.244.0.0/16:

kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
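
It's worth checking that the Flannel pods come up before joining the workers. Depending on the manifest version they land in either kube-system or a dedicated kube-flannel namespace, so listing across all namespaces is the easy way:

kubectl get pods -A | grep flannel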

Add the worker nodes by pasting the join command into each of the other servers:

sudo kubeadm join <ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
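
If you lose the join command, or the token expires (they only last 24 hours by default), you can generate a fresh one on the master:

kubeadm token create --print-join-command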

Check if the nodes are showing up now

kubectl get nodes
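
Once Flannel is running, all three nodes should report Ready. With placeholder node names, the output looks roughly like this:

NAME     STATUS   ROLES                  AGE   VERSION
k8s-01   Ready    control-plane,master   12m   v1.23.4
k8s-02   Ready    <none>                 3m    v1.23.4
k8s-03   Ready    <none>                 3m    v1.23.4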

Setting up a load balancer

At this point I had a working cluster; I just needed to get a load balancer working. Most cloud providers offer their own load balancers, but on a bare-metal cluster we can use MetalLB.

Install MetalLB:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
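
Before applying any configuration, check that the controller and speaker pods are running:

kubectl get pods -n metallb-system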

I wanted my load balancers to use IP addresses within my network. To do this I deployed the ConfigMap below, which gives MetalLB a layer 2 pool of addresses to allocate from:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.20.190-192.168.20.200
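
Save it to a file (I'm calling it metallb-config.yaml here; the name is arbitrary) and apply it:

kubectl apply -f metallb-config.yaml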

Running a pod

To test everything was working so far, I created an nginx pod. I wrote a manifest file describing a Deployment and a Service. The Deployment runs an nginx container listening on port 80 and ensures there are 3 replicas of the pod. To expose nginx to the network I’m using a Service of type LoadBalancer, which will expose nginx on one of the IP addresses defined earlier.

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
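
Save the manifest (nginx.yaml here; again, the name is arbitrary) and apply it:

kubectl apply -f nginx.yaml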

If we run kubectl get svc, we can see the external IP address MetalLB assigned to the service.

NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP        PORT(S)           AGE
nginx-service        LoadBalancer   10.109.249.248   192.168.20.191     80:32413/TCP      10m

To test it’s working, visit the IP in a browser or use curl:

curl http://192.168.20.191:80
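
That should return the stock nginx welcome page. A quick scriptable check is to grep for the page title:

curl -s http://192.168.20.191 | grep '<title>'
<title>Welcome to nginx!</title>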

Success!

I’m slowly migrating my containers running on other VMs over to this cluster. I’m keeping everything organised in a git repo so that it all stays versioned and can be deployed automatically. I’m investigating ways to integrate this with a CI/CD solution so everything can be validated and deployed on commit. I also know I need to learn more about the security side of things; next on my list is RBAC and centralised logging.