My private git repos are a mess when it comes to secrets. I’ve wanted to implement HashiCorp Vault for a while now; it will let me start removing secrets from plain-text files in git. Vault stores everything in a central location and can, in the future, even generate secrets dynamically.
In this one I’ll go over how I set up Vault Secrets Operator (VSO) to sync Vault secrets to Kubernetes.
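As a taste of what that looks like, here's a minimal sketch of the kind of custom resource VSO reconciles. The names, mount, and path are hypothetical, and it assumes a VaultAuth resource is already in place:

```yaml
# Hypothetical example: sync a kv-v2 secret into a Kubernetes Secret.
# Assumes a VaultAuth resource named "vault-auth" already exists.
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: my-app-secret
  namespace: my-app
spec:
  vaultAuthRef: vault-auth   # how VSO authenticates to Vault
  mount: secret              # kv-v2 secrets engine mount
  type: kv-v2
  path: my-app/config        # secret path within the mount
  refreshAfter: 60s          # re-sync interval
  destination:
    name: my-app-secret      # Kubernetes Secret VSO creates and updates
    create: true
```

From there, VSO keeps the Kubernetes Secret in sync whenever the value changes in Vault, so the application never needs to talk to Vault directly.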
I’ve been using StepCA in the GDC for a little while now; you can see how I set it up here. While StepCA was great, I’ve decided to step away from it for a couple of reasons.
First, StepCA was falling over way too often, and I’d have to restart the container and then re-issue all the expired certs. Second, I wanted to implement HashiCorp Vault for application secrets anyway, and since Vault has PKI capability it just made sense to consolidate the two.
This year marked another leap forward for my homelab, with milestones in automation, storage management, and energy efficiency.
Highlights from 2024

Relocating the Server Rack

One major change was moving the entire server rack into the garage. The main driver for this was noise: I had a loud power supply in the TrueNAS build and an IBM server that I wanted to use, but it was simply too loud inside.
I’ll be honest: if disaster struck I’d lose a lot of data, because I don’t have proper backups in place. This is part 1 of my backup journey; in this one I’ll tackle virtual machine backups.
Firstly, I don’t really care if something happens to a VM itself. Everything exists in IaC and can be quickly rebuilt: most applications either run in Kubernetes or have an Ansible playbook to spin them back up.
For a while now I’ve been looking to implement some kind of observability platform. I already use a combination of Loki and Grafana, so building on Grafana is preferred.
Some of the goals for this project were:
- ingest data from Proxmox
- get data from guest VMs
- create a dashboard that shows overall performance and can be drilled down

After investigating a few tools, I’ve landed on using:
I’ve been building the infrastructure in my homelab with Terraform for a while now. One thing that has always been a pain is finding an IP address that’s not in use and then creating a DNS record in pfSense.
To solve the IP address problem I’ve decided to use Netbox. This will allow me to plan out the network layout and get available IPs using the API.
I’ve decided the first step is to create a Terraform module; this will simplify the code. A rough sketch of the idea follows.
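As a sketch of the direction (assuming the community e-breuninger/netbox Terraform provider; the variable names are my own placeholders), the module can ask Netbox for the next free address in a prefix and reserve it in one step:

```hcl
terraform {
  required_providers {
    netbox = {
      source = "e-breuninger/netbox"
    }
  }
}

variable "prefix_id" {
  type        = number
  description = "Netbox ID of the prefix to allocate from"
}

variable "dns_name" {
  type        = string
  description = "DNS name to record against the allocated address"
}

# Ask Netbox for the next available IP in the prefix and reserve it
resource "netbox_available_ip_address" "this" {
  prefix_id = var.prefix_id
  status    = "active"
  dns_name  = var.dns_name
}

output "ip_address" {
  value = netbox_available_ip_address.this.ip_address
}
```

The output can then feed straight into the VM resource, so every machine gets a known-free IP without me checking spreadsheets.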
I run multiple applications in my homelab: some in Docker containers, some in Kubernetes, and some in VMs. Most applications support TLS or have it enabled by default, but they use self-signed certificates, which means I get a warning when loading the page. For external servers I’m using Let’s Encrypt, which is great but not designed for internal use. In this post I’ll implement a self-hosted Certificate Authority.
In my last post I mentioned I wanted to add some better monitoring to my lab. I’ve been using Monit for a while but was getting annoyed by constant alerts. I remembered hearing about “fatigue filters” in Sensu, so I wanted to give them a go.
Sensu allows you to define everything in YAML or JSON, which is ideal given I want to put everything in version control and deploy it using CI/CD.
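To illustrate, here's roughly what that looks like as Sensu resources, assuming the community fatigue-check-filter asset; the check name and thresholds below are placeholders:

```yaml
# Sketch of a fatigue filter built on the fatigue-check-filter asset.
---
type: EventFilter
api_version: core/v2
metadata:
  name: fatigue_check
spec:
  action: allow
  runtime_assets:
    - fatigue-check-filter
  expressions:
    - fatigue_check(event)
---
# Hypothetical check: alert on the 3rd consecutive failure, then hourly.
type: CheckConfig
api_version: core/v2
metadata:
  name: check-disk
  annotations:
    fatigue_check/occurrences: "3"
    fatigue_check/interval: "3600"
spec:
  command: check-disk.rb
  interval: 60
  publish: true
  subscriptions:
    - system
  handlers:
    - alert
```

Adding `fatigue_check` to a handler's `filters` list is what actually suppresses the repeat alerts.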
Learning Kubernetes has always been something I wanted to do, and a recent job opportunity brought it back onto my radar. Hosted Kubernetes is expensive, especially if it’s just for learning. In this post I’ll walk through how I set up a three-node cluster in Proxmox. By no means is this perfect; I created it to learn. As the setup evolves and I learn more about best practices, I’ll write new posts.
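As a rough sketch of the shape this takes (assuming a kubeadm bootstrap; the hostname, version, and pod CIDR below are placeholders, not necessarily what I ran), the control-plane config looks something like this:

```yaml
# Sketch of a kubeadm config for a small lab cluster. Hostname,
# version, and CIDR are placeholder values.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
controlPlaneEndpoint: "k8s-cp-01:6443"
networking:
  podSubnet: "10.244.0.0/16"   # matches the default Flannel CIDR
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
```

From there it's `kubeadm init --config` on the first node and `kubeadm join` on the other two.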
I was having some nasty performance issues with some of the applications I host in my homelab. Looking at the logs, the applications were freezing up because the SQLite database backend was locking. Some research revealed that this was likely because my persistent storage backend was NFS.
As I’m not a hyperscaler, I didn’t have any other options for storage backends until I found Longhorn.
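For context, here's roughly what a Longhorn-backed StorageClass looks like; the parameter values are illustrative rather than my exact settings:

```yaml
# Illustrative Longhorn StorageClass; parameter values are examples.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"        # copies kept across nodes
  staleReplicaTimeout: "30"    # minutes before a failed replica is purged
  fsType: "ext4"
```

PVCs created against this class get replicated block volumes on the nodes themselves instead of files on an NFS share, which is exactly what SQLite wants.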