How to install Kubernetes onto physical machines for a home lab

On each machine: Install Ubuntu Server LTS 24.04

Ensure you can SSH into it and enable passwordless sudo:

echo "$USER ALL=(ALL:ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/$USER

Passwordless sudo makes it easy to run commands on all machines in parallel.
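For example, a minimal loop (using the optiplex1-3 hostnames from the node list at the end of this post) could look like:

for host in optiplex1 optiplex2 optiplex3; do
    ssh "$host" 'sudo apt-get update' &
done
wait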

On each machine: Install kubeadm

Based on Bootstrapping clusters with kubeadm

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo swapoff -a
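Note that swapoff -a only disables swap until the next reboot. Assuming swap is set up via /etc/fstab (the Ubuntu Server default), you can make this permanent by commenting out the swap entry:

sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab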

On each machine: Install containerd

Kubernetes has dropped support for dockerd as the container runtime (the dockershim was removed in v1.24). So we’ll use containerd directly, based on Anthony Nocentino’s blog: Installing and Configuring containerd as a Kubernetes Container Runtime

Configure the required kernel modules:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
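You can verify that both modules are loaded:

lsmod | grep overlay
lsmod | grep br_netfilter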

Configure the required sysctl settings and make them persist across reboots:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
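The files in /etc/sysctl.d/ are only read at boot, so apply the settings immediately without rebooting:

sudo sysctl --system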

Install containerd packages

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update && sudo apt-get install -y containerd.io

Create a containerd configuration file

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

Set the cgroup driver to systemd. kubeadm configures the kubelet to use the systemd cgroup driver, while containerd defaults to cgroupfs; both must use the same one:

sudo sed -i 's/            SystemdCgroup = false/            SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
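To confirm the change took effect and containerd came back up:

grep SystemdCgroup /etc/containerd/config.toml
systemctl is-active containerd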

Only on the first (master) machine

Initialize the K8s cluster. Save the output somewhere; you’ll need the kubeadm join ... part later.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.30.2
[preflight] Running pre-flight checks
...
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0629 19:20:06.570522   14350 checks.go:844] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.9" as the CRI sandbox image.
...
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.178.171:6443 --token v1flk8.wy9xyikw6kosevps \
        --discovery-token-ca-cert-hash sha256:e79a8516a0990fa232b6dcde15ed951ffe46880854fe1169ceb3b909d82fff00

On each machine: Follow the recommendation of kubeadm to update the sandbox image.

Use a text editor to replace sandbox_image = "registry.k8s.io/pause:3.8" with sandbox_image = "registry.k8s.io/pause:3.9" in /etc/containerd/config.toml.
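If you’d rather not open an editor on every machine, the same edit can be made with sed, mirroring the SystemdCgroup change above:

sudo sed -i 's|sandbox_image = "registry.k8s.io/pause:3.8"|sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml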

Then restart containerd:

sudo systemctl restart containerd.service

On the master node: Ensure kubectl knows which cluster you’re working with

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
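Verify that kubectl can now reach the cluster:

kubectl cluster-info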

On each worker machine: join it to the cluster:

sudo kubeadm join 192.168.178.171:6443 \
    --token v1flk8.wy9xyikw6kosevps \
    --discovery-token-ca-cert-hash sha256:e79a8516a0990fa232b6dcde15ed951ffe46880854fe1169ceb3b909d82fff00
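Bootstrap tokens expire after 24 hours by default. If the join fails because the token is no longer valid, generate a fresh join command on the master:

sudo kubeadm token create --print-join-command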

On the master node: Configure the pod network

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
kubectl apply -f kube-flannel.yml
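Flannel’s manifest defaults to the 10.244.0.0/16 pod CIDR, which is why that value was passed to kubeadm init earlier. You can check that a flannel pod is running on each node (recent manifests use the kube-flannel namespace):

kubectl -n kube-flannel get pods -o wide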

Installation is finished; check the status:

kubectl get nodes

should give output like:

NAME        STATUS   ROLES           AGE     VERSION
optiplex1   Ready    control-plane   2d23h   v1.30.2
optiplex2   Ready    <none>          2d23h   v1.30.2
optiplex3   Ready    <none>          2d23h   v1.30.2
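As a quick smoke test, you can run a throwaway deployment (nginx here is just an arbitrary example image) and watch it get scheduled onto one of the workers:

kubectl create deployment hello --image=nginx
kubectl get pods -o wide
kubectl delete deployment hello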