K8s: Moved!

Today, I got up fairly early… earlier than Duncan, who apparently stayed up until nearly 4am! So I decided to take apart his desktop while he slept in and change out the fans that were making noise, as I’d bought replacements for them last week. As it turned out, things were just terribly dirty, and after a clean and a re-tightening of one fan, it’s right as rain; I had it all back together before he woke up, to boot.

But that left me wanting to do something other than sit and play Horizon all weekend for the third week in a row, so I decided to make use of the second APU1C4 I bought and configure it as my new Kubernetes control plane (in theory, saving almost 50W of constant power draw).

The first step: upgrading my 1.23 cluster to 1.24, which was actually suspiciously easy and presented no issues at all. Filled with hubris, I configured Alpine on the APU, shifted it over into the rack (I lack a shelf to sit it on, so it’s sitting on my Xserve), and started setting it up.
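
The upgrade itself was just the standard kubeadm procedure on the old Ubuntu control plane, roughly the following (the package versions here are illustrative, not necessarily exactly what I ran):

# On the old Ubuntu control plane; package versions are illustrative
apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.24.0-00 && apt-mark hold kubeadm
kubeadm upgrade plan
kubeadm upgrade apply v1.24.0
# Then kubelet/kubectl, draining and uncordoning nodes as appropriate
apt-mark unhold kubelet kubectl && apt-get install -y kubelet=1.24.0-00 kubectl=1.24.0-00 && apt-mark hold kubelet kubectl
systemctl daemon-reload && systemctl restart kubelet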

I installed kubeadm, kubelet, flannel, and a couple of other things, gave it a reboot, and it mostly came up on the first try. I originally neglected to install the flannel packages, thinking they’d be installed by the kubectl apply line, which meant a good ten minutes of downtime because I didn’t realize it before separating my worker node from the old cluster.
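
For clarity, the kubectl apply line in question is the stock Flannel manifest, something along these lines (the URL has moved around between releases, so take it as indicative):

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml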

The rest of the downtime? I changed the node name of my one worker node from a short hostname to an FQDN, which upset the affinity rules I had in place… and it took far too long to dawn on me why those pods were sitting in “Pending” for so long! Once I changed the node affinity rules to match, everything came right up.
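
For illustration, the affinity rules in question were shaped roughly like this (the names and label value are made up); the fix was just swapping the short hostname for the FQDN the node now registers as:

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - worker1.example.com   # was just "worker1"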

I mostly copied the Alpine Wiki instructions for K8s, but I’ll mirror what I did here anyway:

# Update /etc/apk/repositories:

cat > /etc/apk/repositories <<'EOF'
#/media/sda1/apks
http://dl-cdn.alpinelinux.org/alpine/v3.16/main
http://dl-cdn.alpinelinux.org/alpine/v3.16/community
#http://dl-cdn.alpinelinux.org/alpine/edge/main
http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing
EOF

apk update
apk add kubeadm containerd kubelet
apk add flannel flannel-contrib-cni cni-plugin-flannel cni-plugins

# "Fix flannel" indeed.
ln -s /usr/libexec/cni/flannel-amd64 /usr/libexec/cni/flannel
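# (the CNI config asks for a plugin named "flannel", but Alpine's package ships the binary as flannel-amd64, hence the symlink)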

# Enable the services
rc-update add kubelet default
rc-update add containerd
service containerd start

# Kernel things
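# (IP forwarding plus br_netfilter so bridged pod traffic passes through iptables; kube-proxy and flannel rely on both)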
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "br_netfilter" > /etc/modules-load.d/k8s.conf
modprobe br_netfilter

# Hit the go button.
kubeadm init --pod-network-cidr=10.244.0.0/16

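# Run on the worker node, using the token and CA cert hash that kubeadm init printed: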
kubeadm join k8s-control:6443 --token $TOKEN \
	--discovery-token-ca-cert-hash sha256:$HASH --cri-socket=/run/containerd/containerd.sock

Note that I did not install docker et al.

I then had to install flannel and metallb, both per their instructions, add a couple of secrets back, and finally spin up each of my services. I rebooted the APU afterwards to make sure everything still came up correctly, shut down the old Ubuntu control plane, and I’m all set.
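
For what it’s worth, the MetalLB part of that is just the stock manifest; something like the below, with the version being a guess at whatever was current at the time:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.4/config/manifests/metallb-native.yaml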

Oh, it’s also worth noting: I needed both clusters in my kube config at once, but it looked like it’d be ambiguous if both used the kube-admin@kubernetes identifier. So I changed my cluster name to something sensible. As for kube-admin, it can literally be whatever the fuck you want; as near as I can tell it doesn’t depend on the user existing on the K8s side, the only thing that matters is that the key+cert match what’s there. It was trivial to merge the two YAML files, and now I can switch contexts between both clusters. I only actually needed this for about a minute anyway, but it’s good to know.
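
For anyone wanting to do the same, the merge itself is basically the below (paths and context names are made up):

# Paths and context names are illustrative
KUBECONFIG=~/.kube/config-old:~/.kube/config-apu \
    kubectl config view --flatten > ~/.kube/config.merged
mv ~/.kube/config.merged ~/.kube/config
kubectl config get-contexts
kubectl config use-context kubernetes-admin@apu-cluster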
