Setting up a small k8s test cluster locally instead of minikube

Yiğit İrez
7 min read · Apr 23, 2021


Edits: Added Static IP configurations, fixed IPs in related parts

Normally, when trying to understand concepts related to k8s, minikube is enough most of the time and I have used it a lot. It is actually super simple to use: download the exe, install it, then run minikube start from cmd. However, when I wanted to try out services or volumes, I generally had to fight with minikube (or it hates me, I don’t know). That’s why I decided to create a tiny cluster on my own machine and learn a bit more in the process.

Before we start, I want to state that there are practical environments like this on Katacoda, with a proper k8s cluster already installed. Understandably though, those environments are time-limited, sometimes the whole site goes down and sometimes the nodes suddenly go down, so we need our own cluster for more permanent things.

Requirements: VirtualBox, a CentOS ISO, and enough disk (around 50 GB) and RAM/CPU to run three VMs.

After installing VirtualBox, we start creating a new CentOS VM. The path where you put the Machine Folder data should have around 50 GB available. As we progress, we get to understand why Infrastructure as Code is so popular right now.

Let’s give 2 GB+ of RAM and 2+ CPUs and follow the steps from here to create three VMs in total, like below: 2 for nodes, 1 for the master.
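If you prefer the command line over clicking through the wizard, something like the following creates an equivalent VM. This is a rough sketch using VBoxManage; the VM name, ISO filename and sizes are just illustrative, and the same steps repeat for node01 and node02.

# create and register the VM (run on the host, adjust names/paths to taste)
VBoxManage createvm --name master --ostype RedHat_64 --register
VBoxManage modifyvm master --memory 2048 --cpus 2 --nic1 nat
# give it a disk and attach the CentOS installer ISO
VBoxManage createmedium disk --filename master.vdi --size 20480
VBoxManage storagectl master --name SATA --add sata
VBoxManage storageattach master --storagectl SATA --port 0 --device 0 --type hdd --medium master.vdi
VBoxManage storageattach master --storagectl SATA --port 1 --device 0 --type dvddrive --medium CentOS-7-x86_64-Minimal.iso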

I installed every node and the master with the same options, set a password for root, and finally we’re on our way.

<optional>

At this point, if you want a more permanent system, make the IPs of your nodes static.

Step 1: check the host-only DHCP server in VirtualBox; this gives you your IP range.
Step 2: add another (host-only) network adapter to each node, as in the commands below.
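On the host, the VirtualBox CLI can show the host-only interfaces and their DHCP ranges, and attach the extra adapter while the VMs are powered off. A sketch; vboxnet0 is the usual default interface name, yours may differ.

# see which host-only networks exist and what range their DHCP server hands out
VBoxManage list hostonlyifs
VBoxManage list dhcpservers
# add a second, host-only adapter to each VM (repeat for node02 and master)
VBoxManage modifyvm node01 --nic2 hostonly --hostonlyadapter2 vboxnet0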

When you log in to your nodes and run ip addr, you should see something extra like below. The critical bits are the interface name (enp0s8) and its address.

3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:3d:fa:bc brd ff:ff:ff:ff:ff:ff
inet 192.168.56.106/24 brd 192.168.56.255 scope global noprefixroute enp0s8
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe3d:fabc/64 scope link
valid_lft forever preferred_lft forever

Now run the following command to set up your static IP, substituting the interface name and address you saw above. This must be done on every node.

cat <<END >> /etc/sysconfig/network-scripts/ifcfg-enp0s8
DEVICE=enp0s8
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.56.106
NETMASK=255.255.255.0
END
systemctl restart network
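A quick check that the address survived the restart. 192.168.56.1 is usually the host’s own address on the default VirtualBox host-only network; adjust if your range differs.

ip -4 addr show enp0s8
ping -c 2 192.168.56.1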

</optional>

Let’s do the following so we don’t mix up host names later, and also so kubeadm init doesn’t fail.

#in master
sudo hostnamectl set-hostname master
#in node01
sudo hostnamectl set-hostname node01
#in node02
sudo hostnamectl set-hostname node02

So our VMs are ready. The steps we have to take for some k8s’ing are as follows:

Assume everything will be done with sudo or as root unless otherwise stated. I would also suggest using the multi-execution mode of MobaXterm, since most of the commands need to be run on all nodes.

  • Install the kubeadm toolbox (detailed steps, as always, are on kubernetes.io; my point with this doc is a short list of how-tos and concentrated whys)
  • Create a cluster with kubeadm
  • ???
  • Test freely

Install kubeadm toolbox

We need the br_netfilter module loaded first; it allows VxLAN traffic between pods across all nodes. Check whether it is already loaded by running the command below.

lsmod | grep br_netfilter

If we get nothing, then br_netfilter is not loaded, so run the following.

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf

Then we need the lines below in /etc/sysctl.d/k8s.conf. Since you cannot easily echo multi-line content, we use cat with a heredoc.

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Finally, run the command below to reload the .conf variables.

sudo sysctl --system
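A quick way to confirm the settings took effect; both values should come back as 1.

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables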

K8s hates swap (the kubelet won’t even start with swap enabled, by default), so we need to disable it and keep it disabled like below.

#disables swap
sudo swapoff -a
#Comments out all swap entries in the /etc/fstab file
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
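To double-check, swapon should print nothing and free should report 0 swap.

swapon -s
free -h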

Now to the firewall setup. Normally I would just disable the firewall and move on, but I want to see it as it should be, so run the commands below.

In master:

sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=2379-2380/tcp --add-port=10250-10252/tcp --add-port=5743/tcp
sudo firewall-cmd --reload

In the nodes:

sudo firewall-cmd --permanent --add-port=10250/tcp --add-port=30000-32767/tcp --add-port=5743/tcp
sudo firewall-cmd --reload
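You can confirm the rules are in place on each machine with:

sudo firewall-cmd --list-ports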

At this point in the documentation we notice we haven’t actually installed anything to run the pods on. Pods need a container runtime (Docker, rkt or CRI-O), and since we only want an environment to test k8s, let’s go ahead with Docker.

We will update our yum repo in case we already have an old Docker repo (we don’t), remove old versions we might have had (we don’t), and install Docker, either manually or with the ready-made install script.

sudo yum check-update
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
#doing it manually
sudo yum install -y yum-utils
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io
#if we want a specific version;
#yum list docker-ce --showduplicates | sort -r
#OR
#doing it with the ready-made script. THIS IS WRONG in an enterprise env, but great for quick and dirty
curl -fsSL https://get.docker.com/ | sh

And then finally, start Docker and make sure it keeps running (enable it at boot).

sudo systemctl start docker
sudo systemctl status docker
sudo systemctl enable docker
#check if you can run the image below perhaps, but don't forget to delete the image and the container after
#sudo docker run hello-world
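If you did run hello-world, here is one way to clean up afterwards (the ancestor filter matches containers created from that image):

sudo docker ps -a --filter ancestor=hello-world -q | xargs -r sudo docker rm
sudo docker rmi hello-world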

We have Docker, but we need to make a few adjustments to its settings. The first line sets Docker’s cgroup driver to systemd so it matches what kubelet expects, the second is the default logging driver for Docker, and the third is a safeguard since without it logs will fill everywhere. The overlay2 storage driver is the preferred driver for Linux, as stated in the docs.

sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF

Finally, we finish the Docker setup with the below:

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
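Worth verifying that the daemon.json changes took; the cgroup driver should now report as systemd.

sudo docker info | grep -i "cgroup driver"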

Next up, the kubeadm setup: adding the k8s repo to the yum repo list, setting SELinux to permissive, and finally installing and enabling kubelet, kubeadm and kubectl.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

sudo systemctl enable --now kubelet

Setting SELinux to permissive is needed to allow containers to access the host filesystem (which pod networks need).
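A quick sanity check that the tooling is in place. The kubelet restarting in a loop at this point is normal; it settles once kubeadm init runs.

kubeadm version
kubectl version --client
systemctl status kubelet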

Cluster setup

We need to change the hosts file on all our nodes before going further, otherwise we will get the following errors from kubeadm init.

[WARNING Hostname]: hostname “master” could not be reached
[WARNING Hostname]: hostname “master”: lookup master on 192.168.0.1:53: no such host

We use cat again to add our hosts entries on all nodes.

cat <<EOF | sudo tee -a /etc/hosts
192.168.56.106 master
192.168.56.107 node01
192.168.56.105 node02
EOF
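From each machine, the short names should now resolve and answer:

ping -c 2 master
ping -c 2 node01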

Now we need to run kubeadm init on the master only, so we run the command below. Don’t forget to grab the join command displayed at the very end of the init output; we will need it soon.

kubeadm init  --token-ttl 0 --apiserver-advertise-address=192.168.56.106 --pod-network-cidr=192.168.0.0/16
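If you lose the join command from the output, you can always print a fresh one on the master later (standard kubeadm, nothing specific to this setup):

kubeadm token create --print-join-command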

Great, but if you somehow wrote the advertise address wrong like me, you can reset with kubeadm reset and rerun the init command. However, if you do reset, deleting the $HOME/.kube folder is very important, otherwise you will get cert errors. Once init succeeds, set up kubectl access for your user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

NOTE

If Calico misbehaves after you reset, delete --network-plugin=cni from /var/lib/kubelet/kubeadm-flags.env and then restart the machine.

Then,

ls /etc/cni/net.d/
sudo rm /etc/cni/net.d/10-calico.conflist
sudo rm /etc/cni/net.d/calico-kubeconfig

END NOTE

We use Calico for the overlay network since we can use network policies with it (it also has Istio support, which I want to try out eventually), but there’s flannel, weave, and surely some more.

kubectl apply -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl apply -f https://docs.projectcalico.org/manifests/custom-resources.yaml
watch kubectl get pods -n calico-system

We should see all the calico-system pods eventually reach Running status.

Now let’s join our nodes. Run this only on the nodes:

kubeadm join 192.168.56.106:6443 --token upvy8k.m0....zw \
--discovery-token-ca-cert-hash sha256:55c603acb89849a9305189d....73357

The two nodes have finally joined the master.
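On the master, the node list should now show all three machines, reaching Ready after a minute or so:

kubectl get nodes -o wide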

TEARS OF JOY!!

Though, the new nodes show up without a role label, so let’s fix that quickly. Running this on the master:

kubectl label nodes node01 kubernetes.io/role=worker
kubectl label nodes node02 kubernetes.io/role=lazy-worker # just for the lols
yes, seems about right

So let’s test this, shall we?

kubectl run bbtest --image=busybox --restart=Never -- /bin/sh -c "echo 'Is it done, Yuri?'"
#after the container completes, check the logs
kubectl logs bbtest
https://www.youtube.com/watch?v=Rn6GbXZCt-o
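And clean up the test pod when you’re done:

kubectl delete pod bbtest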

Thanks for reading…

A note: this procedure is probably what triggered sys/devops teams all over the world to move towards the IaC concept. We should do the same cluster again, but with Ansible.


Written by Yiğit İrez

Let’s talk devops, automation and architectures, everyday, all day long. https://www.linkedin.com/in/yigitirez/
