Setting up a small k8s test cluster locally instead of minikube

Edits: Added Static IP configurations, fixed IPs in related parts

Normally, when trying to understand concepts related to k8s, minikube is enough most of the time, and I have used it a lot. It is actually super simple to use: just download the exe, install it, then run minikube start from cmd. However, when I wanted to try out services or volumes, I generally had to fight with minikube (or it hates me, I don't know). That's why I decided to create a tiny cluster on my own machine and learn a bit more in the process.

Before we start, I want to note that there are ready-made practice environments like Katacoda, with a proper k8s cluster already installed. Understandably though, those environments are time limited; sometimes the whole site goes down, sometimes the nodes suddenly go down, so we need our own cluster for more permanent things.

Requirements: VirtualBox, a CentOS ISO, and optionally MobaXterm (its multi-execution mode will come in handy later).

After installing VirtualBox, we start creating a new CentOS VM. The path where you put the Machine Folder data should have around 50 GB available. As we progress, we get to understand why Infrastructure as Code is so popular right now.

Let's give each VM 2 GB+ of RAM and 2+ CPUs, and follow the steps from here to create 3 VMs in total like below: 2 for nodes, 1 for master.

I installed every node and the master with the following options; set a password for root and we are finally on our way.

<optional>

At this point, if you want a more permanent system, make the IPs of your nodes static.

When you log in to your nodes and run ip addr, you should see something like the output below. The critical bits are the interface name and the inet line.
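(Illustrative output only; your interface name and addresses will differ. The 08:00:27 MAC prefix is VirtualBox's, the rest is placeholder.)

    2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
        link/ether 08:00:27:xx:xx:xx brd ff:ff:ff:ff:ff:ff
        inet 192.168.0.105/24 brd 192.168.0.255 scope global dynamic enp0s3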

Now run the following command to set up your static IP, substituting the interface name and addresses that ip addr showed you. This must be done for every node.
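A minimal sketch using nmcli; enp0s3, the 192.168.0.x addresses, and the gateway here are assumptions, so swap in your own values:

    # assumed: interface/connection enp0s3, LAN 192.168.0.0/24, gateway 192.168.0.1
    nmcli con mod enp0s3 ipv4.method manual \
        ipv4.addresses 192.168.0.10/24 \
        ipv4.gateway 192.168.0.1 \
        ipv4.dns 192.168.0.1
    nmcli con up enp0s3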

</optional>

Let's do the following now so we don't mix up hostnames later, and also so kubeadm init doesn't fail.
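For example (master, node1, node2 are the names I'll assume from here on; any unique names work):

    hostnamectl set-hostname master   # on the master VM
    hostnamectl set-hostname node1    # on the first node
    hostnamectl set-hostname node2    # on the second node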

So our VMs are ready. The steps we have to take for some k8s'ing are as follows:

Assume everything will be done with sudo or root unless otherwise stated. I would also suggest using the multi-execution mode of MobaXterm, since most of the commands need to be run on all nodes.

  • Install the kubeadm toolbox (detailed steps, as always, are on kubernetes.io; my point with this doc is a short list of how-tos and concentrated whys)
  • Create a cluster with kubeadm
  • ???
  • Test freely

Install kubeadm toolbox

We need to load the br_netfilter module first. br_netfilter makes bridged traffic visible to iptables, which the overlay (VxLAN) traffic between pods on different nodes relies on. Check whether it is already loaded by running the command below.
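    lsmod | grep br_netfilter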

If we get nothing, then br_netfilter is not loaded, so run the following.
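    modprobe br_netfilter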

Then we need the lines below in k8s.conf. Since echoing multi-line cleanly is a pain, we cat a heredoc instead.
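These are the two sysctl settings from the kubeadm install guide:

    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF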

Finally, run the command below to reload the .conf variables.
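    sysctl --system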

K8s hates swap (the kubelet's resource accounting assumes memory is never swapped out), so we need to disable it now and keep it disabled across reboots, like below.
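    swapoff -a
    # comment out the swap line so it stays off after reboots
    sed -i '/ swap / s/^/#/' /etc/fstab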

Now to the firewall setup. Normally I would just disable the firewall and move on, but I want to set it up as it should be, so the commands below should be run.

In master:
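The port list below follows the kubeadm install docs, plus 179/tcp for calico's BGP:

    firewall-cmd --permanent --add-port=6443/tcp        # API server
    firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd
    firewall-cmd --permanent --add-port=10250-10252/tcp # kubelet, scheduler, controller-manager
    firewall-cmd --permanent --add-port=179/tcp         # calico BGP
    firewall-cmd --reload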

In nodes:
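And the worker ports, again per the docs:

    firewall-cmd --permanent --add-port=10250/tcp       # kubelet
    firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort services
    firewall-cmd --permanent --add-port=179/tcp         # calico BGP
    firewall-cmd --reload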

At this point in the documentation we notice we haven't actually installed anything to run the pods on. Pods must run on a container runtime such as Docker, rkt, or CRI-O, and since we only want an environment to test k8s, let's go ahead with Docker.

We will update our yum repo just in case we already have an old docker repo (we don't), delete old versions we might have had (we don't), and get a docker install script.
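Roughly, following the Docker docs (the convenience script also adds the repo for us):

    # remove old versions (harmless if none are installed)
    yum remove -y docker docker-client docker-client-latest docker-common \
                  docker-latest docker-latest-logrotate docker-logrotate docker-engine
    # fetch and run Docker's convenience install script
    curl -fsSL https://get.docker.com -o get-docker.sh
    sh get-docker.sh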

and then finally, we start docker and make sure it keeps running across reboots:
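    systemctl start docker
    systemctl enable docker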

We have docker, but we need to make a few adjustments to its settings. The first line sets the cgroup driver to systemd, so docker and the kubelet agree on who manages cgroups; the second is the default logging driver for Docker; the third is a safeguard, since without it logs will fill everywhere. The storage driver overlay2 is the preferred driver for Linux, as stated in the docs.
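This is the daemon.json block from the kubernetes.io container runtime docs:

    mkdir -p /etc/docker
    cat <<EOF > /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF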

Finally, we finish the docker setup with the below:
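    mkdir -p /etc/systemd/system/docker.service.d
    systemctl daemon-reload
    systemctl restart docker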

Next up, the kubeadm setup: adding the k8s repo to the yum repo list, setting SELinux to permissive, and finally installing and enabling kubelet, kubeadm, and kubectl.

Setting SELinux to permissive allows containers to access the host filesystem, which pod networks need.
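From the kubeadm install guide of the time (the repo URL has since moved, so check kubernetes.io if it 404s):

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    exclude=kubelet kubeadm kubectl
    EOF

    # permissive SELinux, now and after reboots
    setenforce 0
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

    yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    systemctl enable --now kubelet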

Cluster setup

We need to change the hosts file on all our nodes before this step; otherwise kubeadm init will greet us with the following errors.

[WARNING Hostname]: hostname "master" could not be reached
[WARNING Hostname]: hostname "master": lookup master on 192.168.0.1:53: no such host

We use cat again to add our hosts to all nodes.
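The IPs and names below are the ones I assumed in the static IP and hostname steps; use your own:

    cat <<EOF >> /etc/hosts
    192.168.0.10 master
    192.168.0.11 node1
    192.168.0.12 node2
    EOF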

Now we need to init kubeadm on the master only, so we run the command below. Don't forget to grab the join command displayed at the very end of the init output. We will be needing that soon.
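A sketch with the addresses assumed above; pick your own, and note the pod CIDR must not overlap your LAN:

    # 192.168.0.10 is the assumed master IP. Calico's manifest defaults to
    # 192.168.0.0/16, which WOULD overlap this LAN, so we pass 10.10.0.0/16
    # and will point calico at it later.
    kubeadm init --apiserver-advertise-address=192.168.0.10 --pod-network-cidr=10.10.0.0/16

    # then, as the init output itself says, set up kubectl for your user:
    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config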

Great, but if you somehow wrote the advertise address wrong like me, you can reset with kubeadm reset and rerun the init command. However, if you do reset, deleting the $HOME/.kube folder is very important. Otherwise you will get cert errors.

NOTE

If calico misbehaves after you reset, delete --network-plugin=cni in /var/lib/kubelet/kubeadm-flags.env and then restart the machine.

Then,

END NOTE

We use calico as the overlay network since we can use network policies with it (it also has istio support, and I want to try that out eventually), but there's flannel, weave, and surely some more.
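Assuming the manifest URL of the time, and remembering to make the pool CIDR match what we gave kubeadm init:

    curl -O https://docs.projectcalico.org/manifests/calico.yaml
    # edit CALICO_IPV4POOL_CIDR in calico.yaml to 10.10.0.0/16
    kubectl apply -f calico.yaml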

We should be seeing something like this when we check the nodes:
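(Illustrative output; your AGE and VERSION will differ.)

    kubectl get nodes
    NAME     STATUS   ROLES    AGE   VERSION
    master   Ready    master   3m    v1.17.4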

Now let's join our nodes. Run this only on the nodes:
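Paste the exact command your kubeadm init printed; it looks like this, with your own token and hash:

    kubeadm join 192.168.0.10:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>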

The 2 nodes have finally joined the master:
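(Again illustrative; note the ROLES column is <none> for the workers.)

    kubectl get nodes
    NAME     STATUS   ROLES    AGE   VERSION
    master   Ready    master   15m   v1.17.4
    node1    Ready    <none>   2m    v1.17.4
    node2    Ready    <none>   1m    v1.17.4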

Though, let's fix that <none> labeling quickly. Running this in master:
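One common way to do it, assuming the node names from earlier:

    kubectl label node node1 node-role.kubernetes.io/worker=worker
    kubectl label node node2 node-role.kubernetes.io/worker=worker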

So let's test this, shall we?
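For example, a quick nginx smoke test (my pick for illustration, not necessarily the original one):

    kubectl create deployment nginx --image=nginx
    kubectl scale deployment nginx --replicas=2
    kubectl get pods -o wide   # the pods should land on node1 and node2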

Thanks for reading…

A note: this procedure is probably what triggered sys/devops teams from all over the world to move towards the IaC concept. We must do the same cluster again, but with Ansible.
