Kubernetes:- On-Premise using kubeadm

Sumesh R acharya
3 min read · Aug 3, 2021

Pre-requisites

  • Debian-based distribution.
  • Docker should be installed.
  • Machines should be in the same subnet and able to ping each other.
  • Master node minimum requirements: 2 CPU cores & 2 GB RAM.
  • Change the hostname of each machine to its relevant k8s cluster node name:
master / control-plane :- the machine acting as the master node of the cluster. worker-1 :- a machine acting as a worker node of the cluster; likewise worker-2, and so on.
  • Add hostname & node IP address entries in /etc/hosts on all the nodes, as shown below (tee -a appends, so the existing localhost entries are kept):-
$ cat << EOF | sudo tee -a /etc/hosts
172.23.0.1 control-plane
172.23.0.2 worker-1
172.23.0.3 worker-2
EOF
  • Disable the ufw firewall on all the machines, if present (sudo ufw disable).
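The /etc/hosts step can be sanity-checked with a short loop. This is a sketch: it writes the example entries from this article into a temp file; on a real node you would point HOSTS_FILE at /etc/hosts instead.

```shell
# Sketch: verify every cluster node has a hosts entry before proceeding.
# Temp copy used here; on a real node set HOSTS_FILE=/etc/hosts.
HOSTS_FILE=$(mktemp)
cat <<'EOF' >> "$HOSTS_FILE"
172.23.0.1 control-plane
172.23.0.2 worker-1
172.23.0.3 worker-2
EOF
for node in control-plane worker-1 worker-2; do
  grep -q "[[:space:]]${node}\$" "$HOSTS_FILE" && echo "$node: entry present"
done
```

On the real machines, also confirm each hostname pings from every other node before moving on.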

Preparing the nodes

  1. Installing k8s components on all nodes:
$ sudo apt-get install -y apt-transport-https ca-certificates curl

$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl

2. Modifying the docker daemon configuration:

  • If not present, create the /etc/docker directory.
  • Add the docker daemon configuration:
$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
  • Create the docker service directory:
$ sudo mkdir -p /etc/systemd/system/docker.service.d
  • Reload the systemd daemon and restart docker:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
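A typo in daemon.json will stop the docker daemon from starting at all, so it is worth validating the file before the restart. A minimal sketch, run here against a temp copy of the config (on a real node, point it at /etc/docker/daemon.json):

```shell
# Validate daemon.json syntax before restarting docker; a malformed file
# makes 'systemctl restart docker' fail. Temp copy used for illustration.
TMP_JSON=$(mktemp)
cat <<'EOF' > "$TMP_JSON"
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
if python3 -m json.tool "$TMP_JSON" > /dev/null 2>&1; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: INVALID - fix before restarting docker"
fi
```

After the restart, `docker info | grep -i cgroup` on the node should report the systemd cgroup driver, which is what kubeadm expects.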

3. Remove the swap:

  • Comment out the swap entry in the /etc/fstab file.
  • Now you can either reboot the machine OR run the command below to immediately disable the current swap spaces:
$ sudo swapoff -a
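The fstab edit can be scripted with sed. This sketch runs against a temp copy with a made-up swapfile line; on a real node you would run the same sed against /etc/fstab with sudo, then `sudo swapoff -a`.

```shell
# Comment out any swap entry in fstab so swap stays off after a reboot.
# Demonstrated on a temp copy containing a sample swap line.
FSTAB=$(mktemp)
cat <<'EOF' > "$FSTAB"
UUID=abcd-1234 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF
# Prefix '#' on every line whose filesystem type field is 'swap'.
sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' "$FSTAB"
grep '^#' "$FSTAB"
```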

4. Network setup:

  • Enable the br_netfilter kernel module:
$ sudo modprobe br_netfilter
  • Uncomment or add the below two lines in /etc/sysctl.conf:
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
  • Either reboot OR apply the sysctl changes
$ sudo sysctl -p /etc/sysctl.conf
  • For the network plugin we are going to use, it is important that the legacy iptables utilities are registered as the defaults:
$ sudo apt-get install -y iptables arptables ebtables
$ sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
$ sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
$ sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
$ sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy
  • (Optional) If you plan on using NFS storage in any form, then your nodes should have the NFS client installed:
$ sudo apt-get install -y nfs-common
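The two sysctl settings above can be checked mechanically. A sketch, run here against a temp copy of the config; on a node, point SYSCTL_FILE at /etc/sysctl.conf, or simply run `sysctl net.ipv4.ip_forward` and expect `= 1`.

```shell
# Confirm both kernel parameters are present and uncommented in the sysctl config.
# Temp copy used here; on a real node set SYSCTL_FILE=/etc/sysctl.conf.
SYSCTL_FILE=$(mktemp)
cat <<'EOF' > "$SYSCTL_FILE"
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
EOF
for key in net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables; do
  if grep -q "^${key}=1" "$SYSCTL_FILE"; then
    echo "$key: set"
  else
    echo "$key: MISSING"
  fi
done
```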

Setting up the master & worker nodes

  1. Initialize the master node using kubeadm init:
# command syntax
kubeadm init --apiserver-advertise-address=<API_SERVER_ADDRESS> --pod-network-cidr=<POD_CIDR>
$ sudo kubeadm init --apiserver-advertise-address=172.23.0.1 --pod-network-cidr=10.244.0.0/16
Note: 10.244.0.0/16 is the pod CIDR that the default kube-flannel manifest expects; whatever CIDR you choose must be a private range that does not overlap your node subnet.

2. After seeing the message Your Kubernetes control-plane has initialized successfully!, follow the steps below:

  • To access the cluster as a regular user:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Now add a network plugin for the k8s cluster; we are using kube-flannel:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • Now, on your worker nodes, run the kubeadm join command shown in the output of kubeadm init, which looks like below:-
$ sudo kubeadm join 172.23.0.1:6443 --token ************ --discovery-token-ca-cert-hash ***************************
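If the kubeadm init output (and with it the join command) is lost, a fresh join command can be printed on the master with `kubeadm token create --print-join-command`, and the --discovery-token-ca-cert-hash can be recomputed from the cluster CA certificate (on the control-plane: /etc/kubernetes/pki/ca.crt). The sketch below demonstrates the openssl pipeline on a throwaway self-signed certificate, since the real CA cert only exists on an initialized control-plane.

```shell
# Recompute the discovery-token-ca-cert-hash from a CA certificate.
# A throwaway self-signed cert stands in for /etc/kubernetes/pki/ca.crt here.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$DIR/ca.key" \
  -out "$DIR/ca.crt" -days 1 -subj "/CN=demo-ca" 2>/dev/null
HASH=$(openssl x509 -pubkey -in "$DIR/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```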

3. Now, on the master node, check the status of the nodes:-

$ kubectl get nodes
  • If you see all your nodes and they are in the Ready state, then you have a running Kubernetes cluster on-premise.
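The readiness check can be scripted too. This sketch runs the parsing logic against a captured sample of `kubectl get nodes --no-headers` output (the node names are the example values from this article, and the version strings are illustrative), so the awk filter is shown without needing a live cluster.

```shell
# Flag any node whose STATUS column is not 'Ready'.
# Sample text stands in for: kubectl get nodes --no-headers
NODES='control-plane   Ready      control-plane   5m   v1.21.3
worker-1        Ready      <none>          3m   v1.21.3
worker-2        NotReady   <none>          1m   v1.21.3'
NOT_READY=$(echo "$NODES" | awk '$2 != "Ready" {print $1}')
if [ -z "$NOT_READY" ]; then
  echo "all nodes Ready"
else
  echo "not Ready: $NOT_READY"
fi
```

On the live cluster, `kubectl get pods -n kube-system` is also worth a look, to confirm the flannel and coredns pods are running.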
