Kubernetes Installation
Hey guys, welcome back…! In this blog I will show how to set up a Kubernetes home lab for practice. I will be using Oracle VM VirtualBox Manager and the CentOS operating system to deploy one master node and two worker nodes.
You need the components below to set up the lab.
- Oracle VM VirtualBox Manager: https://www.virtualbox.org/wiki/Downloads
- MobaXterm : https://mobaxterm.mobatek.net/download.html
- CentOS ISO: http://isoredirect.centos.org/centos/7/isos/x86_64/
Note: the master and worker nodes each need a minimum of 2 virtual CPUs and 2-4 GB of RAM.
Let’s get started setting up the K8s cluster 🙂
Step 1: Set hostnames on the master and worker nodes:
On the master node:
# hostnamectl set-hostname masternode01.lab.local
On the first worker node:
# hostnamectl set-hostname workernode01.lab.local
On the second worker node:
# hostnamectl set-hostname workernode02.lab.local
Now check the hostname at runtime using the following command:
# hostnamectl
Step 2: On every VM, add the hostnames (DNS aliases) to /etc/hosts, then perform a ping test between the nodes.
vi /etc/hosts
10.197.163.215 masternode01.lab.local
10.197.163.218 workernode01.lab.local
10.197.163.214 workernode02.lab.local
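A quick connectivity check might look like the following, using the example hostnames above (substitute your own addresses):

```shell
# From the master node, confirm each worker resolves and responds
ping -c 3 workernode01.lab.local
ping -c 3 workernode02.lab.local

# From each worker node, confirm the master responds
ping -c 3 masternode01.lab.local
```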
Step 3: Disable SELinux
The containers need to access the host filesystem, so SELinux must be set to permissive mode, which effectively disables its enforcement.
Use following commands to disable SELinux:
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
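To confirm the change took effect, you can check the current mode:

```shell
getenforce    # should print "Permissive" after setenforce 0
sestatus      # shows the current mode and the mode configured in /etc/selinux/config
```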
Step 4: Disable SWAP
Disable swap on the master and worker nodes. Kubernetes requires this because the kubelet expects pod memory to be backed by physical RAM rather than swapped out to disk.
sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab
The swapoff command disables swap for the current session; removing the swap entries from /etc/fstab keeps it disabled permanently across reboots on both master and worker nodes.
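You can verify that swap is fully disabled:

```shell
free -h          # the Swap row should show 0B total
swapon --show    # prints nothing when no swap is active
```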
Step 5: Update iptable settings
As a requirement for your Linux Node’s iptables to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Make sure that the br_netfilter module is loaded before this step. This can be done by running lsmod | grep br_netfilter. To load it explicitly call modprobe br_netfilter.
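The module can be loaded now and also made persistent across reboots (the modules-load.d filename here is my choice; any .conf name works):

```shell
sudo modprobe br_netfilter
lsmod | grep br_netfilter    # confirm the module is loaded

# Load it automatically at boot
echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
```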
For more details please see the Network Plugin Requirements page.
Step 6: Configure Firewall
The nodes, containers, and pods need to communicate across the cluster to perform their functions. Firewalld is enabled by default on CentOS. Add the following ports by entering the listed commands.
Please note: for a home lab environment you can simply disable the firewall and skip adding the ports.
systemctl status firewalld
systemctl stop firewalld
If you are configuring this in production, perform the steps below instead, since security is a primary focus during installation.
On the Master Node enter:
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10251/tcp
sudo firewall-cmd --permanent --add-port=10252/tcp
sudo firewall-cmd --permanent --add-port=10255/tcp
sudo firewall-cmd --reload
Each time a port is added the system confirms with a ‘success’ message.
Enter the following commands on each worker node:
sudo firewall-cmd --permanent --add-port=10251/tcp
sudo firewall-cmd --permanent --add-port=10255/tcp
sudo firewall-cmd --reload
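After reloading, you can confirm on any node that the ports were actually added:

```shell
sudo firewall-cmd --list-ports    # lists the opened ports now active in the default zone
```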
Step 7: Install Docker Engine on Master and worker nodes
Install using the repository
Before you install Docker Engine for the first time on a new host machine, you need to set up the Docker repository. Afterward, you can install and update Docker from the repository.
Set up the repository
Update the system and add the Docker repository (the dnf config-manager utility is provided by the dnf-plugins-core package).
sudo dnf check-update
sudo dnf upgrade
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
Install Docker Engine
sudo dnf install docker-ce --nobest --allowerasing -y
This command installs Docker, but it doesn’t start it. It also creates a docker group; however, it doesn’t add any users to the group by default.
Start and enable Docker Engine.
sudo systemctl start docker
sudo systemctl enable docker
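A quick sanity check that the engine is installed and running:

```shell
docker --version                  # prints the installed Docker version
sudo systemctl is-active docker   # should print "active"
sudo docker run --rm hello-world  # pulls and runs a small test container
```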
Step 8: Configure Kubernetes Repository
This step needs to be performed on the Master Node, and each Worker Node you plan on utilizing for your container setup. Enter the following command to retrieve the Kubernetes repositories.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Step 9: Install kubelet, kubeadm, and kubectl
These three basic packages are required to use Kubernetes. Install them on each node:
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet
systemctl start kubelet
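You can confirm the tools installed correctly before initializing the cluster:

```shell
kubeadm version -o short    # e.g. v1.x.y
kubectl version --client
systemctl status kubelet    # kubelet will restart in a loop until kubeadm init runs; that is expected
```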
How to Deploy a Kubernetes Cluster:
Step 1: Create Cluster with kubeadm
Initialize a cluster by executing the following command:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
The process might take several minutes to complete based on network speed. Once this command finishes, it displays a kubeadm join message. Make a note of the entry and use it to join worker nodes to the cluster at a later stage.
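If you lose the join message, it can be regenerated on the master at any time:

```shell
# Prints a fresh "kubeadm join ..." command with a new token
kubeadm token create --print-join-command
```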
Note: This tutorial uses the flannel virtual network add-on. The 10.244.0.0/16 network value reflects the configuration of the kube-flannel.yml file. If you plan to use a different third-party provider, change the --pod-network-cidr value to match your provider’s requirements.
Step 2: Manage Cluster as Regular User
To manage the cluster as a regular (non-root) user, copy the admin kubeconfig into your home directory:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
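With the kubeconfig in place, kubectl should now work without sudo:

```shell
kubectl cluster-info    # shows the control plane endpoint if access is configured correctly
```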
Step 3: Set Up Pod Network
A Pod Network allows nodes within the cluster to communicate. There are several available Kubernetes networking options. Use the following command to install the flannel pod network add-on:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
If you decide to use flannel, edit your firewall rules to allow traffic for the flannel default port 8285.
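If firewalld is still running, the flannel port can be opened like so on every node (8285/udp is the default for flannel's udp backend; the commonly used VXLAN backend uses 8472/udp instead):

```shell
sudo firewall-cmd --permanent --add-port=8285/udp
sudo firewall-cmd --reload
```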
Step 4: Check Status of Cluster
Check the status of the nodes by entering the following command on the master server:
kubectl get nodes
Once a pod network has been installed, confirm that it is working by checking that the CoreDNS pods are running:
kubectl get pods --all-namespaces
Step 5: Join Worker Node to Cluster
As indicated in Step 1, you can use the kubeadm join command on each worker node to connect it to the cluster.
kubeadm join 10.197.163.215:6443 --token opcdjc.m6o797w9f7ft2jzl \
    --discovery-token-ca-cert-hash sha256:9fc7728940fb351311507bff3fa0970250ed309e5975f588ec702f3ba2ff0050
Replace the IP address, token, and certificate hash with the values from your own master’s kubeadm init output. Repeat this on each worker node in your cluster.
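Back on the master, the new workers should appear within a minute or so. Their ROLES column shows <none> by default; the label commands below are optional and use the example hostnames from this lab:

```shell
kubectl get nodes

# Optionally give the workers a visible role label
kubectl label node workernode01.lab.local node-role.kubernetes.io/worker=worker
kubectl label node workernode02.lab.local node-role.kubernetes.io/worker=worker
```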