TE-IT A DevOps Lab
Experiment No: 3
Aim: To understand the Kubernetes Cluster Architecture, install and Spin Up a Kubernetes
Cluster on Linux Machines/Cloud Platforms.
Lab Outcome: LO1 & LO2
LO1: To understand the fundamentals of Cloud Computing and be fully proficient with Cloud-based
DevOps solution deployment options to meet business requirements.
LO2: To deploy single and multiple container applications and manage application deployments
with rollouts in Kubernetes.
Mapping PO: 1, 2, 3
Theory:
Kubernetes is an open-source platform for deploying and managing containers. It provides a container
runtime, container orchestration, container-centric infrastructure orchestration, self-healing
mechanisms, service discovery, and load balancing. It is used for the deployment, scaling, management,
and composition of application containers across clusters of hosts. It aims to reduce the burden of
orchestrating the underlying compute, network, and storage infrastructure, and to enable application
operators and developers to focus entirely on container-centric workflows for self-service operation.
It allows developers to build customized workflows and higher-level automation to deploy and
manage applications composed of multiple containers.
Kubernetes Architecture and Concepts:
At a high level, a Kubernetes environment consists of a control plane (master),
a distributed storage system for keeping the cluster state consistent (etcd), and a number
of cluster nodes (kubelets).
Kubernetes Control Plane:
The control plane is the system that maintains a record of all Kubernetes objects. It continuously
manages object states, responding to changes in the cluster; it also works to make the actual state
of system objects match the desired state. The control plane is
made up of three major components: kube-apiserver, kube-controller-manager, and kube-scheduler.
These can all run on a single master node, or can be replicated across multiple master nodes for
high availability. The API Server provides APIs to support lifecycle orchestration (scaling, updates,
and so on) for different types of applications. It also acts as the gateway to the cluster, so the API
server must be accessible by clients from outside the cluster. Clients authenticate via the API Server,
and also use it as a proxy/tunnel to nodes and pods (and services). Most resources contain metadata,
such as labels and annotations, desired state (specification) and observed state (current status).
Controllers work to drive the actual state toward the desired state.
There are various controllers to drive state for nodes, replication (autoscaling), endpoints (services and
pods), service accounts and tokens (namespaces). The Controller Manager is a daemon that runs
the core control loops, watches the state of the cluster, and makes changes to drive status toward
the desired state. The Cloud Controller Manager integrates into each public cloud for optimal
support of availability zones, VM instances, storage services, and network services for DNS,
routing and load balancing.
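To see a controller reconciling state in practice, here is a minimal sketch (the deployment name 'demo' is illustrative and it assumes a working cluster): delete the pods behind a deployment, and the ReplicaSet controller recreates them to restore the desired replica count.
kubectl create deployment demo --image=nginx --replicas=2
kubectl delete pod -l app=demo --wait=false   # remove the pods behind the deployment
kubectl get pods -l app=demo                  # new pods appear as the controller restores desired state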
The Scheduler is responsible for the scheduling of containers across the nodes in the cluster; it
takes various constraints into account, such as resource limitations or guarantees, and affinity and anti-
affinity specifications.
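As an illustration of such constraints, a pod spec can declare resource requests that the scheduler must satisfy when choosing a node. The following is a minimal sketch (the pod name 'demo-scheduling' and the nginx image are illustrative, not part of the lab steps):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-scheduling
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"      # the scheduler only places the pod on a node with this much free CPU
        memory: "64Mi"   # and this much allocatable memory
EOF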
Cluster Nodes: Cluster nodes are machines that run containers and are managed by the master nodes.
The kubelet is the primary node agent in Kubernetes. It is responsible for driving
the container execution layer, typically Docker.
Pods and Services:
Pods are one of the crucial concepts in Kubernetes, as they are the key construct that developers interact
with; the previous concepts are infrastructure-focused and describe the internal architecture. This logical
construct packages up a single application, which can consist of multiple containers and storage
volumes. Usually, a single container (sometimes with a helper program in an additional
container) runs in this configuration. A pod represents a running
process on a cluster.
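To make the pod construct concrete, here is a hedged sketch of a pod that runs an application container alongside a helper container (the names and images are illustrative, not part of the lab steps):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: app-with-helper
spec:
  containers:
  - name: main
    image: nginx                  # the application container
  - name: helper                  # a helper sharing the pod's network, storage, and lifecycle
    image: busybox
    command: ["sh", "-c", "while true; do date; sleep 30; done"]
EOF
kubectl get pod app-with-helper   # both containers run inside the single pod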
Kubernetes Networking:
Kubernetes has a distinctive networking model for cluster-wide, pod-to-pod
networking. In most cases, the Container Network Interface (CNI) uses a simple overlay network (like
Flannel) to obscure the underlying network from the pod by using traffic encapsulation (like VXLAN);
it can also use a fully-routed solution like Calico. In both cases, pods communicate over a cluster-
wide pod network, managed by a CNI provider like Flannel or Calico.
Within a pod, containers can communicate without any restrictions. Containers within a pod exist
within the same network namespace and share an IP. This means containers can communicate over
localhost. Pods can communicate with each other using the pod IP address, which is reachable
across the cluster. Moving from pods to services, or from external sources to services, requires
going through kube-proxy.
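A quick way to observe this model (a sketch assuming a working cluster; the deployment name 'web' is illustrative) is to expose a deployment as a service: the pod gets a cluster-wide pod IP, while kube-proxy routes service traffic to it via a ClusterIP.
kubectl create deployment web --image=nginx
kubectl get pods -o wide                  # the IP column shows the cluster-wide pod IP
kubectl expose deployment web --port=80
kubectl get service web                   # the CLUSTER-IP is the address kube-proxy routes for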
Kubernetes Tooling and Clients:
Here are the basic tools you should know:
• Kubeadm bootstraps a cluster. It’s designed to be a simple way for new users to build
clusters (it is used in the steps below).
• Kubectl is a tool for interacting with your existing cluster.
• Minikube is a tool that makes it easy to run Kubernetes locally.
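For orientation, these are typical first commands for each tool (a sketch; minikube is not used in this lab's AWS setup):
kubeadm version      # confirm the cluster bootstrapper is installed
kubectl get nodes    # query an existing cluster
minikube start       # spin up a local single-node cluster instead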
Step 1: Create two EC2 instances with Ubuntu OS and attach the following security group rules to
them (rename the instances as K8s-Master and K8s-Slave):
1. All Traffic (IPv4)
2. All Traffic (IPv6)
Step 2: Create an IAM user/role with Route53, EC2, IAM and S3 full access.
Step 3: Attach the IAM role created in Step 2 to both Ubuntu servers.
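If you prefer the AWS CLI to the console, a hedged sketch of Steps 2 and 3 looks like the following (the role name, profile name, policy shown, and instance ID are placeholders for your own values; repeat attach-role-policy for each required policy):
aws iam attach-role-policy --role-name k8s-lab-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam create-instance-profile --instance-profile-name k8s-lab-profile
aws iam add-role-to-instance-profile --instance-profile-name k8s-lab-profile --role-name k8s-lab-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=k8s-lab-profile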
Step 4: Connect to both instances using PuTTY/WinSCP. (Refer to the ‘PuTTY Installation’ document.)
Steps to Install Kubernetes on Ubuntu
Set up Docker
Step 1: Install Docker
Kubernetes requires an existing Docker installation. If you do not have Docker, install
it by following these steps:
1. Update the package list with the command: sudo apt-get update
2. Next, install Docker with the command: sudo apt-get install docker.io
3. Repeat the process on each server that will act as a node.
4. Check the installation (and version) by entering the following: docker --version
Step 2: Start and Enable Docker
1. Set Docker to launch at boot by entering the following: sudo systemctl enable docker
2. Verify Docker is running: sudo systemctl status docker
3. To start Docker if it’s not running: sudo systemctl start docker
4. Repeat on all the other nodes.
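As an optional sanity check (not required by the lab), confirm the engine can pull and run a container:
sudo docker run hello-world   # pulls a tiny test image and prints a confirmation message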
Install Kubernetes
Step 3: Add Kubernetes Signing Key
Since you are downloading Kubernetes from a non-standard repository, it is essential to ensure that
the software is authentic. This is done by adding a signing key.
1. Enter the following to add a signing key:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
2. If you get an error that curl is not installed, install it with: sudo apt-get install curl
Then repeat the previous command to install the signing keys. Repeat for each server node.
Step 4: Add Software Repositories
Kubernetes is not included in the default repositories. To add them, enter the following:
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Repeat on each server node.
Step 5: Kubernetes Installation Tools
Kubeadm (Kubernetes Admin) is a tool that helps initialize a cluster. It fast-tracks setup by using
community-sourced best practices. Kubelet is the work package, which runs on every node and
starts containers. Kubectl is the tool that gives you command-line access to clusters.
1. Install Kubernetes tools with the command:
sudo apt-get install kubeadm kubelet kubectl
sudo apt-mark hold kubeadm kubelet kubectl
The apt-mark hold command ensures that these packages are not automatically upgraded or
removed; they remain pinned at the installed version.
Allow the process to complete.
2. Verify the installation with:
kubeadm version
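You can likewise confirm the other two packages (a quick sanity sketch):
kubectl version --client   # client version only; no cluster is configured yet
kubelet --version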
Kubernetes Deployment
Step 6: Begin Kubernetes Deployment
Start by disabling the swap memory on each server: sudo swapoff -a
Step 7: Assign Unique Hostname for Each Server Node
Decide which server to set as the master node. Then enter the command:
sudo hostnamectl set-hostname master-node
Next, set a worker node hostname by entering the following on the worker server:
sudo hostnamectl set-hostname worker01
Step 8: Initialize Kubernetes on Master Node (Should be run only on master node)
Switch to the master server node, and enter the following:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
You may encounter preflight errors regarding insufficient disk or CPU; bypass them with the
following command:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
Once this command finishes, it will display a kubeadm join message
at the end. Make a note of the whole entry. This will be used to join the worker nodes to the cluster.
Next, enter the following to create a directory for the cluster:
kubernetes-master:~$ mkdir -p $HOME/.kube
kubernetes-master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubernetes-master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Step 9: Deploy Pod Network to Cluster
A Pod Network is a way to allow communication between different nodes in the cluster. This tutorial
uses the Flannel virtual network.
Enter the following:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Allow the process to complete.
Verify that everything is running and communicating:
kubectl get pods --all-namespaces
Step 10: Join Worker Node to Cluster
As indicated in Step 8, you can enter the kubeadm join command on each worker node to connect
it to the cluster. Switch to the worker01 system and enter the command you noted from Step 8:
sudo kubeadm join --discovery-token abcdef.1234567890abcdef \
    --discovery-token-ca-cert-hash sha256:1234..cdef 1.2.3.4:6443
Replace the alphanumeric codes with those from your master server. Repeat for each worker node
on the cluster. Wait a few minutes; then you can check the status of the nodes. Switch to the master
server, and enter: kubectl get nodes
The system should display the worker nodes that you joined to the cluster.
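As an optional end-to-end check (the deployment name 'nginx' is illustrative), deploy a test application from the master and confirm its pod is scheduled onto the worker:
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide      # the NODE column should show worker01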
Conclusion: In the above experiment we learned about Kubernetes, and we installed a
Kubernetes cluster and spun it up on AWS EC2 instances.