Kubernetes Cluster Setup Guide

Uploaded by Stephen Efange

==> Install and configure a container runtime

- docker or containerd

Install Kubernetes packages:

kubelet
kubeadm
kubectl

Note: Install the container runtime and the three Kubernetes packages on all the nodes in your cluster.

==> Create the cluster:


Use kubeadm to bootstrap the first node in the cluster, called the control plane.
This brings the critical cluster components up and running.
These components are:
- API Server
- Controller Manager
- etcd (among others)

==> Next configure the pod networking environment.


Here we use an overlay network for pod networking in the cluster.

- calico or flannel

==> Join the worker nodes to the cluster.


The worker nodes are where we run our applications.

Step 1 - On all cluster nodes:

sudo apt-get install -y containerd

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl containerd

Note: the apt.kubernetes.io repository shown above has been deprecated and frozen; the detailed steps below use its replacement, pkgs.k8s.io.

Tested with Ubuntu 22.04 and Kubernetes 1.29.1.

Step 2 - Bootstrap/Create the cluster's first node/control plane:

4 servers:
- 1 control plane node
- 3 worker nodes

Each node:
- 2 CPUs
- 4 GB RAM
- 100 GB disk

Network the machines - add an entry for each node to every node's /etc/hosts file.


Disable swap

swapoff -a
vi /etc/fstab

Comment out the swap line.
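If you'd rather script the edit than open vi, a sed one-liner can comment out the swap entry. A minimal sketch, run here against a throwaway sample file (the file name and entries are made up for illustration; on a real node you'd target /etc/fstab with sudo, and \b is a GNU sed extension):

```shell
# Demo on a sample file; on a real node: sudo sed -i '/\bswap\b/ s/^#*/#/' /etc/fstab
printf '/dev/sda2 / ext4 defaults 0 1\n/swap.img none swap sw 0 0\n' > /tmp/fstab.demo
sed -i '/\bswap\b/ s/^#*/#/' /tmp/fstab.demo   # prefix any swap entry with '#'
grep swap /tmp/fstab.demo                      # the swap line now starts with '#'
```

Working on a copy first lets you verify the result with grep before touching the real fstab.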

=====================================================================

#Install containerd...
sudo apt-get install -y containerd

#Create a containerd configuration file


sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

#Set the cgroup driver for containerd to systemd, which is required for the kubelet.
#For more information on this config file see:
# https://github.com/containerd/cri/blob/master/docs/config.md and also
# https://github.com/containerd/containerd/blob/master/docs/ops.md

#At the end of this section, change SystemdCgroup = false to SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true

#You can use sed to swap in true


sudo sed -i 's/ SystemdCgroup = false/ SystemdCgroup = true/' /etc/containerd/config.toml

#Verify the change was made


grep 'SystemdCgroup = true' /etc/containerd/config.toml

#Restart containerd with the new configuration


sudo systemctl restart containerd

#Install Kubernetes packages - kubeadm, kubelet and kubectl


#Add the pkgs.k8s.io apt repository gpg key; this will likely change for each version of the kubernetes release.
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

#Add the Kubernetes apt repository


echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

#Update the package list and use apt-cache policy to inspect versions available in the repository
sudo apt-get update
apt-cache policy kubelet | head -n 20

#Install the required packages, if needed we can request a specific version.


#Use this version because in a later course we will upgrade the cluster to a newer version.
#Try to pick one version back because later in this series, we'll run an upgrade
VERSION=1.29.1-1.1
sudo apt-get install -y kubelet=$VERSION kubeadm=$VERSION kubectl=$VERSION
sudo apt-mark hold kubelet kubeadm kubectl containerd

#To install the latest, omit the version parameters. I have tested all demos with the version above;
#if you use the latest, it may impact other demos in this course and upcoming courses in the series.
#sudo apt-get install kubelet kubeadm kubectl
#sudo apt-mark hold kubelet kubeadm kubectl containerd

#1 - systemd Units
#Check the status of our kubelet and our container runtime, containerd.
#The kubelet will enter an inactive (dead) state until a cluster is created or the node is joined to an existing cluster.
sudo systemctl status kubelet.service
sudo systemctl status containerd.service

Control Plane

#0 - Creating a Cluster
# Log into our control plane node
ssh aen@c1-cp1

#Create our kubernetes cluster, specifying a pod network range matching that in calico.yaml!
#Only on the Control Plane Node, download the yaml file for the pod network.
wget https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml

#Look inside calico.yaml and find the setting for the Pod network IP address range, CALICO_IPV4POOL_CIDR;
#adjust it if needed for your infrastructure to ensure that the Pod network IP
#range doesn't overlap with other networks in our infrastructure.
vi calico.yaml
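A quick way to jump to the setting rather than paging through a large manifest is grep with line numbers. A small sketch, using a made-up snippet file standing in for the real calico.yaml (in the real file the setting ships commented out, with a default of 192.168.0.0/16):

```shell
# /tmp/calico-snippet.yaml is a stand-in for the downloaded calico.yaml
cat > /tmp/calico-snippet.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
grep -n 'CALICO_IPV4POOL_CIDR' /tmp/calico-snippet.yaml   # -n shows the line number to jump to in vi
```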

#You can now just use kubeadm init to bootstrap the cluster
sudo kubeadm init --kubernetes-version v1.29.1

#remove the kubernetes-version parameter if you want to use the latest.


#sudo kubeadm init

#Before moving on, review the output of the cluster creation process, including the kubeadm init phases,
#the admin.conf setup, and the node join command.

#Configure our account on the Control Plane Node to have admin access to the API server from a non-privileged account.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
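The kubeadm init output also offers a root-only alternative to copying the file: point kubectl at the admin kubeconfig via an environment variable (this only lasts for the current shell session):

```shell
# Root-only alternative to the $HOME/.kube/config copy above
export KUBECONFIG=/etc/kubernetes/admin.conf
```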

#1 - Creating a Pod Network


#Deploy yaml file for your pod network.
kubectl apply -f calico.yaml

#Look for all the system pods and calico pods to change to Running.
#The DNS pod won't start (pending) until the Pod network is deployed and Running.
kubectl get pods --all-namespaces

#Gives you output over time, rather than repainting the screen on each iteration.
kubectl get pods --all-namespaces --watch

#All system pods should be Running


kubectl get pods --all-namespaces

#Get a list of our current nodes; just the Control Plane Node for now...it should be Ready.
kubectl get nodes

#2 - systemd Units...again!
#Check out the systemd unit...it's no longer inactive (dead)...it's active (running) because it has static pods to start.
#Remember the kubelet starts the static pods, and thus the control plane pods.
sudo systemctl status kubelet.service

#3 - Static Pod manifests


#Let's check out the static pod manifests on the Control Plane Node
ls /etc/kubernetes/manifests

#And look more closely at the API server's and etcd's manifests.


sudo more /etc/kubernetes/manifests/etcd.yaml
sudo more /etc/kubernetes/manifests/kube-apiserver.yaml
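For orientation, a static Pod manifest is just an ordinary Pod spec that the kubelet reads from this directory. A minimal illustrative sketch (a made-up example, not the real kube-apiserver manifest, which is far longer), written to /tmp rather than /etc/kubernetes/manifests so it doesn't actually start:

```shell
# Illustrative only: dropping a file like this into /etc/kubernetes/manifests
# would cause the kubelet to run it as a static pod.
cat <<'EOF' > /tmp/example-static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-static-pod
  namespace: kube-system
spec:
  containers:
  - name: app
    image: nginx:1.25
EOF
grep 'kind:' /tmp/example-static-pod.yaml
```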

#Check out the directory where the kubeconfig files live for each of the control plane pods.
ls /etc/kubernetes

Worker Nodes

#For this demo ssh into c1-node1


ssh aen@c1-node1

#Disable swap: run swapoff, then edit your fstab, removing any entry for swap partitions.
#You can recover the space with fdisk. You may want to reboot to ensure your config is ok.
swapoff -a
vi /etc/fstab

#0 - Joining Nodes to a Cluster

#Install a container runtime - containerd


#containerd prerequisites, and load two modules and configure them to load on boot
#https://kubernetes.io/docs/setup/production-environment/container-runtimes/
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay


sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots


cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot


sudo sysctl --system
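A quick sanity check after applying the params: read the forwarding setting back from /proc (on a node configured as above, the value should be 1; on an unconfigured machine it may still be 0):

```shell
# Spot-check that the sysctl took effect; expect 1 on a configured node
cat /proc/sys/net/ipv4/ip_forward
```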

#Install containerd...
sudo apt-get install -y containerd

#Configure containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

#Set the cgroup driver for containerd to systemd, which is required for the kubelet.
#For more information on this config file see:
# https://github.com/containerd/cri/blob/master/docs/config.md and also
# https://github.com/containerd/containerd/blob/master/docs/ops.md

#At the end of this section, change SystemdCgroup = false to SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true

#You can use sed to swap in true


sudo sed -i 's/ SystemdCgroup = false/ SystemdCgroup = true/' /etc/containerd/config.toml

#Verify the change was made


grep 'SystemdCgroup = true' /etc/containerd/config.toml

#Restart containerd with the new configuration


sudo systemctl restart containerd

#Install Kubernetes packages - kubeadm, kubelet and kubectl


#Add the Kubernetes (pkgs.k8s.io) apt repository gpg key
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

#Add the Kubernetes apt repository


echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

#Update the package list and use apt-cache policy to inspect versions available in the repository
sudo apt-get update
apt-cache policy kubelet | head -n 20

#Install the required packages, if needed we can request a specific version.


#Use this version because in a later course we will upgrade the cluster to a newer version.
#Try to pick one version back because later in this series, we'll run an upgrade
VERSION=1.29.1-1.1
sudo apt-get install -y kubelet=$VERSION kubeadm=$VERSION kubectl=$VERSION
sudo apt-mark hold kubelet kubeadm kubectl containerd

#To install the latest, omit the version parameters


#sudo apt-get install kubelet kubeadm kubectl
#sudo apt-mark hold kubelet kubeadm kubectl

#Check the status of our kubelet and our container runtime.


#The kubelet will enter an inactive (dead) state until the node is joined to a cluster.
sudo systemctl status kubelet.service
sudo systemctl status containerd.service

#Log out of c1-node1 and back on to c1-cp1


exit
#You can also use print-join-command to generate a token and print the join command in the proper format.
#COPY THIS INTO YOUR CLIPBOARD
kubeadm token create --print-join-command

#Back on the worker node c1-node1, using the Control Plane Node (API Server) IP address or name,
#the token, and the cert hash, let's join this Node to our cluster.
ssh aen@c1-node1

#PASTE_JOIN_COMMAND_HERE be sure to add sudo


sudo kubeadm join 172.16.94.10:6443 \
    --token yn8tkx.f5ssw0qn1ycqskt2 \
    --discovery-token-ca-cert-hash sha256:66ff307c46617ca400060e54b0db58f1597419f0a54bd971ed074b1a12067ee0
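If you ever need the discovery hash without print-join-command, it is the SHA-256 digest of the cluster CA's public key in DER form, per the kubeadm docs. The sketch below runs that same openssl pipeline against a throwaway self-signed cert so it's runnable anywhere; on a real control plane the input would be /etc/kubernetes/pki/ca.crt:

```shell
# Throwaway cert for demonstration; real input is /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
# Extract the public key, convert to DER, hash it, keep only the hex digest
HASH=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${HASH}"
```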

#Log out of c1-node1 and back on to c1-cp1


exit

#Back on the Control Plane Node, this will say NotReady until the networking pod is created on the new node.
#Has to schedule the pod, then pull the container.
kubectl get nodes

#On the Control Plane Node, watch for the calico pod and the kube-proxy to change to Running on the newly added nodes.
kubectl get pods --all-namespaces --watch

#Still on the Control Plane Node, look for this added node's status as ready.
kubectl get nodes

#GO BACK TO THE TOP AND DO THE SAME FOR c1-node2 and c1-node3
#Just SSH into c1-node2 and c1-node3 and run the commands again.
