Kubernetes:
Kubernetes, also known as K8s, is an open-source system for automating the deployment, scaling, and management of
containerized applications. It groups containerized applications into logical units for easy management and deployment.
It automates deployment, manages scaling, enables self-healing, and provides load balancing.
Stateless and Stateful Applications:
Stateless Applications:
Don't maintain persistent state across restarts or pod rescheduling.
Each request is treated independently and processed based on the provided information.
Examples: Web servers, API servers, load balancers, caching services.
Stateful Applications:
Maintain state information that needs to be preserved across pod restarts or rescheduling.
This state could be data stored in databases, file systems, or other persistent storage mechanisms.
Examples: Databases (MySQL, PostgreSQL), message brokers (Kafka, RabbitMQ), session managers.
The machines that make up a Kubernetes cluster are called nodes. A node can be physical or virtual.
A Kubernetes cluster is made up of two planes, namely the control plane and the data plane.
Control plane:
The control plane is the brain of Kubernetes; it manages the overall state of the cluster and instructs the data plane on
what to do. It typically runs on a separate set of machines from the ones that host your applications.
It consists of:
o API server: One or more API servers act as the entry point to the cluster for REST clients and kubectl. The API server accepts
commands from users or applications and translates them into instructions for the data plane.
o Kube-controller-manager: Manages the lifecycle of different objects in the cluster, such as deployments, pods,
and services. It ensures the desired state of the cluster is maintained.
o Kube-scheduler: Responsible for deciding where to deploy new pods (groups of containers) based on available
resources and user-defined constraints. This schedules pods to worker nodes.
o etcd: A highly available key-value store that stores the shared state of the cluster, such as the configuration of
pods, services, and deployments.
Data plane:
The data plane is where the actual work of running containerized applications happens. It is made up of worker nodes.
It consists of:
o Worker Nodes: These nodes run container runtime software (like Docker) and host the containerized
applications (pods) that make up your services.
o Kubelet: An agent running on each worker node that receives instructions from the control plane and manages
the lifecycle of containers on that node. It acts as a conduit between the API server and node.
o Kube-proxy: A network proxy that runs on each worker node and implements the networking policies defined
for the cluster. It routes traffic to the appropriate pods within the cluster and manages IP translation and routing.
o Pods: The smallest deployable unit in Kubernetes. A pod represents a group of one or more containers that are
tightly coupled and share storage. It is a thin wrapper around one or more containers.
In Kubernetes, YAML (YAML Ain't Markup Language) files are the primary way to define and configure various resources
that make up your application. These resources represent the desired state of your application within the Kubernetes
cluster.
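For example, a minimal Pod manifest might look like this (the name, label, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod          # illustrative pod name
  labels:
    app: nginx             # label used for grouping/selection
spec:
  containers:
  - name: nginx
    image: nginx:1.25      # illustrative image tag
    ports:
    - containerPort: 80
```

Applying it with `kubectl apply -f pod.yaml` records this desired state in the cluster.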
Kubernetes objects:
They are entities that are used to represent the state of the cluster.
An object is a “record of intent”. Once created, the cluster does its best to ensure it exists as defined. This is known as the
cluster’s “desired state”.
A desired state can describe:
o Which pods are running, and on which nodes.
o IP endpoints that map to a logical group of containers.
o How many replicas of a container are running.
Deployment:
Deployments provide a declarative way to manage your application. You specify the desired state, and Kubernetes works
towards achieving and maintaining that state. A Deployment also describes how to roll out (or roll back) changes across versions of your application.
Declarative Approach (Preferred):
o Describes the desired state of your application in the cluster.
o Uses YAML files to define resources like deployments, services, and pods.
Imperative Approach (Less Common):
o Issues commands directly to the Kubernetes API server to manipulate resources.
o Uses tools like kubectl to interact with the API server using commands and flags.
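A sketch of a declarative Deployment manifest (names and image chosen for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                  # desired number of identical pods
  selector:
    matchLabels:
      app: nginx               # must match the pod template labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2    # illustrative image tag
        ports:
        - containerPort: 80
```

`kubectl apply -f deployment.yaml` is the declarative route; `kubectl create deployment --image nginx nginx-1` (used later in these notes) is the imperative equivalent.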
A Service defines how to expose your application to the outside world (other services within the cluster or external
clients). It maps a fixed IP address to a logical group of pods.
o NodePort: Exposes your service on a port on all worker nodes in the cluster, allowing access from outside
with a specific port number.
o LoadBalancer (cloud-provider specific): Creates a load balancer in your cloud environment to distribute
traffic across multiple pod replicas of your application.
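A NodePort Service manifest might look like this (port numbers are illustrative; a nodePort must fall in the default 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort             # change to LoadBalancer on a cloud provider
  selector:
    app: nginx               # routes to pods carrying this label
  ports:
  - port: 80                 # port exposed by the service
    targetPort: 80           # port the container listens on
    nodePort: 30080          # port opened on every worker node
```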
In Kubernetes, several controller types manage different aspects of deploying and running containerized applications.
o ReplicaSet: A ReplicaSet ensures a specified number of identical pods are running in the cluster at all times.
It acts as the basic unit for scaling pods horizontally (adding more replicas for increased capacity).
o Job: A Job is used to run a task to completion, typically for non-persistent workloads. It ensures a specified
number of pods reach completion and then terminates. It ensures a pod runs to completion.
o DaemonSet: A DaemonSet ensures a long-running pod runs on every node in the Kubernetes cluster (or a
subset of nodes based on selectors). It's useful for running system daemons or monitoring agents that need
to be present on all worker nodes; it implements a single instance of a pod on all worker nodes.
o Label: Labels are key-value pairs attached to Kubernetes objects to identify and group them.
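For instance, a DaemonSet manifest (the agent name and image here are hypothetical) could be sketched as:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitor         # hypothetical monitoring agent
spec:
  selector:
    matchLabels:
      app: node-monitor
  template:
    metadata:
      labels:
        app: node-monitor
    spec:
      containers:
      - name: agent
        image: fluentd:v1.16   # illustrative log-collection image
```

Because it is a DaemonSet, one copy of this pod is scheduled onto every eligible node, with no replica count to manage.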
Kubectl commands:
Commands Action
//gcloud:
1 gcloud config set project dev-solstice-531 To set the active project to dev-solstice-531 in gcloud
2 export my_zone=us-central1-a To create a variable to store the zone
3 export my_cluster=demo-cluster To create a variable to store cluster name
4 gcloud container clusters create $my_cluster --num-nodes 3 --zone $my_zone --enable-ip-alias
To create a cluster called demo-cluster in us-central1-a with 3 nodes
5 gcloud container clusters get-credentials $my_cluster --zone $my_zone
To fetch the cluster credentials and configure kubectl to connect to the cluster
6 cat ~/.kube/config Displays the Kubernetes config file (kubeconfig) used by kubectl
//kubectl commands
7 kubectl get nodes To view the nodes created in cluster
8 kubectl get nodes -o wide To view the nodes created in the cluster in more detail
9 kubectl version To view the kubernetes version installed in your host
10 kubectl get service kubernetes To view the service called “kubernetes”; you should be able to
view the cluster IP address
// kubectl deployment
11 kubectl create deployment --image nginx nginx-1 To create a deployment with nginx image called nginx-1
12 kubectl get pods To view the pods created and running, you will be able to
see one pod for the above command
13 kubectl get deployments To view all the deployment created (nginx-1 here)
14 kubectl describe pods Gives detailed information about the running pods
15 kubectl describe pods nginx-1
(or) kubectl describe pods <podname>
Gives detailed information about the pod running the nginx-1 deployment
16 kubectl apply -f hello.yaml To run a YAML file called hello.yaml (declarative approach)
17 kubectl get pods
my_nginx_pod=nginx-2-dc589d58b-wxrjj
Get the name of the pod running nginx-1 and save it in the variable my_nginx_pod
18 kubectl exec -it $my_nginx_pod -- bash To open an interactive bash shell inside the deployment's pod
19 kubectl delete deployment nginx-1 To delete the deployment
//services:
20 kubectl describe service kubernetes To describe the service called kubernetes
21 kubectl expose deployment nginx-deployment --target-port=8080 --type=LoadBalancer
To expose a port and host the nginx-deployment as a service on a LoadBalancer
//scale deployment
22 kubectl scale --replicas=0 deployment nginx-deployment
To scale the number of replicas/pods to 0. Even though the number of pods is 0, the deployment is still present and is not deleted
23 kubectl scale --replicas=3 deployment nginx-deployment
To scale the number of replicas/pods to 3
//rollout and roll back
24 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
This command changes the image to nginx version 1.16.1 and rolls out the change as a rolling update, one pod after another.
25 kubectl rollout status deployment/nginx-deployment To check the rollout status; if entered quickly, we can watch the rollout changes happening
26 kubectl set image deployment/nginx-deployment nginx=nginx:1.14.2
Changes the image version to nginx:1.14.2 and rolls out the change
27 kubectl rollout status deployment/nginx-deployment This shows the status of the rollout
28 kubectl rollout history deployment/nginx-deployment This shows the history of all the rollouts done
29 kubectl rollout undo deployment/nginx-deployment This will roll back to the previous version.
//jobs
30 cd k8s/job_cronjobs
kubectl apply -f example-job.yaml
Creates a container that prints the value of pi. Since it's a job, it will stop after the process is completed.
31 kubectl get jobs Will show you the created job and whether it completed successfully.
32 kubectl describe job example-job Will show you detailed view of the job called example-job
33 kubectl get pods The pod for the job will show status Completed
34 kubectl logs [POD-NAME] This will print the log of the job, which in this case is the
value of pi (3.14….)
35 kubectl delete job example-job This will delete the example-job created
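The contents of example-job.yaml are not reproduced in these notes; a Job that prints pi, in the style of the standard Kubernetes example, might look like:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0     # illustrative image
        # compute and print pi to 2000 digits, then exit
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never     # jobs run to completion; don't restart on success
  backoffLimit: 4              # retry a failed pod at most 4 times
```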
//cronjobs
36 kubectl apply -f example-cronjob.yaml Creates a container that prints hello world every
minute. So a new pod will be created every 1 minute to
print hello world
37 kubectl get jobs
kubectl describe job [job_name]
Will show you the created jobs and whether they completed successfully.
38 kubectl logs [POD-NAME] Get the logs for the cron job, “Hello, World!” in this case
39 kubectl get jobs
kubectl delete cronjob hello
kubectl get jobs
To delete the cronjob and then check whether any jobs remain after deleting.
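The example-cronjob.yaml is likewise not shown; a CronJob that prints hello every minute could be sketched as:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"        # standard cron syntax: every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.36  # illustrative image
            command: ["/bin/sh", "-c", "echo Hello, World!"]
          restartPolicy: OnFailure
```

Each tick of the schedule spawns a new Job (and hence a new pod), which is why `kubectl get jobs` shows a growing list.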
GKE Notes:
commands Actions
//autoscaler
40 cd ~/k8s/Autoscaling/ To change to the autoscaling folder and run the web.yaml file
kubectl create -f web.yaml --save-config
41 kubectl expose deployment web --target-port=8080 --type=NodePort
(or) kubectl expose deployment web --target-port=8080 --type=LoadBalancer
kubectl get service web
kubectl get deployment
Here we are exposing the web deployment as a service on port 8080 and then checking the service and the deployment. Use --type=NodePort to expose the service on a node port, or --type=LoadBalancer to host it behind a load balancer.
42 kubectl autoscale deployment web --max 4 --min 1 --cpu-percent 1
kubectl get deployment
This command creates an autoscaler that scales the deployment between a minimum of 1 and a maximum of 4 pods when CPU utilization exceeds 1%.
43 kubectl get hpa (or)
kubectl get horizontalpodautoscaler
This command shows the autoscaler that was created, along with its min, max, and current replicas
44 kubectl describe horizontalpodautoscaler web This describes the autoscaler created for the web service.
45 kubectl get horizontalpodautoscaler web -o wide This shows the autoscaler created for the web service in a
more detailed way.
46 kubectl apply -f loadgen.yaml This command deploys a load-generator application that generates
load on the web service
47 kubectl delete hpa web To delete the horizontal pod autoscaler called web
48 kubectl scale deployment loadgen --replicas 0
kubectl get deployment
Here we are setting the loadgen replicas to zero, so the number of web service pods will decrease back to 1
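The `kubectl autoscale` command used above is roughly equivalent to creating this HorizontalPodAutoscaler manifest declaratively:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:              # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 1
  maxReplicas: 4
  targetCPUUtilizationPercentage: 1   # scale out above 1% average CPU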
//taint tolerate
In Kubernetes, taints and tolerations are a mechanism used to control where pods are scheduled to run on nodes within the cluster.
Taints: These are attributes applied to a Kubernetes node that mark it as unsuitable for certain pods. A taint is essentially a key-value pair with an optional effect.
o NoSchedule: Pods without a toleration for the taint will not be scheduled on the tainted node.
o PreferNoSchedule: Kubernetes will try to avoid scheduling pods without a toleration on the tainted node, but it may do so in exceptional circumstances.
o NoExecute: Existing pods on the tainted node are evicted (removed) if they don't have a toleration for the taint.
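A pod opts back in to a tainted node by declaring a matching toleration in its spec. For the taint applied below (nodetype=preemptible:NoExecute), the pod template fragment would be:

```yaml
# Fragment of a Deployment's pod template spec (surrounding
# Deployment fields omitted); tolerates the preemptible taint.
spec:
  tolerations:
  - key: nodetype
    operator: Equal
    value: preemptible
    effect: NoExecute
```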
49 export my_zone=us-central1-a
export my_cluster=demo-cluster
gcloud container node-pools create "temp-pool-1" --cluster=$my_cluster --zone=$my_zone --num-nodes "2" --node-labels=temp=true --preemptible
Here we declare two variables, my_zone and my_cluster, for the zone and cluster name. Then we create a separate node pool called temp-pool-1 in the same cluster with 2 nodes, give the nodes the label temp=true, and make the nodes preemptible (spot instances)
# Tainting a node
50 kubectl get nodes To get the nodes; you will be able to see the demo-cluster
nodes and the temp-pool-1 nodes
51 kubectl get nodes -l temp=true To get the temp-pool nodes alone, as we are asking for nodes
with the label temp=true
52 kubectl taint node -l temp=true nodetype=preemptible:NoExecute
This command taints the preemptible temp-pool-1 nodes; any pods already deployed on the tainted nodes that don't tolerate the taint will be evicted
53 kubectl apply -f web-toleration.yaml This will deploy the web application on the tainted node.
//PVC
54 cd ~/k8s/Storage/
kubectl get persistentvolumeclaim (or)
kubectl get pvc
This command lists any persistent volume claims created
55 kubectl apply -f pvc-demo.yaml This YAML file creates a PVC volume
56 kubectl apply -f pod-volume-demo.yaml This YAML file attaches the PVC created above to the
pod it creates
57 kubectl get pods You should be able to see a pod running for the above
deployment
58 kubectl exec -it pvc-demo-pod -- sh This command opens a shell inside the created pod
59 echo Test webpage in a persistent volume! > /var/www/html/index.html
We create an index.html file inside the /var/www/html path in the container
60 chmod +x /var/www/html/index.html
exit
This command makes the index.html file executable, and then we leave the container
//Test persistence of PVC:
61 kubectl delete pod pvc-demo-pod To test the PVC volume, we are deleting the created pod, but
kubectl get pods the volume will be retained
62 kubectl get persistentvolumeclaim If you check you will still be able to see the volume
63 kubectl apply -f pod-volume-demo.yaml
kubectl get pods
Now we run the pod-volume-demo.yaml file again, so another pod will be created with the same volume.
You can see the newly created pod has a different name
64 kubectl exec -it pvc-demo-pod -- sh Now we again login to the created container
65 cat /var/www/html/index.html
exit
You will be able to see the same file created in the previous pod, thus proving that the volume persisted
66 kubectl delete pod pvc-demo-pod To delete the created pod
kubectl get pods To check if the pod is deleted
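The pvc-demo.yaml file is not reproduced in these notes; a typical PVC manifest of this kind (name and size are assumptions) might be:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hello-web-disk       # assumed claim name
spec:
  accessModes:
  - ReadWriteOnce            # mountable read-write by a single node
  resources:
    requests:
      storage: 30Gi          # illustrative size
```

Because the PVC is a separate object from the pod, deleting the pod (as above) leaves the claim and its data intact.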
//StatefulSets with PVCs Stateful applications are often used with persistent volumes
(PVs) and persistent volume claims (PVCs)
67 kubectl apply -f statefulset-demo.yaml This will create a service and update (rolling update) it to a
stateful service with the PVC volume mounted
# Verify the connection of pods in statefulsets
68 kubectl describe statefulset statefulset-demo This command is used to describe the stateful service
69 kubectl get pods You will be able to see 3 pods running this service
70 kubectl get pvc
kubectl describe pvc hello-web-disk-statefulset-demo-0
To view the PVC volume; the volume name here is hello-web-disk-statefulset-demo-0
//Verify the persistence of Persistent Volume
connections to Pods managed by StatefulSets
71 kubectl exec -it statefulset-demo-0 -- sh To check the persistent volume in the stateful set, we again
log in to the pod
72 cat /var/www/html/index.html There won’t be any html file as this is a new volume
73 echo Test webpage in a persistent volume > /var/www/html/index.html
We create an index.html file inside the /var/www/html path in the container
74 chmod +x /var/www/html/index.html This command is issued to make the index.html file executable
cat /var/www/html/index.html and then we leave the pod
exit
75 kubectl delete pod statefulset-demo-0 Now we are deleting the pod
76 kubectl get pods You will notice another pod is automatically created, and it
comes back with the same name rather than a different one,
because it is stateful
77 kubectl exec -it statefulset-demo-0 -- sh To check, we can log in to the new pod now
78 cat /var/www/html/index.html And check if the file exists; you will be able to see the .html file
exit
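The statefulset-demo.yaml is not shown here; a StatefulSet with a volumeClaimTemplate (names mirror the PVC above, other details are assumptions) might be sketched as:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-demo
spec:
  serviceName: statefulset-demo   # headless service giving pods stable names
  replicas: 3
  selector:
    matchLabels:
      app: statefulset-demo
  template:
    metadata:
      labels:
        app: statefulset-demo
    spec:
      containers:
      - name: web
        image: nginx:1.25         # illustrative image
        volumeMounts:
        - name: hello-web-disk
          mountPath: /var/www/html
  volumeClaimTemplates:           # one PVC is created per pod
  - metadata:
      name: hello-web-disk
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 30Gi
```

The volumeClaimTemplates section explains the PVC name seen above: hello-web-disk-statefulset-demo-0 is the template name plus the pod's ordinal, and the same claim is re-bound when the pod is recreated.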
//config map In Kubernetes, a ConfigMap is an API object used to store non-
confidential configuration data for your applications. It
provides a way to decouple environment-specific configuration
from your container images, making your deployments more
portable and manageable.
80 cd k8s/Secrets
kubectl create configmap sample --from-literal=message=hello
kubectl describe configmaps sample
This command creates a config map called sample with the key-value pair message=hello
81 kubectl create configmap sample2 --from-file=sample2.properties
kubectl describe configmaps sample2
This command creates a config map called sample2 with the key-value pairs found in sample2.properties
82 kubectl apply -f config-map-3.yaml
kubectl describe configmaps sample3
This command creates a config map called sample3 using the YAML file.
83 kubectl apply -f pubsub-configmap.yaml
kubectl get pods
This YAML file creates a deployment that exposes the config map values as environment variables inside the container
84 kubectl exec -it [MY-POD-NAME] -- sh To check that, we log in to the container
85 printenv
exit
This command prints all the env variables, so you should be
able to see the variable from the config map
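A pod can consume a ConfigMap value as an environment variable; here is a minimal sketch using the sample config map created above (the pod name, image, and env variable name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo
spec:
  containers:
  - name: app
    image: busybox:1.36       # illustrative image
    command: ["/bin/sh", "-c", "printenv MESSAGE && sleep 3600"]
    env:
    - name: MESSAGE           # env var name inside the container
      valueFrom:
        configMapKeyRef:
          name: sample        # the config map created earlier
          key: message        # the key whose value is injected
```

Running printenv inside this pod would show MESSAGE=hello, as in step 85.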
//secrets:
86 kubectl create secret generic pubsub-key --from-file=key.json=$HOME/credentials.json
Creates a secret called pubsub-key from the credentials.json file, stored under the key key.json
87 rm -rf ~/credentials.json Removes the local copy of the credentials file now that it is stored as a secret
88 kubectl apply -f pubsub-secret.yaml Deploys an application that uses the pubsub-key secret
89 kubectl get pods -l app=pubsub Lists the pods with the label app=pubsub
90 gcloud pubsub topics publish $my_pubsub_topic --message="Hello, world!"
Publishes a message to the Pub/Sub topic
91 kubectl logs -l app=pubsub Shows the logs of the pubsub pods, where the published message should appear
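The pubsub-secret.yaml is not shown in these notes; a common pattern for giving a pod access to such a secret is to mount it as a volume and point GOOGLE_APPLICATION_CREDENTIALS at the mounted key (the image and paths below are assumptions, not the lab's exact file):

```yaml
# Fragment of a pod/deployment template spec mounting the
# pubsub-key secret created in step 86.
spec:
  volumes:
  - name: google-cloud-key
    secret:
      secretName: pubsub-key       # the secret created earlier
  containers:
  - name: subscriber
    image: example/pubsub-app:v1   # hypothetical application image
    volumeMounts:
    - name: google-cloud-key
      mountPath: /var/secrets/google
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json   # matches the key.json key in the secret
```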