Kubernetes
Imperative way:
kubectl run my-pod --image=nginx
Declarative way:
Using a YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
kubectl apply -f filename.yml
watch kubectl get po
kubectl get po
kubectl get po -o wide
kubectl run nginx1 --image=nginx
kubectl exec -it multi-container-pod -- bash
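The exec command above drops into a pod named multi-container-pod, which is not defined earlier in these notes; a minimal sketch of such a manifest, assuming an nginx container plus a busybox sidecar (names and images are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: nginx              # main web server container
    image: nginx:latest
  - name: sidecar            # helper container sharing the pod's network and lifecycle
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
With more than one container in the pod, pick the target with -c, e.g. kubectl exec -it multi-container-pod -c sidecar -- sh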
Namespaces:
A namespace in Kubernetes is a way to logically divide a cluster into
multiple isolated environments. It helps in managing resources efficiently when
multiple teams, projects, or applications share the same cluster.
kubectl get namespaces
kubectl create namespace <namespace-name>
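A namespace can also be created declaratively; a minimal sketch, assuming the file is saved as namespace.yml and the namespace is called dev (matching the ns used later in the history):
apiVersion: v1
kind: Namespace
metadata:
  name: dev
kubectl apply -f namespace.yml
kubectl run nginx --image=nginx -n dev   # run a pod inside the dev namespace
kubectl get po -n dev                    # list pods only from dev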
controlplane:~$ history
1 exit
2 halt
3 FILE=/ks/wait-background.sh; while ! test -f ${FILE}; do clear; sleep 0.1;
done; bash ${FILE}
4 vi pod.yml
5 kubectl apply -f pod.yml
6 kubectl get po
7 kubectl get no
8 kubectl run nginx1 --image=nginx
9 clear
10 kubectl get po
11 kubectl exec -it multi-container-pod --bash
12 kubectl exec -it multi-container-pod -- bash
13 kubectl get po -o wide
14 cat pod.yml
15 vi pod.yml
16 kubectl get po
17 kubectl --help
18 kubectl delete controllerrevisions.apps
19 kubectl delete multi-container-pod
20 kubectl get po -- wide
21 kubectl get po - wide
22 kubectl get po -o wide
23 kubectl delete multi-container-pod
24 kubectl delete multi-container-pod --force
25 kubectl delete nginx --force
26 lubectl get po
27 kubectl get po
28 kubectl get pod
29 kubectl get po
30 kubectl delete multi-container-pod
31 kubectl --help
32 clear
33 kubectl get po
34 kubectl get po -f wide
35 kubectl get po -o wide
36 kubectl delete multi-container-pod
37 kubectl delete pod multi-container-pod
38 clear
39 kubectl get po -o wide
40 kubectl delete pod nginx1
41 kubectl get po -o wide
42 clear
43 kubectl create ns dev
44 ls
45 kubectl apply --help
46 clear
47 kubectl apply -f pod.yml --image=nginx --namespace=dev
48 kubectl apply -f pod.yml --image=nginx -n=dev
49 kubectl apply -f pod.yml -n=dev
50 kubectl get po -n=dev
51 history
52 kubectl explain servicename
K8s SERVICE:
A Service in Kubernetes is an abstraction that defines a logical set of
Pods and a policy by which they can be accessed. Services provide stable networking
for pods that may change dynamically (due to scaling, restarts, or failures).
Why Do We Need a Service?
Pods are ephemeral – they can be created or destroyed at any time.
Each Pod gets a unique IP, but it changes when a Pod is restarted or
rescheduled.
Services provide a fixed IP (ClusterIP) and DNS name to access pods reliably.
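As an illustration of that stable endpoint, a minimal ClusterIP Service sketch; the service name nginx-svc and the app: nginx label are assumptions about how the target pods are labelled:
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx          # traffic is load-balanced across pods carrying this label
  ports:
  - port: 80            # port exposed on the service's ClusterIP
    targetPort: 80      # container port on the backing pods
Inside the cluster the service is also reachable by DNS, e.g. nginx-svc.default.svc.cluster.local when it lives in the default namespace.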
What is a Cluster in Kubernetes?
A Kubernetes Cluster is a collection of machines (nodes) that work together to run
containerized applications. It provides the infrastructure to deploy, manage, and
scale applications automatically.
Components of a Kubernetes Cluster
A Kubernetes cluster consists of two main types of nodes:
1️⃣ Control Plane (Master Node)
The control plane is responsible for managing the entire cluster, scheduling
workloads, and maintaining the desired state of applications.
🔹 Key components of the Control Plane:
API Server (kube-apiserver) → Acts as the entry point for all cluster operations.
Controller Manager (kube-controller-manager) → Handles controllers like node
controller, replication controller, etc.
Scheduler (kube-scheduler) → Decides which node should run a new pod.
etcd → A distributed key-value store that stores all cluster data.
2️⃣ Worker Nodes
Worker nodes are the machines where application workloads (pods) run.
🔹 Key components of a Worker Node:
Kubelet → The agent that ensures pods are running properly on the node.
Container Runtime (e.g., Docker, containerd) → Runs the actual containers.
Kube Proxy → Handles networking between services and pods.
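These components can be inspected with kubectl (node names and add-ons will differ per cluster):
kubectl get nodes -o wide            # control plane and worker nodes with versions and IPs
kubectl describe node <node-name>    # kubelet, container runtime, capacity, and pods on one node
kubectl get pods -n kube-system      # control-plane and networking pods (apiserver, etcd, kube-proxy, ...)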
How Does a Cluster Work?
The user defines a desired state (e.g., running 3 replicas of an app; see the manifest sketch after these steps).
The control plane schedules the workloads on available worker nodes.
The kubelet on each worker node ensures the assigned pods are running.
Services expose applications internally or externally for access.
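A minimal sketch of such a desired state, assuming an app called my-app that should always run 3 replicas (the name and image are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # desired state: keep 3 pods running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:latest
The control plane schedules these pods onto worker nodes, and the kubelets keep them running.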
Why Do We Need a Kubernetes Cluster?
✅ Ensures high availability and fault tolerance.
✅ Allows automatic scaling and self-healing of applications.
✅ Manages networking between microservices efficiently.
Services in Kubernetes expose pods through different access types, ensuring stable networking.
Services Expose Pods via:
ClusterIP (Default) – Accessible only within the cluster via an internal IP.
NodePort – Exposes the pod on each node’s IP and a fixed port (e.g.,
NodeIP:NodePort).
LoadBalancer – Uses a cloud provider's ELB (Elastic Load Balancer) to provide
external access.
ExternalName – Maps the service to an external DNS name instead of routing to pods.
✅ Main Purpose of a Service:
Provides a stable endpoint for communication.
Load balances traffic to multiple pods.
Allows external access where needed.
kubectl expose deployment nginx --type=NodePort --port=80
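The expose command above is roughly equivalent to applying a NodePort Service like the sketch below; the app: nginx selector and the nodePort value are assumptions (if nodePort is omitted, Kubernetes picks one from the 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80            # service port inside the cluster
    targetPort: 80      # container port on the pods
    nodePort: 30080     # assumed value; reachable as <NodeIP>:30080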
Why Use a Deployment Instead of a Pod or ReplicaSet?
Self-Healing & High Availability
If a pod crashes, the deployment automatically recreates it.
Scalability
You can easily scale the number of replicas up or down.
Rolling Updates
Allows updating the application without downtime.
kubectl get deploy,rs,po -o wide
kubectl create deployment nginx-deployment --image=nginx:1.17.10 --replicas=5
kubectl rollout history deployment.apps/nginx-deploy
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl rollout undo deployment.apps/nginx-deploy --to-revision=1
kubectl rollout status deployment.apps/nginx-deploy
kubectl rollout history deployment.apps/nginx-deployment
Scale Deployment
kubectl scale deployment nginx-deployment --replicas=5
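A rolling update can then be triggered by changing the image and watched with the rollout commands above; a small workflow sketch, assuming the nginx-deployment created earlier (its container is named nginx by default):
kubectl set image deployment/nginx-deployment nginx=nginx:1.19.10   # change the image, starts a rolling update
kubectl rollout status deployment/nginx-deployment                  # watch the update progress
kubectl rollout history deployment/nginx-deployment                 # list revisions
kubectl rollout undo deployment/nginx-deployment --to-revision=1    # roll back if the new version misbehaves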