702.2 Container Usage
Weight: 5
Description: Candidates should be able to run and manage multiple containers that work together to provide a service. This includes the orchestration of Docker containers using Docker Compose in conjunction with an existing Docker Swarm cluster as well as using an existing Kubernetes cluster. This objective covers the feature sets of Docker Compose version 1.14 or later, Docker Swarm included in Docker 17.06 or later, and Kubernetes 1.6 or later.
1.- Understand the application model of Docker Compose
Docker Compose is a tool for defining and running multi-container Docker
applications. It allows you to describe your application's services, networks, and
volumes in a single YAML file, making it easy to manage complex applications.
• Concept: Docker Compose focuses on defining and running multi-container
applications on a single host (primarily for development and testing
environments) or as part of a Docker Swarm stack for simple orchestration.
• Application-as-Code: The entire application's configuration is codified in a
YAML file, ensuring consistency and repeatability.
• Services: Each service in a docker-compose.yml file represents a container or
a group of identical containers that run a specific part of your application
(e.g., a web server, a database, a microservice).
• Networks: Compose automatically sets up a default network for all services
defined in the file, allowing them to communicate by service name. You can
also define custom networks.
• Volumes: You can define named volumes or bind mounts to persist data for
your services.
2.- Create and run Docker Compose Files (version 3 or later)
The docker-compose.yml file is the core of a Docker Compose application. Version
3 is the recommended and widely used format.
version: '3.8'  # Specifies the Compose file format version
services:
  web:  # Service name (can be used as hostname within the network)
    image: nginx:latest
    ports:
      - "80:80"  # host_port:container_port
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro  # Bind mount a config file
    networks:
      - app_net  # Connects to this network
    depends_on:  # Defines service dependencies (startup order, but not readiness)
      - api
      - db
  api:
    build: .  # Build from the Dockerfile in the current directory
    ports:
      - "5000:5000"
    environment:
      DATABASE_URL: db  # Connect to the 'db' service
    networks:
      - app_net
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: mydatabase
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - db_data:/var/lib/postgresql/data  # Named volume for persistence
    networks:
      - app_net
volumes:  # Define named volumes
  db_data:
networks:  # Define custom networks
  app_net:
    driver: bridge  # Default driver for user-defined networks
Docker Compose Commands:
COMMAND                                         DESCRIPTION
docker-compose up                               Builds, (re)creates, attaches to, and starts containers
                                                for all services.
                                                • -d: run in detached mode (background).
                                                • --build: rebuild images before starting containers.
docker-compose down                             Stops and removes the containers and networks created by
                                                up (volumes and images are removed only with the
                                                --volumes and --rmi options).
docker-compose ps                               Lists containers for the current project.
docker-compose logs [service-name]              Displays log output from services.
docker-compose exec <service-name> <command>    Executes a command inside a running service container.
docker-compose restart [service-name]           Restarts services.
docker-compose build [service-name]             Builds or rebuilds services.
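A minimal sketch of a typical Compose workflow, run from the directory containing the docker-compose.yml shown above (the service name web comes from that example file):

# Build images where needed and start every service in the background
docker-compose up -d --build

# List the containers of this project and follow the logs of one service
docker-compose ps
docker-compose logs -f web

# Run a one-off command inside the running 'web' container
docker-compose exec web nginx -v

# Stop the application and remove its containers and networks
docker-compose down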
3.- Understand the architecture and functionality of Docker Swarm mode
Docker Swarm mode is Docker's native clustering and orchestration solution, built
directly into Docker Engine (version 1.12+). It's simpler to set up than Kubernetes but
offers fewer advanced features. Docker Swarm transforms a group of Docker Engines
into a single, virtual Docker Engine (a Swarm). It enables deploying and scaling
applications as "services" across multiple machines.
Architecture:
• Manager Nodes: Handle orchestration tasks, maintain the cluster state,
schedule tasks, and expose the Swarm API. A Swarm should have an odd
number of manager nodes for fault tolerance (e.g., 1, 3, 5).
• Worker Nodes: Run the actual containers (tasks) as scheduled by the manager
nodes. They report their state to the manager.
• Tasks: The atomic scheduling unit of a Swarm; each task carries a single container that is part of a service.
• Services: The definition of how to run a containerized application across the
Swarm (e.g., image to use, number of replicas, ports, volumes, networks).
• Stacks: A group of interrelated services, defined in a docker-compose.yml
file, deployed together as a single unit onto a Swarm.
Key Functionality:
• Service Discovery: Services can discover each other by name within the
Swarm.
• Load Balancing: Built-in load balancing distributes requests across service
replicas.
• Desired State Reconciliation: Managers continuously monitor the cluster and
reschedule tasks when nodes fail, so that the desired number of replicas is maintained.
• Rolling Updates: Allows deploying new service versions incrementally.
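As a brief hedged illustration of desired-state reconciliation and rolling updates (the service name web and the nginx image tags are made-up placeholders):

# Ask the Swarm for 4 replicas of a service; the managers keep this state reconciled
docker service create --name web --replicas 4 -p 8080:80 nginx:1.24

# If a worker node goes down, its tasks are rescheduled elsewhere automatically;
# the current state can be inspected at any time
docker service ls
docker service ps web

# Roll out a new image version one task at a time, waiting 10s between tasks
docker service update --image nginx:1.25 \
  --update-parallelism 1 --update-delay 10s web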
4.- Run containers in a Docker Swarm, including the definition of services, stacks
and the usage of secrets
Initializing Swarm
  docker swarm init [--advertise-addr <IP>]                 Initializes Swarm mode on a manager node.
  docker swarm join --token <token> <manager-ip>:<port>     Adds a worker node to the Swarm.
Defining Services
  docker service create [OPTIONS] <image> [COMMAND]         Creates a new service.
      • --name <service-name>
      • --replicas <N>: number of desired instances.
      • -p <published-port>:<target-port>: publish ports.
      • --network <network-name>
      • --mount type=volume,source=<volume-name>,target=<container-path>
      • --secret <secret-name>
Managing Services
  docker service ls                                         Lists services.
  docker service ps <service-name>                          Shows the tasks/containers of a service.
  docker service scale <service-name>=<N>                   Scales a service up or down.
  docker service update [OPTIONS] <service-name>            Updates a service (e.g., change image, replicas).
  docker service rm <service-name>                          Removes a service.
Deploying Stacks
  docker stack deploy -c <compose-file.yml> <stack-name>    Deploys all services defined in the Compose file as a stack.
Secrets Management
  docker secret create <secret-name> <file-path>            Creates a secret from a file.
  docker service create --secret <secret-name> ...          Mounts a secret into a service's containers (as a file under /run/secrets/).
  docker secret ls                                          Lists secrets.
  docker secret rm <secret-name>                            Removes a secret.
5.- Understand the architecture and application model of Kubernetes
Kubernetes (K8s) is a powerful, open-source container orchestration system for
automating deployment, scaling, and management of containerized applications. It's
more complex than Swarm but offers greater flexibility and features for enterprise-
grade applications. K8s groups hosts (nodes) into a cluster, and manages containers
on those nodes as a single system. It automates load balancing, storage
orchestration, scaling, self-healing, and service discovery.
Architecture:
Control Plane (Master Node) components:
  kube-apiserver             The front end of the Kubernetes control plane; exposes the
                             Kubernetes API.
  etcd                       A highly available key-value store for all cluster data
                             (cluster state, configuration, metadata).
  kube-scheduler             Watches for newly created Pods with no assigned node and
                             selects a node for them to run on.
  kube-controller-manager    Runs the controller processes that regulate the cluster's
                             desired state (e.g., Node Controller, Job Controller).
Worker Node components (historically called minions):
  kubelet                    An agent that runs on each node in the cluster; it ensures
                             that the containers described in PodSpecs are running and
                             healthy.
  kube-proxy                 A network proxy that runs on each node and maintains
                             network rules, allowing network communication to your Pods
                             from inside or outside the cluster.
  Container Runtime          The software that runs containers (e.g., Docker,
                             containerd, CRI-O).
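A short hedged sketch for inspecting these components on an existing cluster that kubectl is already configured against:

# List the nodes and their roles (control plane vs. worker)
kubectl get nodes -o wide

# Show the address of the API server for the current context
kubectl cluster-info

# The control plane components usually run as Pods in the kube-system namespace
kubectl get pods -n kube-system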
Application Model (Core Kubernetes Objects)
Kubernetes uses a declarative API where you describe your desired state, and
Kubernetes works to achieve that state. These are typically defined in YAML
manifest files.
• Pod: The smallest deployable unit in Kubernetes. A Pod represents a single
instance of a running process in your cluster. It can contain one or more
containers (tightly coupled, co-located, and sharing resources like network
namespace and storage volumes). Its purpose is to ensure containers that
need to share resources or communicate closely run together. Pods are
designed to be ephemeral and disposable.
• ReplicaSet: Ensures that a specified number of Pod replicas are running at any
given time. If a Pod fails, the ReplicaSet creates a new one. Its purpose is to
guarantee availability and scalability of a set of Pods. Users rarely create
ReplicaSets directly, since they are managed by Deployments.
• Deployment: A higher-level abstraction that manages the deployment and
scaling of a set of Pods. It automatically manages ReplicaSets to achieve the
desired state. Its purpose is to handle rolling updates, rollbacks, and self-
healing for stateless applications. You define the desired number of replicas,
the container image, and other Pod specifications.
• Service: An abstract way to expose an application running on a set of Pods as
a network service. Services provide a stable IP address and DNS name, even
if Pods underneath them change. Types:
• ClusterIP (Default): Exposes the Service on an internal IP in the cluster.
Only reachable from within the cluster.
• NodePort: Exposes the Service on the same port on each selected Node's
IP, making it accessible from outside the cluster.
• LoadBalancer: Creates an external load balancer in the cloud (if supported
by the cloud provider) and assigns a fixed, external IP address to the
Service.
• ExternalName: Maps the Service to an external DNS name.
Overall, the purpose of Services is to enable stable communication between Pods and to provide external access to applications.
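For example, a Deployment can be exposed through the different Service types directly from the command line; a hedged sketch, assuming the my-app-deployment defined in the manifests of the next section already exists:

# Expose the Deployment inside the cluster only (ClusterIP, the default type)
kubectl expose deployment my-app-deployment --name=my-app-clusterip --port=80

# Expose it on a port of every node's IP so it is reachable from outside the cluster
kubectl expose deployment my-app-deployment --name=my-app-nodeport \
  --type=NodePort --port=80 --target-port=80

# Show the assigned cluster IPs and the allocated node port
kubectl get services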
6.- Define and manage a container-based application for Kubernetes, including
the definition of Deployments, Services, ReplicaSets and Pods
“kubectl” is the command-line tool for running commands against Kubernetes
clusters. Functions:
• Deploy applications.
• Inspect and manage cluster resources.
• View logs.
Kubernetes resources are defined declaratively in YAML files.
Pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    app: my-app
spec:
  containers:
  - name: my-app-container
    image: nginx:latest
    ports:
    - containerPort: 80

ReplicaSet:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-replicaset
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: nginx:latest
        ports:
        - containerPort: 80

Service:
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP

Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: nginx:latest
        ports:
        - containerPort: 80
Kubectl commands for management:
COMMAND                                          DESCRIPTION
kubectl apply -f <file.yml>                      Creates or updates resources defined in a YAML file.
                                                 This is the primary way to deploy and manage applications.
kubectl get <resource-type>                      Lists resources (e.g., kubectl get pods, kubectl get
                                                 deployments, kubectl get services).
kubectl describe <resource-type> <name>          Shows detailed information about a specific resource.
kubectl logs <pod-name>                          Displays logs from a Pod's containers.
kubectl exec -it <pod-name> -- <command>         Executes a command inside a running Pod.
kubectl delete -f <file.yml>                     Deletes resources defined in a YAML file.
kubectl scale deployment <name> --replicas=<N>   Scales a deployment.
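Assuming the manifests above are saved as deployment.yml and service.yml (hypothetical file names), a typical management workflow looks like this:

# Create the Deployment and the Service
kubectl apply -f deployment.yml
kubectl apply -f service.yml

# The Deployment creates a ReplicaSet, which in turn creates the Pods
kubectl get deployments
kubectl get replicasets
kubectl get pods -l app=my-app

# Scale up and watch the ReplicaSet add Pods to reach the new desired state
kubectl scale deployment my-app-deployment --replicas=5
kubectl get pods -l app=my-app

# Inspect details and clean everything up again
kubectl describe service my-app-service
kubectl delete -f service.yml -f deployment.yml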
Practice Questions:
1.- Given the following Kubernetes deployment:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
myapp 2 2 2 0 17s
Which command scales the application to five containers?
a) kubectl edit deployment/myapp replicas=5
b) kubectl deployment myapp replicas=5
c) kubectl scale deployment/myapp --replicas=5
d) kubectl replicate deployment/myapp +3
e) kubectl clone deployment/myapp 3
2.- A docker swarm contains the following node:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
162jocpx41h8kdjt98kao8qek node-5 Ready active Reachable
2ds7z8m2poifjsiqbngbsmavp node-4 Ready active Reachable
fv7p1b2i2swo916zypyohiyfa node-3 Ready active
hrr0eouc6qhnoihndt8uu610g node-2 Ready active
txgjvd0azv0mzsjj6o7ob00h8 * node-1 Ready active Leader
Which of the nodes should be configured as DOCKER_HOST in order to run
services on the swarm? (Specify ONLY the HOSTNAME of one of the potential
target nodes)
node-4, node-5 or node-1
3.- The file myapp.yml exists with the following content:
version: "3"
services:
  frontend:
    image: frontend
    ports:
      - "80:80"
  backend:
    image: backend
    deploy:
      replicas: 2
Given that file was successfully processed by docker stack deploy myapp --compose-file myapp.yml, which of the following objects might be created?
(Choose THREE correct answers)
a) An overlay network called myapp_default.
b) A node called myapp_frontend.
c) A container called myapp_backend.2.ymia7v7of5g02j3j3i1btt8z.
d) A volume called myapp_frontend.1.
e) A service called myapp_frontend.
4.- If docker stack is to be used to run a Docker Compose file on a Docker Swarm,
how are the images referenced in the Docker Compose configuration made
available on the Swarm nodes?
a) docker stack instructs the Swarm nodes to pull the images from a registry,
although it does not upload the images to the registry.
b) docker stack transfers the image from its local Docker cache to each Swarm
node.
c) docker stack passes the images to the Swarm master which distributes the
images to all other Swarm nodes.
d) docker stack builds the images locally and copies them to only those Swarm
nodes which run the service.
e) docker stack triggers the build process for the images on all nodes of the
Swarm.
5.- Which elements exist at the highest level of the definition of every Kubernetes Object? (Specify the name of one of the elements, without any values.)
apiVersion
6.- Consider the following Kubernetes Deployment:
What happens if one of the Pods is terminated with the command kubectl delete pod?
a) The remaining Pods are stopped and the Deployment switches to the state
Failed.
b) The number of replicas in the ReplicaSet is changed to 4.
c) The ReplicaSet immediately starts a new replacement Pod.
d) The remaining Pods are stopped and a new ReplicaSet is started.
e) The Deployment switches to the state Degraded.