Module 4
• When you use the docker pull or docker run commands, Docker pulls the required images from your configured registry.
When you use the docker push command, Docker pushes your image to your configured registry.
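For example, a typical exchange with a registry looks like the following sketch (the image and repository names ubuntu:22.04 and myuser/myapp are illustrative):

    # Pull an image from the configured registry (Docker Hub by default)
    docker pull ubuntu:22.04

    # Run a container from the image; Docker pulls it first if it is not present locally
    docker run -it ubuntu:22.04 bash

    # Push a locally built image to the registry (requires a repository you own)
    docker push myuser/myapp:1.0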
• Docker objects
• When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other
objects. This section is a brief overview of some of those objects.
• Images
• An image is a read-only template with instructions for creating a Docker container. Often, an image is based on
another image, with some additional customization. For example, you may build an image which is based on
the ubuntu image, but installs the Apache web server and your application, as well as the configuration details
needed to make your application run.
• You might create your own images or you might only use those created by others and published in a registry. To
build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the
image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile
and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so
lightweight, small, and fast, when compared to other virtualization technologies.
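As a minimal sketch of this layering, consider the ubuntu-plus-Apache example above written as a two-instruction Dockerfile. Each instruction produces one layer; if only the RUN line changes, the cached base layer is reused and only the changed layer is rebuilt:

    # Base layer: the ubuntu image pulled from the registry
    FROM ubuntu:22.04

    # Second layer: installing Apache on top of the base image
    RUN apt-get update && apt-get install -y apache2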
• Containers
• A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the
Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a
new image based on its current state.
• By default, a container is relatively well isolated from other containers and its host machine. You can control
how isolated a container's network, storage, or other underlying subsystems are from other containers or from
the host machine.
• A container is defined by its image as well as any configuration options you provide to it when you create or
start it. When a container is removed, any changes to its state that aren't stored in persistent storage disappear.
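A minimal sketch of that lifecycle with the Docker CLI (the container name web and the nginx image are illustrative):

    # Create and start a container from an image
    docker run -d --name web nginx

    # Stop and restart it; its writable layer survives a stop/start cycle
    docker stop web
    docker start web

    # Remove it; any changes not saved to persistent storage (e.g. a volume) are lost
    docker rm -f web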
Docker Image
• Docker Image is an executable package of software that includes everything needed to run an application. The image determines how a container is instantiated: which software components will run, and how.
• Docker Container is a virtual environment that bundles application code with all the
dependencies required to run the application. The application runs quickly and reliably
from one computing environment to another.
Working with Images
• Docker images are the blueprints used to create Docker containers.
Images are lightweight, standalone, and executable software
packages that include everything needed to run an application:
code, runtime, libraries, environment variables, and configurations.
Docker Hub
• Docker Hub is a cloud-based registry service where users push their Docker container images and pull them again anytime, anywhere, via the internet. Repositories can be made either public or private.
• Docker Hub is used mainly by DevOps teams. It offers a free tier, works with Docker on any operating system, and acts as central storage where images are kept and pulled when required. Pushing and pulling images from Docker Hub requires only basic knowledge of Docker.
• Enterprises are adopting Docker rapidly. When a development team wants to share a project together with all its dependencies for testing, a developer builds an image and pushes it to Docker Hub. The testing team then pulls that same image from Docker Hub and can run it without installing any additional files, software, or plugins, because the image already contains every dependency (a minimal sketch of this workflow follows below).
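A sketch of the developer-to-tester workflow described above, assuming a hypothetical Docker Hub account devteam and image name myapp:

    # Developer: log in, build the image with all dependencies, and push it
    docker login
    docker build -t devteam/myapp:1.0 .
    docker push devteam/myapp:1.0

    # Tester: pull the exact same image and run it, with no extra setup
    docker pull devteam/myapp:1.0
    docker run -d devteam/myapp:1.0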
What is a Docker Image?
Docker images are built from a Dockerfile, which consists of a set of
instructions required to containerize an application. A Docker image is
platform-independent: it can be built in a Windows environment, pushed to
Docker Hub, and pulled by others running different OS environments such as
Linux. A Docker image includes the following pieces needed to run a piece of
software:
•Application code
•Runtime
•Libraries
•Environment variables and tools
Docker images are very lightweight, so they can be ported to different
platforms easily.
Components of Docker Image
The following terms and components are related to Docker images:
•Layers: Immutable filesystem layers stacked to form a complete image.
•Base Image: The foundational layer, often a minimal OS or runtime environment.
•Dockerfile: A text file containing instructions to build a Docker image.
•Image ID: A unique identifier for each Docker image.
•Tags: Labels used to manage and version Docker images.
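These components can be inspected directly from the CLI; a short sketch (the image name ubuntu:22.04 is illustrative):

    # List local images with their repository, tag, and image ID
    docker image ls

    # Show the layers that make up an image and the Dockerfile step behind each
    docker history ubuntu:22.04

    # Show full metadata, including the image ID and layer digests
    docker image inspect ubuntu:22.04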
• Main Uses of Docker Hub
• Efficient Image Management: Docker Hub simplifies the storage, management,
and sharing of Docker images, making it easy to organize and access container
images from anywhere.
• Enhanced Security: It runs security checks on images and provides detailed
reports on potential vulnerabilities, ensuring safer deployments.
• Automation Capabilities: With features like webhooks, Docker Hub can automate
continuous deployment and testing processes, streamlining your CI/CD pipeline.
• Integration and Collaboration: Docker Hub integrates seamlessly with popular
tools like GitHub and Jenkins, and allows managing permissions for users and
teams, facilitating efficient collaboration.
Docker Trusted Registry (DTR)
• Docker Trusted Registry (DTR) is an enterprise-grade, private image registry used
to store, manage, and secure Docker images. It is part of Docker Enterprise
Edition (EE) and allows organizations to:
• Store and manage Docker images on-premises or in the cloud.
• Enforce access control and security policies.
• Sign and verify images for content trust.
• Perform image vulnerability scanning.
• 📌 Key Difference from Docker Hub:
• Docker Hub is a public registry (default) for sharing images.
• DTR is a private registry for internal use, with enhanced security and
management features.
Key Features of Docker Trusted Registry
• a) Private and Secure Image Storage
• DTR provides private image storage, ensuring that your images remain within your organization's
infrastructure.
• Supports TLS encryption for secure image transfer.
• b) Role-Based Access Control (RBAC)
• DTR offers granular permissions with RBAC, allowing you to define who can push, pull, and manage images.
• You can assign roles like:
• Admin → Full control over the registry.
• Developer → Can push/pull images.
• Viewer → Can only view images.
• 🔍 c) Image Scanning and Security
• Built-in vulnerability scanning checks for security risks in your images.
• Integrates with CVE (Common Vulnerabilities and Exposures) databases.
• Helps prevent the deployment of vulnerable images.
• d) Image Signing and Content Trust
• DTR supports Docker Content Trust (DCT), allowing you to sign images.
• Verifies the authenticity and integrity of images before deployment.
• 🌐 e) Integration with Docker Enterprise
• Works seamlessly with Docker Universal Control Plane (UCP) for cluster
management.
• Provides single sign-on (SSO) and LDAP/Active Directory integration.
• 🚀 f) Automated Image Lifecycle Management
• You can configure policies for automated image retention, pruning, and
cleanup.
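Because DTR behaves like any Docker registry, pushing a signed image to it uses the standard CLI. A sketch, assuming a hypothetical DTR host dtr.example.com and repository engineering/myapp:

    # Enable Docker Content Trust so pushed images are signed
    export DOCKER_CONTENT_TRUST=1

    # Log in to the private registry, tag the image against it, and push
    docker login dtr.example.com
    docker tag myapp:1.0 dtr.example.com/engineering/myapp:1.0
    docker push dtr.example.com/engineering/myapp:1.0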
Dockerfile & Commands
• A Dockerfile is a script the Docker platform uses to generate container images, and from them containers, automatically. It is essentially a text document containing all the instructions a user could otherwise issue on the command line to assemble an image. Docker lets developers create and run containers: self-contained programs and systems that are independent of the underlying infrastructure. Building on the Linux kernel's resource-isolation capabilities, Docker allows developers and system administrators to move programs across systems and machines by executing them inside containers.
• Dockerfiles let Docker images, and the container environments built from them, be produced automatically rather than assembled by hand: Docker steps through the Dockerfile's series of command-line instructions as an automated build, and the resulting containers can run on any compatible Docker host. Docker containers can execute Linux and Windows applications. Docker containerization is essentially virtualization at the operating-system level: several independent containers can run within a single Linux instance, without the startup overhead of virtual machines.
• Docker builds images automatically by reading the instructions from a Dockerfile.
• A Dockerfile is a plain text file, without an extension, that contains all the commands, in order, needed to build a given image.
• By convention it is named Dockerfile.
• A Docker image consists of read-only layers, each of which represents one Dockerfile instruction. The layers are stacked, and each one captures the change from the previous layer. For example, if the first instruction creates a base layer from ubuntu and the second instruction installs Python, the install creates a second layer. Likewise, any change made by the instructions RUN, COPY, or ADD creates a new layer in the image.
• Containers add a writable (read-write) layer on top of an image's read-only layers.
• In simple words, a Dockerfile is a set of instructions, each of which creates a stacked layer, that collectively make up an image (which is a prototype or template for containers).
Frequently used Dockerfile instructions -
•FROM - Defines a base image, which can be pulled from Docker Hub.
(For example, to create a JavaScript application with Node as the backend, we need node as the base image so it can run a Node application.)
•RUN - Executes a command in a new image layer. (A Dockerfile can have multiple RUN instructions.)
•CMD - The command to be executed when running a container. (A Dockerfile should have only one CMD; if it has multiple CMDs, only the last one takes effect.)
•EXPOSE - Documents which ports the container listens on. (It is used only for documentation; it does not actually publish the ports.)
•ENV - Sets environment variables inside the image.
•COPY - Copies your local files or directories into the image.
•ADD - A more feature-rich version of the COPY instruction; COPY is generally preferred over ADD. The major difference between ADD and COPY is that ADD also accepts a URL as the source, whereas COPY accepts only local paths.
•ENTRYPOINT - Defines a container's executable. (You cannot override an ENTRYPOINT when starting a container unless you add the --entrypoint flag.)
•VOLUME - Defines which directory in an image should be treated as a volume. The volume is given a random name, which can be found using the docker inspect command.
•WORKDIR - Defines the working directory for subsequent instructions in the Dockerfile. (An important point to remember: it doesn't create a new intermediate layer in the image.)
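Putting several of these instructions together, here is a minimal sketch of a Dockerfile for the Node.js example mentioned above (file names such as server.js, and the port, are assumptions):

    # Base image: Node.js runtime pulled from Docker Hub
    FROM node:18

    # Working directory for all subsequent instructions
    WORKDIR /app

    # Copy dependency manifests first so this layer is cached across code changes
    COPY package*.json ./
    RUN npm install

    # Copy the application code (creates a new layer)
    COPY . .

    # Set an environment variable inside the image
    ENV NODE_ENV=production

    # Document the port the app listens on (does not publish it)
    EXPOSE 3000

    # Command executed when the container starts
    CMD ["node", "server.js"]

It could then be built and run with docker build -t myapp . followed by docker run -p 3000:3000 myapp.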
DevOps Monitoring Tool
Introduction to Nagios
Nagios is an open-source monitoring and alerting solution designed to oversee IT infrastructure
components like servers, networks, applications, and services. Originally developed by Ethan
Galstad in 1999 under the name NetSaint, Nagios has since grown into a robust and widely
adopted tool for ensuring the availability, performance, and security of critical systems. With
contributions from a large open-source community, Nagios has evolved into a cornerstone of IT
operations, offering solutions like Nagios XI, Log Server, Network Analyzer, and Fusion, which
cater to the diverse needs of modern infrastructure monitoring.
How Nagios works
Nagios is a comprehensive monitoring tool designed to ensure the smooth operation of IT
infrastructure. It offers flexibility with both command-line and web-based interfaces, allowing
administrators to monitor systems efficiently. Here's how Nagios works, step by step:
1. Monitoring Setup: Nagios provides two options for monitoring:
   • Agent-based: Independent agents are installed on servers to collect data, which is then sent to the Nagios server.
   • Agentless: Uses existing protocols to gather data without installing additional software on the servers.
   Both methods monitor critical system metrics like file system usage, CPU performance, and service status.
2. Dashboards and Alerts: The Nagios dashboard offers a real-time overview of key parameters, making it easy to track system health. When predefined thresholds, such as high CPU usage or low disk space, are crossed, Nagios sends alerts via email or SMS. This ensures administrators can respond quickly to issues, minimizing downtime.
3. Plugins and Scripts: Nagios runs as a service on a server and uses small scripts, or plugins, to check the status of hosts and services in your network. These plugins, written in languages like Perl or shell script, are executed at regular intervals, and the results are collected and stored for review. If a significant change is detected, additional scripts are triggered and further actions or notifications are initiated (a minimal configuration sketch follows this list).
4. Integration with AWS: Nagios integrates seamlessly with AWS environments. When installed on AWS, it provides scalable and secure monitoring for cloud infrastructure. The collected data is accessible through the Nagios web interface, allowing administrators to monitor both local and cloud systems in real time. We discuss this installation process in more detail in the section below.
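As a sketch of how such checks are declared, a Nagios host and service definition might look like the following. The host name, address, and thresholds are assumptions; linux-server, generic-service, and check_local_load come from the stock sample configuration shipped with Nagios Core:

    # Host to be monitored (inherits defaults from the linux-server template)
    define host {
        use        linux-server
        host_name  web01
        address    192.0.2.10
    }

    # Service check executed at regular intervals against that host
    define service {
        use                  generic-service
        host_name            web01
        service_description  CPU Load
        check_command        check_local_load!5.0,4.0,3.0!10.0,6.0,4.0
    }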
• Monitoring Process
• Nagios Web Interface (GUI): The Nagios Web Interface is a user-friendly
dashboard where administrators can see the real-time status of all monitored
resources. It helps users quickly check the status of different services, get
alerts, and track performance over time. Accessible through any modern web
browser, this interface is crucial for real-time monitoring and fixing issues
easily.
• Alert Notifications (SMS and Email): One of Nagios' key features is its ability
to alert administrators when something critical happens. These alerts can be
sent through SMS or email, based on the settings. If a resource or service
reaches a critical point, like low disk space or a service going down, Nagios
quickly sends a notification to ensure the issue gets addressed right away.
Limitations of Virtualization
• Virtualization is a powerful technology that has many benefits, but it also has a few limitations
that you should be aware of. Some of the main limitations of virtualization include:
• It requires significant RAM and CPU resources to run multiple operating systems and a virtual
hardware copy.
• The shift between private and public clouds and data centers makes the software development
lifecycle more complex.
• It’s monolithic and runs applications as single, large files.
• It consumes additional computing resources and CPU cycles very quickly.
• It doesn't run some applications properly: some older or specialized applications may not be compatible with virtualization software, or may require additional configuration to work properly.
• Virtualized environments may not be as easy to scale as physical ones, especially when it
comes to adding more hardware resources.
What are containers?
The comparison below contrasts virtual machines with containers:
• Operating System: A virtual machine includes a separate, completely independent operating system with its own kernel, and requires more CPU, memory, and storage resources. A container involves a user-mode view of the operating system that can be tailored to contain only those services your app needs, so it is light on resource requirements.
• Guest Compatibility: A virtual machine is compatible with almost all operating systems run inside it. A container is compatible only with an operating system version similar to the host's.
• Persistent Storage: A virtual machine uses a virtual hard disk (VHD) for single-VM local storage, or Server Message Block (SMB) for shared storage across multiple servers. A container uses local disks for local storage on a single node, and SMB for shared storage across multiple servers or nodes.
• Networking: A virtual machine's networking is conducted via virtual network adapters (VNAs). A container uses an isolated view of a VNA, for lightweight virtualization.
Micro-services and Containerization
• What are microservices?
• Gartner defines a microservice as a service-oriented application component that is:
• Tightly scoped
• Strongly encapsulated
• Loosely coupled
• Independently deployable
• Independently scalable
• According to AWS, microservice architecture involves building an application as independent components that run each application
process as a service, with these services communicating via a well-defined interface using lightweight APIs.
• The characteristics of microservices, as described by microservices.io, are that they are:
• Highly maintainable and testable
• Loosely coupled
• Independently deployable
• Organized around business capabilities; and
• Owned by a small team
• Take the example of an e-commerce web application. A microservice architecture approach would see the separation of the:
• Logging service
• Inventory service
• Shipping service
• Each service would have its own database and communicate with the other services via an API gateway. The development and deployment of each service can then proceed independently (a container-based sketch of this layout follows below).
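A container-based sketch of that separation, using hypothetical image and network names, with each service on a shared network behind an API gateway:

    # Shared network over which the services communicate
    docker network create shop-net

    # Each microservice runs in its own container (each with its own database)
    docker run -d --name inventory --network shop-net shop/inventory:1.0
    docker run -d --name shipping  --network shop-net shop/shipping:1.0
    docker run -d --name logging   --network shop-net shop/logging:1.0

    # The API gateway is the single entry point exposed to clients
    docker run -d --name gateway --network shop-net -p 80:8080 shop/gateway:1.0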
• The approach to microservices adoption has been geared towards
refactoring existing monolithic applications, rather than building new
applications from scratch. While microservices provide agility and
scalability options, they also require relevant supporting
infrastructure especially when you consider the complexity that
comes with managing hundreds of microservices across different
teams.
• For this reason, approaches such as DevOps and CI/CD are better
suited to ensure that the services are efficiently and effectively
managed from design, through development and deployment.
Orchestration
• Orchestration in DevOps refers to the automated coordination and
management of complex IT workflows, services, and infrastructure. It
involves:
• Automating repetitive tasks
• Managing multiple services across environments
• Ensuring smooth deployments and operations
• Reducing human intervention and improving efficiency
• 📌 Key Difference:
• Automation: Performing individual tasks automatically (e.g., deploying a
container).
• Orchestration: Managing the entire workflow (e.g., CI/CD pipelines,
infrastructure provisioning).
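As a toy sketch of the distinction, each docker command below is an automated task on its own, while the script as a whole orchestrates them into a workflow with a decision point (the image name and test command are assumptions):

    #!/bin/sh
    # Orchestrated workflow: build -> test -> deploy, stopping on any failure
    set -e

    # Task 1: build the image (automation)
    docker build -t myapp:ci .

    # Task 2: run the test suite inside a container (automation)
    # Decision point: set -e aborts the workflow here if tests fail
    docker run --rm myapp:ci npm test

    # Task 3: publish the tested image (automation)
    docker push myapp:ci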
IT Orchestration and Automation: Differences
• Complexity: Automation automates repetitive tasks and processes to streamline operations and reduce human intervention. Orchestration manages complex workflows and processes involving multiple systems, tools, and dependencies.
• Integration: Automation focuses on specific tasks or processes and may not consider interactions between different systems. Orchestration integrates and coordinates tasks across diverse systems, APIs, and platforms for end-to-end processes.
• Decision-making: Automation follows predefined rules and instructions without real-time decision-making capabilities. Orchestration enables decision-making logic and branching within workflows, allowing dynamic routing and adaptations.
• Collaboration: Automation typically lacks collaboration features, focusing solely on task execution and process automation. Orchestration facilitates collaboration among cross-functional teams through workflow visibility and shared task management.
• Human intervention: Automation minimizes human intervention by automating repetitive tasks and reducing the risk of errors. Orchestration enables human intervention at specific decision points, for complex decisions or exceptions to workflows.