DevOps Lab Manual

The document serves as a record notebook for the DevOps and Microservices course at NANDHA College of Technology, detailing experiments conducted during the academic year 2023-24. It includes a bonafide certificate, a list of experiments, and specific procedures for tasks such as creating Git repositories, installing Docker, and building Docker images for Python applications. Each experiment outlines the aim, procedure, and results achieved, emphasizing practical skills in software development and deployment.


NANDHA COLLEGE OF TECHNOLOGY
ERODE - 638052

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

IF4073 – DEVOPS AND MICROSERVICES
(Regulations 2021)

THIRD SEMESTER
(ACADEMIC YEAR 2023 - 24)

RECORD NOTEBOOK

REGISTER NUMBER

NAME OF THE STUDENT


NANDHA COLLEGE OF TECHNOLOGY
ERODE - 638052

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Bonafide Certificate

REGISTER NUMBER

Certified that this is the bonafide record of work done by
                    , Third Semester,
M.E. – COMPUTER SCIENCE AND ENGINEERING branch, during the
academic year 2023-24, in DEVOPS AND MICROSERVICES.

STAFF IN-CHARGE HEAD OF THE DEPARTMENT

Submitted for the University Practical Examination on .

INTERNAL EXAMINER EXTERNAL EXAMINER


LIST OF EXPERIMENTS

Ex.No.  Date  Title of the Experiment                                    Page No.  Marks  Sign

1             Creating a new Git repository, cloning an existing             1
              repository, checking changes into a Git repository,
              pushing changes to a Git remote, creating a Git branch
2             Installing Docker container on Windows/Linux,                  9
              issuing Docker commands
3             Building Docker images for a Python application               11
4             Setting up Docker and Maven in Jenkins and                    18
              first pipeline run
5             Running unit tests and integration tests in                   24
              Jenkins pipelines

AVERAGE:
EX.NO: 1
DATE :

CREATING A NEW GIT REPOSITORY, CLONING AN EXISTING REPOSITORY, CHECKING CHANGES INTO A GIT REPOSITORY, PUSHING CHANGES TO A GIT REMOTE, CREATING A GIT BRANCH

AIM:
To create a new Git repository, clone an existing repository, check changes into a
Git repository, push changes to a Git remote, and create a Git branch.

PROCEDURE:
CREATE A NEW GIT REPOSITORY
To put your project up on GitHub, you will need to create a repository for it to live in.
You can store a variety of projects in GitHub repositories, including open source projects.
With open source projects, you can share code to make better, more reliable software. You
can use repositories to collaborate with others and track your work.
1. In the upper-right corner of any page, use the drop-down menu, and select New
repository.

2. Type a short, memorable name for your repository. For example, "hello-world".

3. Optionally, add a description of your repository. For example, "My first repository on
GitHub."

4. Choose a repository visibility. For more information, see "About repositories."

5. Select Initialize this repository with a README.

6. Click Create repository.
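The same repository can also be created from the command line and linked to GitHub afterwards. A minimal sketch (assumes Git 2.28 or later for `git init -b`; the remote URL and user details are placeholders, not part of the steps above):

```shell
# Create the repository locally with a README, mirroring steps 1-6
mkdir hello-world && cd hello-world
git init -b main                          # initialize with "main" as the default branch
echo "# hello-world" > README.md          # stands in for "Initialize with a README"
git add README.md
git -c user.name="student" -c user.email="student@example.com" \
    commit -m "Initial commit"
# Link the GitHub remote (placeholder URL); push once the repo exists on GitHub:
git remote add origin https://github.com/YOUR-USERNAME/hello-world.git
```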

CLONING AN EXISTING REPOSITORY
If you want to get a copy of an existing Git repository — for example, a project you’d
like to contribute to — the command you need is git clone. If you’re familiar with other
VCSs such as Subversion, you’ll notice that the command is "clone" and not "checkout".
This is an important distinction — instead of getting just a working copy, Git receives a full
copy of nearly all data that the server has. Every version of every file for the history of the
project is pulled down by default when you run git clone. In fact, if your server disk gets
corrupted, you can often use nearly any of the clones on any client to set the server back to
the state it was in when it was cloned (you may lose some server-side hooks and such, but all
the versioned data would be there — see Getting Git on a Server for more details).
You clone a repository with git clone <url>. For example, if you want to clone the Git
linkable library called libgit2, you can do so like this:

$ git clone https://github.com/libgit2/libgit2


That creates a directory named libgit2, initializes a .git directory inside it, pulls down all the
data for that repository, and checks out a working copy of the latest version. If you go into the
new libgit2 directory that was just created, you’ll see the project files in there, ready to be
worked on or used.
If you want to clone the repository into a directory named something other than libgit2, you
can specify the new directory name as an additional argument:
$ git clone https://github.com/libgit2/libgit2 mylibgit
That command does the same thing as the previous one, but the target directory is
called mylibgit.
Git has a number of different transfer protocols you can use. The previous example
uses the https:// protocol, but you may also see git:// or user@server:path/to/repo.git , which
uses the SSH transfer protocol. Getting Git on a Server will introduce all of the available
options the server can set up to access your Git repository and the pros and cons of each.
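Besides https:// and SSH, a plain filesystem path is also a valid clone URL, which makes the clone semantics easy to try offline. A self-contained sketch (directory names are illustrative):

```shell
# A bare repository stands in for the remote server
origin="$(mktemp -d)/demo-origin.git"
git init --bare "$origin"

# Clone it into a chosen directory name, just as with:
#   git clone https://github.com/libgit2/libgit2 mylibgit
git clone "$origin" mylibgit              # warns that you cloned an empty repository
ls -d mylibgit/.git                       # the full repository data lives here
```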

PUSHING CHANGES TO A GIT REMOTE


1. To push your local changes to the remote repository, in the repository bar, click Push
origin.

2. If GitHub Desktop prompts you to fetch new commits from the remote, click Fetch.

3. Optionally, click Preview Pull Request to open a preview dialog where you can
review your changes and begin to create a pull request. For more information,
see "Creating an issue or pull request."

CHECKING CHANGES INTO A GIT REPOSITORY


A commit is like a snapshot of all the files in your project at a particular point in time.
When you created your new repository, you initialized it with
a README file. README files are a great place to describe your project in more detail, or
add some documentation such as how to install or use your project. The contents of
your README file are automatically shown on the front page of your repository.
Let's commit a change to the README file.
1. In your repository's list of files, click README.md.

2. Above the file's content, click .
3. On the Edit file tab, type some information about yourself.

4. Above the new content, click Preview changes.

5. Review the changes you made to the file. You will see the new content in green.

6. At the bottom of the page, type a short, meaningful commit message that describes
the change you made to the file. You can attribute the commit to more than one
author in the commit message. For more information, see "Creating a commit with
multiple authors."

7. Below the commit message fields, decide whether to add your commit to the
current branch or to a new branch. If your current branch is the default branch, you
should choose to create a new branch for your commit and then create a pull
request. For more information, see "Creating a pull request."

8. Click Propose file change.
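From the command line, the same README edit is a stage-and-commit cycle. A self-contained sketch (the throwaway setup, branch name, and commit message are illustrative; `git switch` needs Git 2.23 or later):

```shell
# Throwaway repository standing in for a clone of yours:
cd "$(mktemp -d)"
git init -b main
git config user.name "student" && git config user.email "student@example.com"
echo "# hello-world" > README.md
git add README.md && git commit -m "Initial commit"

# The README edit from steps 1-8, done on a new branch (step 7's suggestion):
git switch -c update-readme
echo "Some information about myself." >> README.md
git add README.md
git commit -m "Describe myself in the README"
# git push -u origin update-readme        # then open a pull request on GitHub
```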

CREATING A GIT BRANCH


Anyone with write permission to a repository can create a branch for an issue. You can
link multiple branches for an issue.
1. On GitHub.com, navigate to the main page of the repository.
2. Under your repository name, click Issues.

3. In the list of issues, click the issue that you would like to create a branch for.
4. In the right sidebar under "Development", click Create a branch. If the issue already
has a linked branch or pull request, click and at the bottom of the drop-down menu
click Create a branch.

5. By default, the new branch is created in the current repository from the default
branch. Edit the branch name and details as required in the "Create a branch for
this issue" dialog.

6. Choose whether to work on the branch locally or to open it in GitHub Desktop.


7. When you are ready to create the branch, click Create branch.
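Locally, the branch GitHub creates is an ordinary Git branch; the equivalent commands in a throwaway repository look like this (the branch name mimics what GitHub derives from an issue title, e.g. issue 12 "Fix login bug"):

```shell
# Minimal repository so the commands are runnable on their own
cd "$(mktemp -d)" && git init -b main
git -c user.name=student -c user.email=student@example.com \
    commit --allow-empty -m "initial commit"

git switch -c 12-fix-login-bug main       # create the issue branch from the default branch
git branch --show-current                 # prints: 12-fix-login-bug
```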

RESULT:
Thus, creating a new Git repository, cloning an existing repository, checking changes into a Git
repository, pushing changes to a Git remote, and creating a Git branch were done successfully.

EX.NO: 2
DATE :

INSTALLING DOCKER CONTAINER ON WINDOWS/LINUX, ISSUING DOCKER COMMANDS

AIM:
To install a Docker container on Windows/Linux and issue Docker commands.

PROCEDURE:
Windows Subsystem for Linux (WSL) 2 is a full Linux kernel built by Microsoft,
which allows Linux distributions to run without managing virtual machines. With
Docker Desktop running on WSL 2, users can leverage Linux workspaces and avoid
maintaining both Linux and Windows build scripts. In addition, WSL 2 provides
improvements to file system sharing and boot time.
Docker Desktop uses the dynamic memory allocation feature in WSL 2 to improve
resource consumption. This means Docker Desktop only uses the CPU and memory it
needs, while enabling CPU- and memory-intensive tasks, such as building a container,
to run much faster.
Additionally, with WSL 2, the time required to start a Docker daemon after a cold
start is significantly faster. It takes less than 10 seconds to start the Docker daemon compared
to almost a minute in the previous version of Docker Desktop.
Prerequisites
Before you turn on the Docker Desktop WSL 2, ensure you have:
 Windows 10, version 1903 or higher, or Windows 11.
 Enabled WSL 2 feature on Windows. For detailed instructions, refer to the Microsoft
documentation.
 Downloaded and installed the Linux kernel update package.
Turn on Docker Desktop WSL 2
1. Download Docker Desktop for Windows.
2. Follow the usual installation instructions to install Docker Desktop. If you are
running a supported system, Docker Desktop prompts you to enable WSL 2 during
installation. Read the information displayed on the screen and enable WSL 2 to
continue.
3. Start Docker Desktop from the Windows Start menu.

4. From the Docker menu, select Settings and then General.
5. Select the Use WSL 2 based engine check box.
If you have installed Docker Desktop on a system that supports WSL 2, this option is
enabled by default.
6. Select Apply & Restart.
Now docker commands work from Windows using the new WSL 2 engine.
Enabling Docker support in WSL 2 distros
WSL 2 adds support for “Linux distros” to Windows, where each distro behaves like a VM
except they all run on top of a single shared Linux kernel.
Docker Desktop does not require any particular Linux distro to be installed. The docker CLI
and UI all work fine from Windows without any additional Linux distros. However, for the
best developer experience, we recommend installing at least one additional distro and
enabling Docker support by:
1. Ensure the distribution runs in WSL 2 mode. WSL can run distributions in both v1
or v2 mode.
To check the WSL mode, run:
$ wsl.exe -l -v
To upgrade your existing Linux distro to v2, run:
$ wsl.exe --set-version (distro name) 2
To set v2 as the default version for future installations, run:
$ wsl.exe --set-default-version 2
2. When Docker Desktop starts, go to Settings > Resources > WSL Integration.
The Docker-WSL integration is enabled on your default WSL distribution. To change
your default WSL distro, run wsl --set-default <distro name>
For example, to set Ubuntu as your default WSL distro, run:
$ wsl --set-default ubuntu
Optionally, select any additional distributions you would like to enable the Docker-
WSL integration on.
3. Select Apply & Restart.

RESULT:
Thus, the Docker container was installed on Windows/Linux and Docker commands were issued successfully.

EX.NO: 3
DATE :

BUILDING DOCKER IMAGES FOR PYTHON APPLICATION

AIM:
To build Docker images for a Python application.

PROCEDURE:
Each time you create a new release on GitHub, you can trigger a workflow to publish
your image. The workflow in the example below runs when the release event triggers with
the published activity type. For more information on the release event, see "Events that trigger
workflows."
In the example workflow below, we use the Docker login-action and build-push-action
actions to build the Docker image and, if the build succeeds, push the built image to
Docker Hub.
To push to Docker Hub, you will need to have a Docker Hub account, and have a
Docker Hub repository created. For more information, see "Pushing a Docker container
image to Docker Hub" in the Docker documentation.
The login-action options required for Docker Hub are:
 username and password: This is your Docker Hub username and password. We
recommend storing your Docker Hub username and password as secrets so they
aren't exposed in your workflow file. For more information, see "Encrypted secrets."
The metadata-action option required for Docker Hub is:
 images: The namespace and name for the Docker image you are building/pushing to
Docker Hub.
The build-push-action options required for Docker Hub are:
 tags: The tag of your new image in the format
DOCKER-HUB-NAMESPACE/DOCKER-HUB-REPOSITORY:VERSION. You can set a single tag
as shown below, or specify multiple tags in a list.
 push: If set to true, the image will be pushed to the registry if it is built successfully.

YAML
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third party and are governed by separate terms of service,
# privacy policy, and support documentation.
# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.
name: Publish Docker image
on:
  release:
    types: [published]
jobs:
  push_to_registry:
    name: Push Docker image to Docker Hub
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3

      - name: Log in to Docker Hub
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: my-docker-hub-namespace/my-docker-hub-repository

      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
The above workflow checks out the GitHub repository, uses the login-action to log in to the
registry, and then uses the build-push-action action to build a Docker image based on your
repository's Dockerfile, push the image to Docker Hub, and apply a tag to the image.
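The workflows in this experiment assume the repository already contains a Dockerfile. For the Python application this experiment targets, a minimal Dockerfile might look like the following sketch (the base image tag, requirements.txt, and the app.py entry point are all assumptions about the project layout):

```dockerfile
# Minimal image for a Python application (all names are illustrative)
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # dependencies first, for layer caching
COPY . .
CMD ["python", "app.py"]
```

Copying requirements.txt before the rest of the source lets Docker reuse the dependency layer when only application code changes.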
Publishing images to GitHub Packages
Each time you create a new release on GitHub, you can trigger a workflow to publish your
image. The workflow in the example below runs when the release event triggers with the
published activity type. For more information on the release event, see "Events that trigger
workflows."
In the example workflow below, we use the Docker login-action, metadata-action, and
build-push-action actions to build the Docker image and, if the build succeeds, push the
built image to GitHub Packages.
The login-action options required for GitHub Packages are:
 registry: Must be set to ghcr.io.
 username: You can use the ${{ github.actor }} context to automatically use the
username of the user that triggered the workflow run. For more information,
see "Contexts."
 password: You can use the automatically-generated GITHUB_TOKEN secret for the
password. For more information, see "Automatic token authentication."
The metadata-action option required for GitHub Packages is:
 images: The namespace and name for the Docker image you are building.
The build-push-action options required for GitHub Packages are:
 context: Defines the build's context as the set of files located in the specified path.
 push: If set to true, the image will be pushed to the registry if it is built successfully.
 tags and labels: These are populated by output from metadata-action.
YAML
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.

name: Create and publish a Docker image

on:
  push:
    branches: ['release']

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Log in to the Container registry
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
The above workflow is triggered by a push to the "release" branch. It checks out the GitHub
repository, and uses the login-action to log in to the Container registry. It then extracts labels
and tags for the Docker image. Finally, it uses the build-push-action action to build the image
and publish it on the Container registry.
Publishing images to Docker Hub and GitHub Packages
In a single workflow, you can publish your Docker image to multiple registries by using
the login-action and build-push-action actions for each registry.
The following example workflow uses the steps from the previous sections ("Publishing
images to Docker Hub" and "Publishing images to GitHub Packages") to create a single
workflow that pushes to both registries.
YAML
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.

name: Publish Docker image

on:
  release:
    types: [published]

jobs:
  push_to_registries:
    name: Push Docker image to multiple registries
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3

      - name: Log in to Docker Hub
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Log in to the Container registry
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: |
            my-docker-hub-namespace/my-docker-hub-repository
            ghcr.io/${{ github.repository }}

      - name: Build and push Docker images
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
The above workflow checks out the GitHub repository, uses the login-action twice to log
in to both registries and generates tags and labels with the metadata-action action. Then the
build-push-action action builds and pushes the Docker image to Docker Hub and the
Container registry.

RESULT:
Thus, building Docker images for a Python application was done successfully.

EX.NO: 4
DATE :

SETTING UP DOCKER AND MAVEN IN JENKINS AND FIRST PIPELINE RUN

AIM:
To set up Docker and Maven in Jenkins and run the first pipeline.

PROCEDURE:
When installing Docker, make sure to use a Stable release as opposed to an Edge
release, or some functionality described here may not work.
Preparing the Application and Spinning Up Jenkins
1. First, make sure you are logged in to GitHub in any web browser. Then fork the
Spring PetClinic repository (the example application we’ll use). If you want more of a
challenge, swap out Spring PetClinic for your own application.
2. Clone your fork locally. Be sure to replace shanemacbride with your own GitHub
username. We will do all work within this directory, so cd into it as well.
$ git clone https://github.com/shanemacbride/spring-petclinic.git
$ cd spring-petclinic
3. Start up Docker. Our Jenkins container will make use of it.
4. Use Liatrio’s Alpine-Jenkins image, which is specifically configured for using Docker in
pipelines. To spin up the Alpine-Jenkins container and give it access to Docker, use docker
run. If you are interested in how the image is configured, be sure to look at the
liatrio/alpine-jenkins repository’s Dockerfile for an overview.
$ docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock liatrio/jenkins-alpine
5. Wait for the image to download and run. Afterward, Jenkins should be visible in a
web browser at localhost:8080.
Creating a Basic Pipeline Job
1. Create a new Pipeline job in Jenkins by clicking New Item, naming it,
and selecting Pipeline.
2. Configure the pipeline to refer to GitHub for source control management by selecting
Pipeline script from SCM. Set the repository URL to your fork of Spring PetClinic. The
URL I’ve entered here is: https://github.com/shanemacbride/spring-petclinic.git.

3. Save the job.
Creating a Dockerfile That Runs Our Java Application
1. Create a Dockerfile that will run the Jar generated by building Spring PetClinic. Create a
file named Dockerfile using your favorite text editor. We want to start with a Java image,
so specify Anapsix’s Alpine Java image as our base image.
FROM anapsix/alpine-java
2. Specify who the maintainer of this image should be, using a maintainer label.
LABEL maintainer="[email protected]"
3. Ensure the image has Spring PetClinic on it so it can be run. When Spring PetClinic is
built, the Jar is placed in a target directory. We simply need to copy that into the image.
COPY /target/spring-petclinic-1.5.1.jar /home/spring-petclinic-1.5.1.jar
4. Run Spring PetClinic when the container starts up. The complete Dockerfile:
FROM anapsix/alpine-java
LABEL maintainer="[email protected]"
COPY /target/spring-petclinic-1.5.1.jar /home/spring-petclinic-1.5.1.jar
CMD ["java","-jar","/home/spring-petclinic-1.5.1.jar"]
5. Commit this new file. We aren’t pushing any changes yet because we still need to create
a Jenkinsfile for the Pipeline job to execute correctly.
$ git add Dockerfile
$ git commit -m 'Created Dockerfile'
Creating a Basic Jenkinsfile
1. Create a Jenkinsfile to instruct our Pipeline job on what needs to be done. First, create the
file named Jenkinsfile and specify the first stage. In this stage, we are telling Jenkins to use
a Maven image, specifically version 3.5.0, to build Spring PetClinic. After this stage is
complete, it will generate a jar and place it in the target directory.
#!groovy
pipeline {
    agent none
    stages {
        stage('Maven Install') {
            agent {
                docker {
                    image 'maven:3.5.0'
                }
            }
            steps {
                sh 'mvn clean install'
            }
        }
    }
}
2. Run our Pipeline job created before. Make sure to push the Jenkinsfile up to
GitHub beforehand. You can run the job by clicking on the clock icon to the right. It
should successfully install Spring PetClinic using the Maven image.
$ git add Jenkinsfile
$ git commit -m 'Created Jenkinsfile with Maven Install Stage'
$ git push
Adding a Docker Build Stage to the Jenkinsfile
1. Confirm Spring PetClinic is successfully installing. Then package our application inside
an image using the Dockerfile created previously. It’s time for another Jenkinsfile stage. In
this stage, we won’t require a specific Docker image to be used, so any agent will do. The
image will be built using the Dockerfile in the current directory, and it will be tagged with
my Docker Hub username and repository as the latest image.
#!groovy
pipeline {
    agent none
    stages {
        stage('Maven Install') {
            agent {
                docker {
                    image 'maven:3.5.0'
                }
            }
            steps {
                sh 'mvn clean install'
            }
        }
        stage('Docker Build') {
            agent any
            steps {
                sh 'docker build -t shanem/spring-petclinic:latest .'
            }
        }
    }
}
2. Ensure the image was successfully built (it should be if the updated Jenkinsfile is
pushed up to GitHub and the job is run again). You can verify this by either looking at the
job’s console output or examining your images through the Docker CLI.
$ git add Jenkinsfile
$ git commit -m 'Added Docker Build Stage'
$ git push
$ # Run the Jenkins job which will execute this new stage and wait for it to finish...
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
shanem/spring-petclinic latest ef41393a932d 28 seconds ago 160MB
3. Now that we’ve built our image, verify that our Dockerfile works as expected by running
the new image with port 8080 (the port the Java servlet runs on) forwarded to port 8081.
We do this because our Alpine-Jenkins container is already running on port 8080. After it
spins up, we should be able to see Spring PetClinic in a web browser at
localhost:8081. Awesome!
$ docker run -p 8081:8080 shanem/spring-petclinic

Adding Docker Hub Credentials to Jenkins


Now that we have our application successfully installing and packaging itself into a Docker
image, we need to make that image available using Docker Hub.
1. Add your Docker Hub credentials into Jenkins. First, click on Credentials from the
Jenkins home page.
2. Click Add credentials under the global drop down menu.
3. Enter your Docker Hub credentials. Make sure to use only your Docker Hub username
and not your email address. These credentials will be referenced in the Jenkinsfile using their
ID value. Hit OK.

Adding a Docker Push Stage to the Jenkinsfile
Finally, the last stage will be added to our Jenkinsfile that pushes our image up to Docker
Hub.
1. Create this stage using any agent because we don’t need to run our Docker CLI commands
in a specific image. Using withCredentials, we can specify to use our Docker Hub
credentials defined within Jenkins to log in to Docker Hub via the Docker CLI and push our
newly built image up.
#!groovy
pipeline {
    agent none
    stages {
        stage('Maven Install') {
            agent {
                docker {
                    image 'maven:3.5.0'
                }
            }
            steps {
                sh 'mvn clean install'
            }
        }
        stage('Docker Build') {
            agent any
            steps {
                sh 'docker build -t shanem/spring-petclinic:latest .'
            }
        }
        stage('Docker Push') {
            agent any
            steps {
                withCredentials([usernamePassword(credentialsId: 'dockerHub',
                        passwordVariable: 'dockerHubPassword', usernameVariable: 'dockerHubUser')]) {
                    sh "docker login -u ${env.dockerHubUser} -p ${env.dockerHubPassword}"
                    sh 'docker push shanem/spring-petclinic:latest'
                }
            }
        }
    }
}
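Note that passing the password with -p exposes it on the command line, and newer Docker CLIs warn about this. A slightly safer sketch of the login step (same hypothetical credential IDs) reads the password from stdin:

```groovy
// Single-quoted Groovy string so the shell, not Groovy, expands the secret variables
withCredentials([usernamePassword(credentialsId: 'dockerHub',
        passwordVariable: 'dockerHubPassword', usernameVariable: 'dockerHubUser')]) {
    sh 'echo "$dockerHubPassword" | docker login -u "$dockerHubUser" --password-stdin'
    sh 'docker push shanem/spring-petclinic:latest'
}
```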
2. Commit these changes, push them up to the GitHub repository, and trigger our Pipeline
job to build in Jenkins.
$ git add Jenkinsfile
$ git commit -m 'Added Docker Push Stage'
$ git push
$ # Run the Jenkins job which will execute this new stage and wait for it to finish...
3. Wait for the job to finish running. Your image should now be on Docker Hub. Great!

RESULT:
Thus, setting up Docker and Maven in Jenkins and the first pipeline run were done
successfully.

EX.NO: 5
DATE :

RUNNING UNIT TESTS AND INTEGRATION TESTS IN JENKINS PIPELINES

AIM:
To run unit tests and integration tests in Jenkins pipelines.

PROCEDURE:
Docker Desktop WSL 2 backend on Windows
Windows Subsystem for Linux (WSL) 2 is a full Linux kernel built by Microsoft,
which allows Linux distributions to run without managing virtual machines. With Docker
Desktop running on WSL 2, users can leverage Linux workspaces and avoid maintaining
both Linux and Windows build scripts. In addition, WSL 2 provides improvements to file
system sharing and boot time.
Docker Desktop uses the dynamic memory allocation feature in WSL 2 to improve
resource consumption. This means Docker Desktop only uses the CPU and memory it
needs, while enabling CPU- and memory-intensive tasks, such as building a container,
to run much faster.
Additionally, with WSL 2, the time required to start a Docker daemon after a cold start
is significantly faster. It takes less than 10 seconds to start the Docker daemon compared
to almost a minute in the previous version of Docker Desktop.
Prerequisites
Before you turn on the Docker Desktop WSL 2, ensure you have:
 Windows 10, version 1903 or higher, or Windows 11.
 Enabled WSL 2 feature on Windows. For detailed instructions, refer to the Microsoft
documentation.
 Downloaded and installed the Linux kernel update package.
Turn on Docker Desktop WSL 2
1. Download Docker Desktop for Windows.
2. Follow the usual installation instructions to install Docker Desktop. If you are
running a supported system, Docker Desktop prompts you to enable WSL 2 during
installation. Read the information displayed on the screen and enable WSL 2 to
continue.

3. Start Docker Desktop from the Windows Start menu.
4. From the Docker menu, select Settings and then General.
5. Select the Use WSL 2 based engine check box.
If you have installed Docker Desktop on a system that supports WSL 2, this option is
enabled by default.
6. Select Apply & Restart.
Now docker commands work from Windows using the new WSL 2 engine.
Enabling Docker support in WSL 2 distros
WSL 2 adds support for “Linux distros” to Windows, where each distro behaves like a VM
except they all run on top of a single shared Linux kernel.
Docker Desktop does not require any particular Linux distro to be installed. The docker CLI
and UI all work fine from Windows without any additional Linux distros. However, for the
best developer experience, we recommend installing at least one additional distro and
enabling Docker support by:
1. Ensure the distribution runs in WSL 2 mode. WSL can run distributions in both v1 or
v2 mode.
To check the WSL mode, run:
$ wsl.exe -l -v
To upgrade your existing Linux distro to v2, run:
$ wsl.exe --set-version (distro name) 2
To set v2 as the default version for future installations, run:
$ wsl.exe --set-default-version 2
2. When Docker Desktop starts, go to Settings > Resources > WSL Integration.
The Docker-WSL integration is enabled on your default WSL distribution. To change
your default WSL distro, run wsl --set-default <distro name>
For example, to set Ubuntu as your default WSL distro, run:
$ wsl --set-default ubuntu
Optionally, select any additional distributions you would like to enable the Docker-
WSL integration on.
3. Select Apply & Restart.
Develop with Docker and WSL
The following section describes how to start developing your applications using Docker and
WSL 2. We recommend that you have your code in your default Linux distribution for the
best development experience using Docker and WSL 2. After you have enabled WSL 2 on

Docker Desktop, you can start working with your code inside the Linux distro and ideally
with your IDE still in Windows. This workflow is straightforward if you are using VSCode.
1. Open VS Code and install the Remote - WSL extension. This extension allows you
to work with a remote server in the Linux distro and your IDE client still on
Windows.
2. Now, you can start working in VS Code remotely. To do this, open your terminal
and type:
$ wsl
$ code .
This opens a new VS Code window connected remotely to your default Linux distro,
which you can check in the bottom corner of the screen.
Alternatively, you can type the name of your default Linux distro in your Start menu,
open it, and then run code .
3. When you are in VS Code, you can use the terminal in VS Code to pull your code and
start working natively from your Windows machine.
GPU support
Starting with Docker Desktop 3.1.0, Docker Desktop supports WSL 2 GPU
Paravirtualization (GPU-PV) on NVIDIA GPUs. To enable WSL 2 GPU Paravirtualization,
you need:
 A machine with an NVIDIA GPU
 The latest Windows Insider version from the Dev Preview ring
 Beta drivers from NVIDIA supporting WSL 2 GPU Paravirtualization
 Update WSL 2 Linux kernel to the latest version using wsl --update from an elevated
command prompt
 Make sure the WSL 2 backend is enabled in Docker Desktop
To validate that everything works as expected, run the following command to run a short
benchmark on your GPU:
$ docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
The following displays:
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
	-fullscreen       (run n-body simulation in fullscreen mode)
	-fp64             (use double precision floating point values for simulation)
	-hostmem          (stores simulation data in host memory)
	-benchmark        (run benchmark to measure performance)
	-numbodies=<N>    (number of bodies (>= 1) to run in simulation)
	-device=<d>       (where d=0,1,2.... for the CUDA device to use)
	-numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
	-compare          (compares simulation results running once on the default GPU and once on the CPU)
	-cpu              (run n-body simulation on the CPU)
	-tipsy=<file.bin> (load a tipsy model file for simulation)

> NOTE: The CUDA Samples are not meant for performance measurements. Results
may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
MapSMtoCores for SM 7.5 is undefined. Default to use 64 Cores/SM
GPU Device 0: "GeForce RTX 2060 with Max-Q Design" with compute capability 7.5

> Compute 7.5 CUDA device: [GeForce RTX 2060 with Max-Q Design]
30720 bodies, total time for 10 iterations: 69.280 ms
= 136.219 billion interactions per second
= 2724.379 single-precision GFLOP/s at 20 flops per interaction

RESULT:
Thus, running unit tests and integration tests in Jenkins pipelines was done
successfully.

