CICD Project Workshop
In this course we are going to create a CI/CD pipeline using various tools.
Tech stack used in this project
GitHub
Jenkins
Terraform
Ansible
Maven
SonarQube
JFrog
Docker
Kubernetes
Helm Charts
Prometheus
Grafana
Steps to perform during this CICD pipeline project
Set up Terraform
Provision the Jenkins master, build node, and Ansible server using Terraform.
Set up Ansible server.
Configure Jenkins master and build node using Ansible.
Create a Jenkins pipeline job
Create a Jenkinsfile from scratch.
Create Multi-branch pipeline
Enable webhook on GitHub.
Configure SonarQube and Sonar Scanner.
Execute SonarQube analysis.
Define rules and quality gates in SonarQube.
Configure SonarQube callback rules.
Set up JFrog Artifactory.
Create a Dockerfile.
Store Docker images in JFrog Artifactory.
Provision a Kubernetes cluster using Terraform.
Create Kubernetes objects.
Deploy the Kubernetes objects using Helm.
Set up Prometheus and Grafana using Helm Charts.
Monitor Kubernetes Cluster using Prometheus.
Pre-requisites
Install the tools below on your local system
Visual Studio Code
Git
Terraform
AWS CLI
Mobaxterm
Terraform
## Prepare Terraform Environment on Windows
As part of this, we should set up:
1. Terraform
2. VS Code
3. AWSCLI
### Install Terraform
1. Download the latest version of Terraform from [here](https://developer.hashicorp.com/terraform/downloads)
2. Set up the environment variable
Click Start --> search for "Edit the system environment variables" and open it.
Under the Advanced tab, choose "Environment Variables" --> under System variables, select the Path variable --> Edit --> New --> add the Terraform location to the Path variable.
In my system, this location is C:\Program Files\terraform_1.3.7
3. Run the command below to validate the Terraform version
```sh
terraform -version
```
The output should look something like the below
```sh
Terraform v1.3.7
on windows_386
```
### Install Visual Studio code
Download the latest version of VS Code from [here](https://code.visualstudio.com/download) and install it.
### AWSCLI installation
Download the latest version of the AWS CLI from [here](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and install it,
or you can run the below command in PowerShell or the Command Prompt:
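A sketch of that command, following the AWS CLI install guide linked above (verify it against the guide before running):

```sh
msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
```

Afterwards, open a new terminal and run `aws --version` to confirm the installation.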
### Terraform code
1. Create IAM user
2. Login to aws cli
3. Write the First Terraform code
4. Run the Terraform commands: `terraform init` --> `terraform validate` --> `terraform plan`
5. Before running the `terraform apply` command, check the AWS console
6. Run `terraform apply`
7. EC2 instance created
8. To destroy the created infrastructure, run `terraform destroy`
==============================================================
Terraform with Ansible
This document discusses using Terraform and Ansible to provision infrastructure and configure
Jenkins master and slave servers.
The document includes Terraform code for setting up EC2 instances, a VPC, and other resources. It also covers converting one instance into an Ansible server and using Ansible playbooks to configure the Jenkins servers.
The Ansible server is going to manage two different systems: through Ansible playbooks we are going to convert one server into a Jenkins master and the other into a Jenkins slave.
Screenshots of Terraform commands and the created instances are provided.
I have written a Terraform manifest file to create three EC2 instances using a `for_each` block. One of these instances will then be converted into the Ansible server.
Features
Setup 3 EC2 instances through Terraform
Provision Jenkins-master, Jenkins-slave and Ansible
Setup Ansible Server
Configure Jenkins master using Ansible
Without it, we would need to run the same script multiple times to create multiple instances. Rather than doing this, I am going to use one more meta-argument called `for_each`.
Write TF script to provision infrastructure V2-EC2-with-vpc-for-each.
Terraform code-with-VPC-for-each
```sh
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "demo-server" {
  ami           = "ami-053b0d53c279acc90"
  instance_type = "t2.micro"
  key_name      = "linux-KP"
  //security_groups = ["demo-sg"]
  vpc_security_group_ids = [aws_security_group.demo-sg.id]
  subnet_id              = aws_subnet.Nam-public-subnet-01.id
  for_each               = toset(["jenkins-master", "jenkins-slave", "ansible"])
  tags = {
    Name = "${each.key}"
  }
}

resource "aws_security_group" "demo-sg" {
  name        = "demo-sg"
  description = "SSH Access"
  vpc_id      = aws_vpc.Nam-vpc.id
  ingress {
    description = "SSH access"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  tags = {
    Name = "ssh-port"
  }
}

resource "aws_vpc" "Nam-vpc" {
  cidr_block = "10.1.0.0/16"
  tags = {
    Name = "Nam-vpc"
  }
}

resource "aws_subnet" "Nam-public-subnet-01" {
  vpc_id                  = aws_vpc.Nam-vpc.id
  cidr_block              = "10.1.1.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "us-east-1a"
  tags = {
    Name = "Nam-public-subnet-01"
  }
}

resource "aws_subnet" "Nam-public-subnet-02" {
  vpc_id                  = aws_vpc.Nam-vpc.id
  cidr_block              = "10.1.2.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "us-east-1b"
  tags = {
    Name = "Nam-public-subnet-02"
  }
}

resource "aws_internet_gateway" "Nam-igw" {
  vpc_id = aws_vpc.Nam-vpc.id
  tags = {
    Name = "Nam-igw"
  }
}

resource "aws_route_table" "Nam-public-rt" {
  vpc_id = aws_vpc.Nam-vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.Nam-igw.id
  }
}

resource "aws_route_table_association" "Nam-rta-public-subnet-01" {
  subnet_id      = aws_subnet.Nam-public-subnet-01.id
  route_table_id = aws_route_table.Nam-public-rt.id
}

resource "aws_route_table_association" "Nam-rta-public-subnet-02" {
  subnet_id      = aws_subnet.Nam-public-subnet-02.id
  route_table_id = aws_route_table.Nam-public-rt.id
}
```
```sh
terraform init
terraform validate
terraform plan
terraform apply
```
Three instances created
All code has been committed into the GitHub repo.
We have seen how to set up three different instances using Terraform. Now we need to convert one of these instances into an Ansible server. This Ansible server is going to manage the two other systems, and through Ansible playbooks we are going to convert one server into a Jenkins master and the other into a Jenkins slave. On the Jenkins slave we are also going to install Maven.
=============================================
Ansible setup
This document provides instructions for setting up Ansible.
It covers installing Ansible on Ubuntu 22.04, adding Jenkins master and slave as hosts, copying .pem files to the
Ansible server, testing the connection, and configuring Jenkins-master and Jenkins-slave in the hosts file.
1. Install Ansible
Take the public IP of the Ansible server and log in to it through MobaXterm.
Install Ansible on Ubuntu 22.04:
```sh
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
ansible --version
```
Add Jenkins master and slave as hosts: add the Jenkins master and slave private IPs to the inventory file. In this case, we are using /opt as our working directory for Ansible.
Jenkins-master
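For reference, a minimal inventory (hosts) file might look like the below; the group names and key path are assumptions, so substitute your own private IPs and file locations:

```sh
[jenkins-master]
<jenkins_master_private_IP>

[jenkins-slave]
<jenkins_slave_private_IP>

[all:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=/opt/linux-KP.pem
```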
2. Copy the .pem file to the Ansible server
Move the .pem file to the /opt location.
Give only read permission to the .pem file.
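The read-only permission step can be sketched as below; the key file here is a stand-in created just for illustration:

```sh
# stand-in for the real key copied to /opt
touch linux-KP.pem
# owner read-only, as expected for SSH private keys
chmod 400 linux-KP.pem
# verify the permission bits: prints 400
stat -c '%a' linux-KP.pem
```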
3. Test the connection
```sh
ansible -i hosts all -m ping
```
Configure Jenkins-master and Jenkins-slave in hosts file
===================================================
Ansible playbook to install Jenkins on Jenkins-master server
This document provides an Ansible playbook to install Jenkins on a Jenkins-master server. The playbook includes
steps to add the Jenkins repo keys, add the repository, install dependencies, and install Jenkins.
It also includes instructions for a dry run and running the playbook, as well as checking the Java version and Jenkins
status on the Jenkins-master server. The document concludes with instructions for opening port 8080 to access
Jenkins and accessing Jenkins-master.
Add the Jenkins repo keys to system
Add repository to system
Install dependencies
Install Jenkins
Write an Ansible playbook to install Jenkins on the Jenkins-master instance.
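A sketch of such a playbook, covering the four steps above; the repo key URL and Java package are assumptions based on the standard Jenkins Debian install instructions, so adjust them to your versions:

```sh
---
- hosts: jenkins-master
  become: true
  tasks:
    - name: add the jenkins repo key
      ansible.builtin.apt_key:
        url: https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
        state: present
    - name: add the jenkins repository
      ansible.builtin.apt_repository:
        repo: deb https://pkg.jenkins.io/debian-stable binary/
        state: present
    - name: install java (jenkins dependency)
      ansible.builtin.apt:
        name: openjdk-17-jre
        update_cache: yes
    - name: install jenkins
      ansible.builtin.apt:
        name: jenkins
        state: present
```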
Dry run: `ansible-playbook -i /opt/hosts jenkins-master-setup.yml --check`
Run playbook
Checked the Java version and Jenkins status on the Jenkins-master server.
To access Jenkins, port 8080 should be opened.
Modified V4-EC2-With_VPC_for_each.tf to open port 8080.
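The modification amounts to adding one more ingress rule to the demo-sg security group, along these lines:

```sh
ingress {
  description = "Jenkins"
  from_port   = 8080
  to_port     = 8080
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}
```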
Run terraform plan
Run terraform apply.
Port 8080 opened
Access Jenkins-master
===================================================
Ansible playbook to configure maven on Jenkins-slave server
This document provides an Ansible playbook to configure Maven on a Jenkins-slave server. The steps include
updating the system, installing Java, downloading and extracting Maven packages, and adding the path to the
bash_profile.
The document also includes screenshots of logging into the Jenkins-slave server, editing the playbook file, running
the ansible-playbook command, and checking the Maven version on the Maven-server.
Update the System
Install Java
Download the Maven packages and extract them
Add path to bash_profile
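Those steps can be sketched as a playbook like the below; the Maven download URL and version are assumptions, so match them to the version you actually want:

```sh
---
- hosts: jenkins-slave
  become: true
  tasks:
    - name: update the system
      ansible.builtin.apt:
        update_cache: yes
    - name: install java
      ansible.builtin.apt:
        name: openjdk-17-jre
        state: present
    - name: download the maven packages
      ansible.builtin.get_url:
        url: https://dlcdn.apache.org/maven/maven-3/3.9.6/binaries/apache-maven-3.9.6-bin.tar.gz
        dest: /opt
    - name: extract maven
      ansible.builtin.unarchive:
        src: /opt/apache-maven-3.9.6-bin.tar.gz
        dest: /opt
        remote_src: yes
    - name: add the maven path to bash_profile
      ansible.builtin.lineinfile:
        path: /home/ubuntu/.bash_profile
        line: 'export PATH=$PATH:/opt/apache-maven-3.9.6/bin'
```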
Login to Jenkins-slave server
vi jenkins-slave-setup.yml
Run ansible-playbook
Check mvn version on Maven-server
This is how you can configure your Jenkins master and slave systems using Ansible; this is how Ansible is used in a real-world project.
==================================================
Jenkins Pipeline
This document provides instructions for adding credentials and a slave node to Jenkins master. The steps include
adding Jenkins-slave server credentials, creating a new node, and testing with a small freestyle project.
We need to follow two steps: add credentials, and add the slave node to the Jenkins master. These credentials are used to log in to the slave system from the master node.
# Jenkins Master and Slave Setup
1. Add credentials
2. Add node
### Add Credentials
1. Manage Jenkins --> Manage Credentials --> System --> Global credentials --> Add credentials
2. Provide the below info to add credentials
kind: `ssh username with private key`
Scope: `Global`
ID: `maven_slave`
Username: `ubuntu`
private key: `linux-KP.pem key content`
### Add node
Follow the below steps to add a new slave node to Jenkins
1. Go to Manage Jenkins --> Manage nodes and clouds --> New node --> Permanent Agent
2. Provide the below info to add the node
Number of executors: `3`
Remote root directory: `/home/ubuntu/jenkins`
Labels: `maven`
Usage: `Use this node as much as possible`
Launch method: `Launch agents via SSH`
Host: `<Private_IP_of_Slave>`
Credentials: `<Jenkins_Slave_Credentials>`
Host Key Verification Strategy: `Non verifying Verification Strategy`
Availability: `Keep this agent online as much as possible`
Adding Jenkins-slave server credentials to Jenkins-master
Adding Node to Jenkins-master
Create New Node
Add
Node created
Logs : Agent successfully connected and online
Testing with small freestyle project
Created a test-job freestyle project
Build Now
maven.txt file created
===================================================
Multi-branch Pipeline and webhook
This document provides a step-by-step guide on setting up a pipeline and webhook in Jenkins. It covers topics such
as creating a pipeline job, writing a Jenkinsfile, adding GitHub credentials, creating a multi-branch pipeline, and
setting up a GitHub webhook. Screenshots are included to illustrate the process.
# Enable Webhook
1. Install "multibranch scan webhook trigger" plugin
From dashboard --> manage jenkins --> manage plugins --> Available Plugins
Search for "Multibranch Scan webhook Trigger" plugin and install it.
2. Go to multibranch pipeline job
job --> configure --> Scan Multibranch Pipeline Triggers --> Scan Multibranch Pipeline Triggers --> Scan by webhook
Trigger token: `<token_name>`
3. Add webhook to GitHub repository
Github repo --> settings --> webhooks --> Add webhook
Payload URL: `<jenkins_IP>:8080/multibranch-webhook-trigger/invoke?token=<token_name>`
Content type: `application/json`
Which event would you like to trigger this webhook: `just the push event`
Once it is enabled, make changes to the source to trigger the build.
Creating Pipeline in Jenkins
New Project - Nam-trend-job as pipeline
Pipeline declarative script
Build now
This is how to clone the repo through a pipeline job.
Write a Jenkinsfile with a build stage. To update the Jenkinsfile in the Git repo, first clone the repo locally.
Cloned repo on local and created a file 'Jenkinsfile'
Write a Jenkins file with a build stage
a) You need to mention the Maven path where it is installed on your system.
b) Give the Maven path in the Jenkinsfile.
c) Commit the Jenkinsfile into the Git repo.
d) Code committed to Git.
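Steps a)–d) produce a Jenkinsfile roughly like the below; the `maven` label matches the node label added earlier, while the Maven path is an assumed install location you must adjust to your slave:

```sh
pipeline {
    agent {
        node {
            label 'maven'   // label given to the slave node
        }
    }
    environment {
        // assumed Maven install location on the slave; change to yours
        PATH = "/opt/apache-maven-3.9.6/bin:$PATH"
    }
    stages {
        stage('build') {
            steps {
                sh 'mvn clean package'
            }
        }
    }
}
```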
Now build the job in Jenkins
How to add GitHub credentials to Jenkins.
Create a personal access token in GitHub
Add GitHub credentials in Jenkins
Add credentials in Nam-trend-job
Create Multi branch pipeline
I have created a dev branch and clicked on 'Scan Multibranch Pipeline Now'; the dev branch is picked up automatically. If you add any new branches, you need to run 'Scan Multibranch Pipeline Now' again.
Commit code into git
Added Jenkinsfile to stage branch
Branch Stage added in Jenkins multibranch pipeline
Setup GitHub Webhook
Enable Scan by Webhook and set the payload URL accordingly
Payload URL: `<jenkins_IP>:8080/multibranch-webhook-trigger/invoke?token=<token_name>`
Content type: `application/json`
add webhook
Made some changes in README.md file of main branch
Push has been triggered
Build successfully triggered
I have made changes in all branches in Git, and the builds have completed successfully in all branches.
===================================================
SonarQube Integration Setup
This document provides instructions for setting up SonarQube integration with Jenkins.
It covers steps such as creating a SonarQube account, generating authentication tokens, installing SonarQube
plugins, configuring SonarQube server and scanner, and adding SonarQube stages in the Jenkinsfile.
The document also includes screenshots of the setup process and demonstrates how to run builds and perform
quality gate checks using SonarQube.
Create a SonarQube account and add the Sonar credentials to Jenkins
## SonarQube Configuration
1. Create a SonarCloud account on https://sonarcloud.io
2. Generate an Authentication token on SonarQube
Account --> my account --> Security --> Generate Tokens
3. On Jenkins create credentials
Manage Jenkins --> manage credentials --> system --> Global credentials --> add credentials
- Credentials type: `Secret text`
- ID: `sonarqube-key`
4. Install SonarQube plugin
Manage Jenkins --> Available plugins
Search for `sonarqube scanner`
5. Configure sonarqube server
Manage Jenkins --> Configure System --> sonarqube server
Add Sonarqube server
- Name: `sonar-server`
- Server URL: `https://sonarcloud.io/`
- Server authentication token: `sonarqube-key`
6. Configure sonarqube scanner
Manage Jenkins --> Global Tool configuration --> Sonarqube scanner
Add sonarqube scanner
- Sonarqube scanner: `sonar-scanner`
Login to Sonarcloud.io
Go to Projects
Create Organization
Choose a free plan and click on Analyze new project
Add Organization and project name
Generate an authentication token on SonarQube
This is the generated token. You can see that a new Jenkins token has been created. Make sure that you copy it; you won't be able to see it again.
Install the SonarQube plugin
Jenkins --> Manage Jenkins --> Plugins --> Available plugins --> SonarQube Scanner
Add the SonarQube server: Manage Jenkins --> System --> SonarQube servers
Add the SonarQube scanner:
Manage Jenkins --> Tools
Write the sonar-project.properties file.
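A minimal sonar-project.properties for a Maven project might look like this; the key, organization, and paths are placeholders for your own values:

```sh
sonar.projectKey=<project_key>
sonar.organization=<organization_name>
sonar.projectName=<project_name>
sonar.sources=src/
sonar.java.binaries=target/classes
```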
commit to git
Add SonarQube stage in the Jenkinsfile.
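The stage can be sketched as below; `sonar-scanner` and `sonar-server` are the tool and server names configured above:

```sh
stage('SonarQube analysis') {
    environment {
        scannerHome = tool 'sonar-scanner'
    }
    steps {
        withSonarQubeEnv('sonar-server') {
            sh "${scannerHome}/bin/sonar-scanner"
        }
    }
}
```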
commit the code into git
Build successfully done
SonarQube Dashboard
Add Quality gates
Add conditions: I have added quality gate conditions for bugs and code smells and set threshold values as well. If either metric crosses its threshold, my build will fail.
Go to Projects --> Administration --> Quality Gates and add the created quality gate.
Run Build now
The quality gate passed since the bug count is less than 50.
Added sonar gate in Jenkinsfile
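The gate stage can be sketched with the SonarQube plugin's `waitForQualityGate` step; note it relies on SonarQube's webhook calling back into Jenkins (the callback rules mentioned earlier):

```sh
stage('Quality Gate') {
    steps {
        timeout(time: 1, unit: 'HOURS') {
            waitForQualityGate abortPipeline: true
        }
    }
}
```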
Commit the code in Git
Build triggered automatically
Sonar Gate test passes successfully
===================================================
JFrog Artifactory Integration with Jenkins
So far we have seen how to integrate GitHub, the Maven server, and SonarQube with Jenkins. In this section we are going to see how to integrate JFrog Artifactory with Jenkins.
This document provides a step-by-step guide on integrating JFrog Artifactory with Jenkins. The process includes creating an Artifactory account, generating an access token, adding credentials in Jenkins, installing the Artifactory plugin, updating the Jenkinsfile, creating a Dockerfile, and publishing a Docker image on Artifactory. The document also includes screenshots illustrating the process.
## Publish the jar file onto JFrog Artifactory
1. Create Artifactory account
2. Generate an access token with username (username must be your email id)
3. Add username and password under jenkins credentials
4. Install Artifactory plugin
5. Update Jenkinsfile with jar publish stage
```sh
def registry = 'https://valaxy01.jfrog.io'
stage("Jar Publish") {
steps {
script {
echo '<--------------- Jar Publish Started --------------->'
def server = Artifactory.newServer url:registry+"/artifactory" , credentialsId:"artifactory_token"
def properties = "buildid=${env.BUILD_ID},commitid=${GIT_COMMIT}";
def uploadSpec = """{
"files": [
{
"pattern": "jarstaging/(*)",
"target": "libs-release-local/{1}",
"flat": "false",
"props" : "${properties}",
"exclusions": [ "*.sha1", "*.md5"]
}
]
}"""
def buildInfo = server.upload(uploadSpec)
buildInfo.env.collect()
server.publishBuildInfo(buildInfo)
echo '<--------------- Jar Publish Ended --------------->'
}
}
}
```
Check-point:
Ensure the below are updated:
1. Your JFrog account details in place of `https://valaxy01.jfrog.io` in the definition of registry: `def registry = 'https://valaxy01.jfrog.io'`
2. Your credentials ID in place of `artifactory_token` in `def server = Artifactory.newServer url:registry+"/artifactory" , credentialsId:"artifactory_token"`
3. Your Maven repository name in place of `libs-release-local` in `"target": "libs-release-local/{1}",`
Create an Artifactory Account.
Generate an access token with a username.
Add username and Password under Jenkins Credentials.
Install the Artifactory plugin.
Update Jenkinsfile with jar publish stage.
Create a Dockerfile.
Create and publish a docker image on Artifactory.
Log in to jfrog.com, create an account there, and set up your platform environment on the cloud.
Create Maven repository
Create access token
Generate token
Token generated
Add credentials in Jenkins
Now install 'Artifactory' plugins
Update Jenkinsfile with jar publish stage.
source code path:
JFrog Artifactory URL: https://sanam01.jfrog.io
Artifact location: /home/ubuntu/jenkins/workspace/Nam-trend-multibranch_main/jarstaging/com/valaxy/demo-workshop/2.1.2
Credentials: jfrogartifact-credentials
Updated Jenkinsfile (update target path carefully)
Committed code into Git
Build successful
Published artifact in JFrog Artifactory
===================================================
Docker Integration with Jenkins
We have successfully built our code using Maven and performed SonarQube analysis. We have also published the
artifact to JFrog Artifactory as a JAR file.
However, since we are deploying this as a microservice, we need to convert it into a Docker image. To do this, we need
to generate Docker images as artifacts.
## Build and Publish a Docker image
1. Write and add dockerfile in the source code
```sh
FROM openjdk:8
ADD jarstaging/com/valaxy/demo-workshop/2.0.2/demo-workshop-2.0.2.jar demo-workshop.jar
ENTRYPOINT ["java", "-jar", "demo-workshop.jar"]
```
`Check-point:` version number in pom.xml and dockerfile should match
2. Create a Docker repository in JFrog
repository name: valaxy-docker
3. Install the `docker pipeline` plugin
4. Update the Jenkinsfile with the below stages
```sh
def imageName = 'valaxy01.jfrog.io/valaxy-docker/ttrend'
def version = '2.0.2'
stage(" Docker Build ") {
steps {
script {
echo '<--------------- Docker Build Started --------------->'
app = docker.build(imageName+":"+version)
echo '<--------------- Docker Build Ends --------------->'
}
}
}
stage (" Docker Publish "){
steps {
script {
echo '<--------------- Docker Publish Started --------------->'
docker.withRegistry(registry, 'artifactory_token'){
app.push()
}
echo '<--------------- Docker Publish Ended --------------->'
}
}
}
```
Check-point:
1. Provide jfrog repo URL in the place of `valaxy01.jfrog.io/valaxy-docker` in `def imageName = 'valaxy01.jfrog.io/valaxy-docker/ttrend
2. Match version number in `def version = '2.0.2'` with pom.xml version number
3. Ensure you have updated credentials in the field of `artifactory_token` in `docker.withRegistry(registry, 'artifactory_token'){`
Note: make sure docker service is running on the slave system, and docker should have permissions to /var/run/docker.sock
Integrate docker with Jenkins : Build and Publish a Docker image
Install docker on Jenkins slave system (Maven-server) .
Create a Dockerfile.
Create a docker repository in jfrog.
Install the "docker pipeline" plugin.
Update Jenkins file with docker build and publish stage.
1. Using Ansible playbook to install Docker on Jenkins-slave/maven-server.
2. Update jenkins-slave-setup.yml file
3. Docker installed on jenkins-slave/maven-server
4. Committed code into Git
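The Docker-related additions to jenkins-slave-setup.yml (step 2 above) might look like the below; `docker.io` is the Ubuntu package name, and the group step addresses the /var/run/docker.sock permission note later in this section:

```sh
    - name: install docker
      ansible.builtin.apt:
        name: docker.io
        state: present
        update_cache: yes
    - name: start and enable the docker service
      ansible.builtin.service:
        name: docker
        state: started
        enabled: yes
    - name: allow the ubuntu user to use docker
      ansible.builtin.user:
        name: ubuntu
        groups: docker
        append: yes
```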
Write and add Dockerfile in the source code
Committed to Git
Pushed into Git Repo
Create a docker repository in the Jfrog : repository name: namg-docker
Repo created
Install docker pipeline plugin in Jenkins
Update Jenkins file with the below stages
Check-point:
1. Provide the JFrog repo URL in place of `valaxy01.jfrog.io/valaxy-docker` in `def imageName = 'namg04.jfrog.io/namg-docker/namtrend'`
2. Match the version number in `def version = '2.1.2'` with the pom.xml version number
3. Ensure you have updated the credentials in the field of `artifactory_token` in `docker.withRegistry(registry, 'jfrogartifact-credentials'){`
Note: make sure the Docker service is running on the slave system, and Docker has permissions to /var/run/docker.sock
Commit updated Jenkins file into Git
Build triggered
Verify Docker repo in Jfrog artifactory
Test the Docker image by creating a container from it.
Open port 8000 in the jenkins-slave/maven-server security group and access the application.
===================================================
Kubernetes Setup
To set up a Kubernetes cluster using Terraform, follow these steps:
1) Access the EKS module code from the provided GitHub repository. This module helps create IAM roles, policies, the
EKS cluster, and a node group.
2) Copy the eks and sg_eks modules into the Terraform folder.
3) Create a VPC folder and move existing files into it.
4) Add the sg_eks and eks modules in the vpc.tf file, specifying the necessary parameters.
Once these steps are completed, the Terraform modules are ready to create the cluster.
Created a vpc folder and moved the existing files into it.
Copied the eks and sg_eks modules into the Terraform folder.
Added the sg_eks and eks modules in the vpc.tf file.
Added EKS module to Terraform
Renamed vpc file name
Execute terraform manifest file to setup EKS cluster
Cluster created in AWS
To destroy the EKS infrastructure, just comment out the modules in the vpc project.
===================================================
Integrate the Maven server with the Kubernetes cluster
Set up kubectl:
```sh
curl -LO https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin
kubectl version
```
Make sure you have installed the latest AWS CLI version. If AWS CLI version 1.x is installed, remove it and install AWS CLI 2.x:
```sh
yum remove awscli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
```
Configure the AWS CLI to connect to your AWS account: run `aws configure` and provide the access_key and secret_key.
Download Kubernetes credentials and cluster configuration (.kube/config file) from the cluster
```sh
aws eks update-kubeconfig --region us-east-1 --name namg-eks-01
```
Let's execute the command so that kubectl can authenticate with our cluster. You can see that the kubeconfig has now been copied into /home/ubuntu/.kube/config; earlier it would have been under /root/.kube/config.
===================================================
Deploying application on kubernetes
Create a directory called kubernetes.
Write manifest files under kubernetes: I have written a manifest file to create a namespace.
Run the command `kubectl apply -f namespace.yml` to create the namespace.
`kubectl get ns` will list all the namespaces available in Kubernetes.
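A namespace manifest is short; the name here is a placeholder for your own:

```sh
apiVersion: v1
kind: Namespace
metadata:
  name: <your_namespace>
```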
Write deployment.yaml manifest file
NOTE:
Why did we configure the environment variables in the manifest file?
Consumer_key, Consumer_token, Access_token, Access_token_secret
These are application-specific tokens created by the application team.
Just as MySQL will not start unless you pass variables like a username and password (because that is how MySQL was designed by its developers), this project is designed the same way by its application team. You don't need to worry about the values themselves; the application team will share documentation on how to use any particular image.
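A trimmed deployment.yaml sketch along those lines; the deployment name, namespace, and token values are placeholders, and only two of the four env vars are shown:

```sh
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ttrend-deployment
  namespace: <your_namespace>
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ttrend
  template:
    metadata:
      labels:
        app: ttrend
    spec:
      containers:
        - name: ttrend
          image: namg04.jfrog.io/namg-docker/namtrend:2.1.2
          ports:
            - containerPort: 8000
          env:
            - name: Consumer_key
              value: <consumer_key>
            - name: Access_token
              value: <access_token>
```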
Run `kubectl apply -f deployment.yaml` to deploy it.
Describe your pod to see the error
```sh
Warning  Failed   17m (x4 over 18m)    kubelet  Failed to pull image "namg04.jfrog.io/namg-docker/namtrend:2.1.2": rpc error: code = Unknown desc = failed to pull and unpack image "namg04.jfrog.io/namg-docker/namtrend:2.1.2": failed to resolve reference "namg04.jfrog.io/namg-docker/namtrend:2.1.2": failed to authorize: failed to fetch anonymous token: unexpected status: 401
Warning  Failed   17m (x4 over 18m)    kubelet  Error: ErrImagePull
Warning  Failed   16m (x6 over 18m)    kubelet  Error: ImagePullBackOff
Normal   BackOff  3m15s (x66 over 18m) kubelet  Back-off pulling image "namg04.jfrog.io/namg-docker/namtrend:2.1.2"
```
To resolve this error, we need to use a secret so that Kubernetes can authenticate with our JFrog Artifactory to pull the image.
## Integrate JFrog with the Kubernetes cluster
Create a dedicated user to use for a docker login
user menu --> new user
`user name`: jfrogcred
`email address`: [email protected]
`password`: <password>
To pull an image from JFrog at the Docker level, we should log in to JFrog using the username and password:
```sh
docker login https://namg04.jfrog.io
```
Generate the base64-encoded value of the ~/.docker/config.json file:
```sh
cat ~/.docker/config.json | base64 -w0
```
`Note:` For more, refer to the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/).
Make sure secret value name `regcred` is updated in the deployment file.
Create docker login user
Save the password in Notepad.
Write the secret.yml file.
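The secret is a docker-registry type secret whose data is the base64 string generated above; the name `regcred` matches the secret name referenced from the deployment file:

```sh
apiVersion: v1
kind: Secret
metadata:
  name: regcred
  namespace: <your_namespace>
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64_output_of_config.json>
```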
Create service.yml.
Open port 30082 in the security group.
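A NodePort service exposing the app on 30082 can be sketched as follows; the service name and app port are assumptions consistent with this section:

```sh
apiVersion: v1
kind: Service
metadata:
  name: ttrend-service
  namespace: <your_namespace>
spec:
  type: NodePort
  selector:
    app: ttrend
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30082
```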
Using deploy.sh file
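A sketch of deploy.sh, assuming the manifest file names used in this section; the snippet writes the script and syntax-checks it, which needs no running cluster:

```sh
# write a minimal deploy.sh applying the manifests in order
cat > deploy.sh <<'EOF'
#!/bin/bash
kubectl apply -f namespace.yml
kubectl apply -f secret.yml
kubectl apply -f deployment.yaml
kubectl apply -f service.yml
EOF
# make it executable and sanity-check the syntax
chmod +x deploy.sh
bash -n deploy.sh && echo "deploy.sh syntax OK"
```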
So far, we have seen how to deploy our application as a microservice and how we can access it from outside the Kubernetes cluster.
Switch to the ubuntu user.
This command doesn't work because the ubuntu user doesn't have the cluster credentials.
===================================================
Commit manifests in GitHub
Drag the manifest files from MobaXterm to the project source code folder.
Commit the code into Github
Give executable permission to deploy.sh file
Committed in to Github
Build has been triggered
Deploying app using Jenkins through deploy.sh
Write deploy stage in Jenkinsfile
Commit the file in Github
Build completed automatically
===================================================
Helm Charts
Helm is a package manager for Kubernetes.
💡 A package manager is a utility like APT or Yum. Whenever we want to install software or packages we use commands such as yum install git, yum install httpd, or apt install git, apt install apache2. Helm works the same way: it is a package manager that helps install specific packages on Kubernetes. For example, if we want to deploy a Jenkins server on Kubernetes, we can run helm install jenkins.
A chart is a package of pre-configured Kubernetes resources; in essence, a chart is a set of
manifest files.
💡 So here, if we want to create any resources on Kubernetes with Helm, we need to use charts.
These charts are pre-configured, meaning somebody has already written them and we just need to use them. As
said, if we want to install Jenkins, we can use a predefined chart to install it on our Kubernetes cluster.
A repository is a group of published charts which can be made available to others.
💡 The place where these charts are located is called a repository. All predefined charts are available in a
repository, so we can pull charts from the repository and use them for our purposes.
Helm is used for repetitive tasks and applications.
💡 Helm helps us automate repetitive deployment tasks.
Helm should be installed on our Jenkins slave (build server).
💡 In our case, we need to install Helm on the Jenkins slave.
# Helm setup
1. Install helm
```sh
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```
1. Validate helm installation
```sh
helm version
helm list
```
1. [optional] Download the .kube/config file to the build node
```sh
aws eks update-kubeconfig --region us-east-1 --name namg-eks-01
```
1. Setup helm repo
```sh
helm repo list
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm search repo <helm-chart>
helm search repo stable
```
1. Install mysql charts on Kubernetes
```sh
helm install demo-mysql stable/mysql
```
1. To pull the package from repo to local
```sh
helm pull stable/mysql
```
*Once you have downloaded the helm chart, it comes as a .tgz archive. You should extract it.*
In this directory, you can find
- templates
- values.yaml
- README.md
- Chart.yaml
If you'd like to change the chart, update the templates directory and bump the version (e.g. 1.6.9 to 1.7.0) in Chart.yaml.
Then run the command below to package it after your update
```sh
helm package mysql
```
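The version bump in Chart.yaml can also be scripted with sed; this sketch uses a throwaway copy of the file just to show the edit:

```shell
# Sketch: bump the chart version from 1.6.9 to 1.7.0 before repackaging.
mkdir -p /tmp/mysql
printf 'apiVersion: v1\nname: mysql\nversion: 1.6.9\n' > /tmp/mysql/Chart.yaml
sed -i 's/^version: 1.6.9/version: 1.7.0/' /tmp/mysql/Chart.yaml
grep '^version:' /tmp/mysql/Chart.yaml   # version: 1.7.0
```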
To deploy the helm chart
```sh
helm install mysqldb mysql-1.6.9.tgz
```
The above command deploys MySQL
To check deployment
```sh
helm list
```
To uninstall
```sh
helm uninstall mysqldb
```
To install nginx
```sh
helm search repo nginx
helm install demo-nginx stable/nginx-ingress
```
Install helm on Jenkins-slave server
Add stable repository
To install the mysql chart
helm install demo-mysql stable/mysql
Pull helm charts
Modified Chart.yaml
Modified values.yaml
Deploy it: helm install demo-1-mysql mysql
This is how you can make changes to predefined helm charts according to your requirements and deploy them.
To delete it
How to deploy the Namtrend app using a helm chart?
Here we are going to see how to create a helm chart and how we can deploy our own namtrend application with it.
First, we need to create a helm chart.
Deleted the existing manifest files and copied in our own manifest files created for the namtrend application
Deploy it
This is how we can deploy our microservice by using the helm charts
===================================================
Helm chart ‘namtrend’ app deployment using Jenkins
# Create a custom Helm chart
1. To create a helm chart template
```sh
helm create ttrend
```
by default, it contains
- values.yaml
- templates
- Chart.yaml
- charts
2. Replace the template directory with the manifest files and package it
```sh
helm package ttrend
```
3. Change the version number in Chart.yaml, then install the packaged chart
```sh
helm install ttrend ttrend-0.1.0.tgz
```
4. Create a jenkins job for the deployment
```sh
stage(" Deploy ") {
steps {
script {
echo '<--------------- Helm Deploy Started --------------->'
sh 'helm install ttrend ttrend-0.1.0.tgz'
echo '<--------------- Helm deploy Ends --------------->'
}
}
}
```
5. To list installed helm deployments
```sh
helm list -a
```
Other useful commands
1. To change the default namespace to valaxy
```sh
kubectl config set-context --current --namespace=valaxy
```
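To confirm the switch took effect, the current namespace can be read back (a sketch; assumes a configured kubeconfig):

```shell
# Print the namespace of the current context; should output "valaxy".
kubectl config view --minify --output 'jsonpath={..namespace}'
```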
Delete deployments
Helm charts copied into the source-code repository (tweet-trend-new)
Added the deploy stage to the Jenkinsfile
Commit the code into GitHub
Pipeline succeeded
===================================================
Prometheus and Grafana
💡 So far we have implemented pulling the code from GitHub and building it with the help of the Jenkins slave. We
have run SonarQube analysis on the source code. Next, we built a JAR file and published it to the JFrog
Artifactory, then converted it into a Docker image and published that to the JFrog Artifactory as well. After that, we
deployed it to the Kubernetes cluster by using Helm.
Now we would like to monitor our Kubernetes cluster; for that we can use Prometheus and Grafana.
Prometheus is an open-source systems monitoring and alerting toolkit which helps us monitor systems.
In case we find any abnormal behavior, it is going to send alerts.
Prometheus collects and stores the metrics as time-series data. It goes and collects the data from the sources
and keeps it on the server, stored as a time series.
For example, let's take a web server. That web server receives requests, and the number of requests changes
from time to time: now it could be 100 requests, after five minutes 120, after ten minutes 150.
Because the request counts change over time, time-series storage helps us keep those values at each point in time.
Prometheus scrapes targets.
That is, Prometheus itself goes and collects the data from the targets; the targets do not send data to Prometheus.
Next, PromQL is the language used to query time series in Prometheus.
Prometheus stores metrics; if we want to retrieve them, we use PromQL.
We write a query in PromQL, and Prometheus evaluates it against the stored data.
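For illustration, a PromQL query can also be run against Prometheus's HTTP API with curl; `<prometheus-host>` is a placeholder for your server:

```shell
# Sketch: query the node_load15 time series through the HTTP API.
curl 'http://<prometheus-host>:9090/api/v1/query?query=node_load15'
```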
Service discovery helps find services and monitor them. By default, Prometheus can monitor some
services through service discovery.
For Kubernetes, we don't even need to install anything: Kubernetes is monitored by Prometheus out of the box
because it is supported by service discovery.
Exporters help monitor third-party components that Prometheus cannot identify through
service discovery. In such cases, we need to install exporters. Exporters are essentially agents for
Prometheus.
Say you are running an nginx server and want information about it:
How many requests are there?
How is it performing?
Then we need to install an exporter for nginx, because service discovery is not available for nginx by default.
This is just an example; in such cases we need to install exporters, which act like agents.
Next, Prometheus can send alerts to the Alertmanager.
By default, Prometheus ships with an Alertmanager: a GUI which helps us see which alerts are firing.
Instead of the Alertmanager, we are going to use Grafana. Grafana and the Alertmanager do similar tasks, but
the Alertmanager is the built-in option in Prometheus.
Prometheus runs by default on port 9090 and the Alertmanager on port 9093; both run on the same server.
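A quick way to confirm both components are up on those ports (a sketch, run on the server itself; both liveness endpoints exist in the standard builds):

```shell
# Liveness endpoints of Prometheus (9090) and Alertmanager (9093).
curl -s http://localhost:9090/-/healthy
curl -s http://localhost:9093/-/healthy
```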
Prometheus architecture.
First, let's understand retrieval: the retrieval component pulls metrics from various sources. One of those sources is
service discovery: Prometheus can identify some services by default, and Kubernetes is one of them. You don't need to
install any agent on the Kubernetes cluster; Prometheus can monitor it out of the box, using service discovery to
discover targets and retrieve metrics from the cluster.
Similarly, we can monitor by using exporters. As said, exporters are agents. Some services cannot be monitored by
Prometheus by default; in such cases we use exporters, which expose the metrics on the target server so that the
Prometheus server can pull them.
The next piece is short-lived jobs: jobs that run briefly and then disappear. These jobs push their metrics to the
Pushgateway, and the retrieval component pulls them from there. So the retrieval component's main task is to collect all
the metrics from the various sources.
Once the data is collected, it is stored in the TSDB. TSDB stands for Time Series Database; it holds all the metrics at
different intervals. Suppose I want to know how my web server or database server is performing. Only by looking at the
data from the past couple of days or weeks, seeing how many requests came in at each point in time, can I tell how it is
performing now; and if there is any degradation in performance, I can try to increase capacity. That is possible only if
the metrics are collected at different time intervals. Say every five minutes I want to monitor HTTP requests, or every
five minutes I want to see how many database connections there are; that data is stored in the time-series database.
Next, the HTTP server feeds data to the Alertmanager, which checks whether the alerts are normal or abnormal; if
abnormal, it sends the alert to different destinations such as PagerDuty, email, Slack, or MS Teams.
For example: is CPU utilization high, are there too many requests, are there too many connections in the database?
All these kinds of conditions can be monitored by the Alertmanager, and it will send the notifications.
The same data can be turned into visualizations with the help of the Prometheus web UI, Grafana, or any other client.
There you can see the same data as a graphical user interface with nice graphs, so that you can make appropriate
decisions, and the alerts can be monitored there as well.
That's the reason Prometheus and Grafana work together to give a better solution.
===================================================
Prometheus Setup
# Prometheus setup
### pre-requisites
1. Kubernetes cluster
2. helm
## Setup Prometheus
1. Create a dedicated namespace for prometheus
```sh
kubectl create namespace monitoring
```
2. Add Prometheus helm chart repository
```sh
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
```
3. Update the helm chart repository
```sh
helm repo update
helm repo list
```
4. Install the prometheus
```sh
helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring
```
5. The above helm chart creates all services as ClusterIP. To access Prometheus outside of the cluster, we should change the service type to LoadBalancer
```sh
kubectl edit svc prometheus-kube-prometheus-prometheus -n monitoring
```
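As a non-interactive alternative to `kubectl edit`, the service type can be patched directly (a sketch):

```shell
# Patch the Prometheus service type to LoadBalancer in one command.
kubectl patch svc prometheus-kube-prometheus-prometheus -n monitoring \
  -p '{"spec": {"type": "LoadBalancer"}}'
```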
6. Log in to the Prometheus dashboard to monitor the application
http://<ELB>:9090
7. Check the node_load15 metric to verify cluster monitoring
8. We can check similar graphs in the Grafana dashboard itself. For that, we should change the service type of Grafana to LoadBalancer
```sh
kubectl edit svc prometheus-grafana
```
9. To login to Grafana account, use the below username and password
```sh
username: admin
password: prom-operator
```
10. Here we should check for "Node Exporter/USE method/Node" and "Node Exporter/USE method/Cluster"
USE - Utilization, Saturation, Errors
11. Even we can check the behavior of each pod, node, and cluster
Create namespace monitoring
Add Prometheus helm chart repository and update helm chart repository
Install the prometheus
kubectl get all
The above helm chart creates all services as ClusterIP. To access Prometheus outside of the cluster, we should change the
service type to LoadBalancer
kubectl edit svc prometheus-kube-prometheus-prometheus -n monitoring
service changed
Log in to the Prometheus dashboard to monitor the application: http://<ELB>:9090
Alerts
===================================================
Grafana
💡 Grafana is a multi-platform, open-source analytics and interactive visualization web application. So Grafana
is an open-source tool which helps us analyze data and generate interactive visualizations.
💡 It provides charts, graphs, and alerts for the web when connected to supported data sources. Grafana
requires data sources, meaning services which can provide it the data. Whenever it connects to a data source,
it generates charts, graphs, and alerts based on the data it gets from that source.
💡 Grafana allows us to query, visualize, alert on, and understand our metrics no matter where they are stored.
Some supported data sources, in addition to Prometheus, are CloudWatch, Azure Monitor, PostgreSQL,
Elasticsearch, and many more. Grafana takes the data from these sources, queries it, visualizes it, and
generates alerts as well.
💡 We can create our own dashboards or use the existing ones provided by Grafana, and we can personalize the
dashboards as per our requirements.
So Grafana provides some existing dashboards; apart from those, we can create our own custom dashboards.
How to enable Grafana
We can check similar graphs in the Grafana dashboard itself. For that, we should change the service type of Grafana to LoadBalancer
```sh
kubectl edit svc prometheus-grafana
```
To login to Grafana account, use the below username and password
```sh
username: admin
password: prom-operator
```
Here we should check for "Node Exporter/USE method/Node" and "Node Exporter/USE method/Cluster"
USE - Utilization, Saturation, Errors
Even we can check the behavior of each pod, node, and cluster
kubectl edit svc prometheus-grafana
Change type to LoadBalancer
username: admin
password: prom-operator
Access on port :80
we should check for "Node Exporter/USE method/Node" and "Node Exporter/USE method/Cluster"
USE - Utilization, Saturation, Errors
Check for node_load15 executor to check cluster monitoring
To delete the Prometheus and Grafana setup: for Grafana, change the service type from LoadBalancer back to ClusterIP.
On Prometheus: changed the service type from LoadBalancer to ClusterIP
The type has changed to ClusterIP
The Load Balancers have been deleted from AWS
😇