
Université Mohammed Premier

École Nationale des Sciences Appliquées

Oujda

End of Year Project:

DevSecOps
Automated CI/CD Pipeline with Security Integration

Branch: Information Security & Cyber Security

Presented on: 07/07/2025

Prepared by:

DOUZI Youssef
AIT AHMAD El Mehdi
ERRAZI Ayoub

Mentored by: M. Y. Jabri

Jury Members:

M. Y. Jabri
M. R. Malek
M. Y. Reggad

Academic Year 2024 - 2025


Acknowledgment

We would like to express our gratitude to the National School of Applied
Sciences of Oujda for the unwavering support and the resources provided to us
throughout this project. We especially thank our supervisor, Mr. Jabri, for his
insightful guidance and continuous support.
We also extend our sincere thanks to all our professors and, in particular, to
Professor Jabri, for their commitment and availability. Their rigorous teaching and
the practical opportunities they offered us have been essential for our academic
and professional development.
Our thanks also go to our parents, whose unconditional support and encouragement
have been a constant source of motivation. Their invaluable help allowed
us to overcome the challenges encountered during our academic journey and to
always aim higher.
Finally, we would like to thank all those who contributed to this project through
their advice, help, or moral support, whether from our academic, family, or friend
circles.

Abstract

In today’s rapidly evolving digital landscape, integrating security into the
software development lifecycle (SDLC) is critical to mitigate growing cyber
threats. This project presents a DevSecOps pipeline that automates Continuous
Integration and Continuous Delivery (CI/CD) while embedding security checks
at every stage. Leveraging Jenkins as the orchestration tool, the pipeline
incorporates secret scanning (Talisman), static/dynamic analysis (SonarQube,
Semgrep, OWASP ZAP), dependency scanning (OWASP Dependency-Check, Snyk),
container security (Trivy), and Software Bill of Materials (SBOM) generation
(Syft).
Key contributions include:
— Automated Security Gates: Early detection of vulnerabilities via SAST,
DAST, and SCA.
— Compliance & Traceability: SBOM generation for dependency transparency
and audit readiness.
— Secure Deployment: Integration of ModSecurity (WAF) and the ELK
Stack (Elasticsearch, Logstash, Kibana) for real-time monitoring and
threat detection.
— Infrastructure as Code (IaC): Containerized deployment using Docker
and Docker Compose, ensuring consistency across environments.
The pipeline enforces "shift-left" security, reducing risks before production
while maintaining DevOps agility. By combining automation, compliance, and
monitoring, this project demonstrates a robust framework for secure, scalable,
and auditable software delivery.

Résumé

In a constantly evolving digital landscape, integrating security into the
software development lifecycle (SDLC) has become crucial to face growing cyber
threats. This project proposes the complete implementation of a DevSecOps
pipeline that automates continuous integration and deployment (CI/CD) while
embedding security mechanisms at every stage of the cycle.
The pipeline relies on Jenkins as the orchestration tool and integrates:
— secret detection with Talisman,
— static and dynamic analysis with SonarQube, Semgrep, and OWASP ZAP,
— dependency scanning with OWASP Dependency-Check and Snyk,
— Software Bill of Materials (SBOM) generation with Syft,
— and container security analysis with Trivy.
The key contributions of this project are:
— Automated security from the start: early detection of vulnerabilities
through the integration of SAST, DAST, and SCA tools into the CI/CD pipeline.
— Compliance and auditability: automatic generation of an SBOM to ensure
the transparency of software components and readiness for audits.
— Secure deployment: use of ModSecurity (WAF) and the ELK stack
(Elasticsearch, Logstash, Kibana) for real-time monitoring, centralized
logging, and alerting.
— Infrastructure as Code (IaC): containerized deployment with Docker and
Docker Compose, guaranteeing consistent environments and facilitating
reproducible deployments.
By adopting a "shift-left" approach to security, this pipeline considerably
reduces risks before production while preserving the agility and delivery speed
characteristic of DevOps. This project thus serves as a reference for modern
secure development practices, while offering an architecture adaptable to a
variety of industrial contexts.

Contents

List of Abbreviations 6

Introduction 7

1 Project Context and Background 9


1.1 The importance of Security in IT . . . . . . . . . . . . . . . . . . . 9
1.2 Project Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3 Software Development Life Cycle (SDLC) . . . . . . . . . . . . . . . 11
1.4 DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5 DevSecOps: Addressing Security and Business Continuity Chal-
lenges in DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6 DevSecOps: Culture, Best Practices, and Pipeline Stages . . . . . . 15
1.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2 Continuous Integration/Continuous Delivery 18


2.1 What is Continuous Integration (CI) . . . . . . . . . . . . . . . . . 18
2.2 What is Continuous Delivery (CD) . . . . . . . . . . . . . . . . . . 18
2.3 What is a CI/CD Server . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4 Jenkins as a CI/CD Server . . . . . . . . . . . . . . . . . . . . . . . 19
2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3 Continuous Integration 27
3.1 Secret Scanning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Software Composition Analysis (SCA) . . . . . . . . . . . . . . . . 29
3.3 Software Bill of Materials (SBOM) . . . . . . . . . . . . . . . . . . 34
3.4 Static Application Security Testing (SAST) . . . . . . . . . . . . . 36
3.5 Build . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.6 Push Stage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

4 Continuous Delivery 47
4.1 Container Security Scanning . . . . . . . . . . . . . . . . . . . . . . 47
4.2 DAST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.3 Deployment with Alerting and Monitoring . . . . . . . . . . . . . . 52
4.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

5 DevSecOps in Action 57
5.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.2 Project Repository Structure . . . . . . . . . . . . . . . . . . . . . . 57
5.3 Entire Jenkinsfile . . . . . . . . . . . . . . . . . . . . . . . . . . 58
5.4 Pipeline setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
5.5 Interface and Reports . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.6 Pipeline Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

Annexes 73
docker-compose-waf.yml . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Jenkinsfile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Secret Scan with Talisman . . . . . . . . . . . . . . . . . . . . . . . . . . 78
talisman-to-html.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Dockerfile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Vulnerable Source Code Snippets . . . . . . . . . . . . . . . . . . . . . . 81
List of Abbreviations

CI/CD: Continuous Integration/Continuous Delivery

DAST: Dynamic Application Security Testing

DevOps: Development and operations

DevSecOps: Development, Security, and Operations

ELK Stack: Elasticsearch, Logstash and Kibana Stack

IaC: Infrastructure as Code

OWASP: Open Web Application Security Project

SAST: Static Application Security Testing

SBOM: Software Bill of Materials

SCA: Software Composition Analysis

SCM: Source Code Management

SDLC: Software Development Life Cycle

WAF: Web Application Firewall

Introduction

The integration of security into the software development lifecycle has become a
critical priority as cyber threats grow in both frequency and sophistication. In this
context, DevSecOps (Development, Security, and Operations) has emerged
as a key methodology to ensure continuous and proactive security throughout the
software development process. By embedding security practices from the earliest
stages of design and development, DevSecOps ensures protection against vulner-
abilities at every phase of the development pipeline.

However, for this approach to be truly effective, robust compliance monitoring
and auditing mechanisms must be implemented to continuously assess adherence to
security best practices. A security compliance audit in a DevSecOps pipeline in-
volves analyzing, monitoring, and validating whether established processes, tools,
and practices meet legal, regulatory, and organizational security requirements.
This process ensures that security measures are not only properly in-
tegrated but also consistently followed, thereby reducing the risk of
security incidents in an agile development environment.

This report focuses on Security Compliance Auditing in a DevSecOps
pipeline, examining best practices, essential tools, and evaluation methodologies.
Key challenges in security integration will be identified, along with solutions to
ensure continuous compliance throughout the software development lifecycle.

Chapter 1

Project Context and Background

1.1 The importance of Security in IT


In today’s digitally driven world, information technology (IT) systems
are the backbone of nearly every business operation. As organisations increasingly
rely on cloud infrastructure, web applications, and distributed services, the attack
surface for cyber threats has grown dramatically. Security in IT is no longer a
luxury or a final step—it is a fundamental necessity from the earliest phases of
development through to deployment and maintenance.

1.1.1 Why Security Matters


— Protection of Sensitive Data IT systems handle personal data, financial
records, intellectual property, and other business-critical assets. A breach
can result in identity theft, financial loss, and irreversible damage to brand
trust.
— Compliance and Legal Obligations Regulatory frameworks such as GDPR,
HIPAA, PCI-DSS, and ISO/IEC 27001 enforce strict security require-
ments. Non-compliance can lead to heavy fines, lawsuits, and operational
restrictions.
— Business Continuity Cyberattacks—including ransomware, DDoS, and
supply-chain attacks—can disrupt operations, leading to downtime, lost rev-
enue, and reputational damage. Robust security measures ensure resilience
and faster recovery.
— Reputation and Customer Trust Customers expect their data to be safe.
A single security incident can destroy trust built over years, making security
a key part of a company’s brand and value proposition.
— Growing Sophistication of Threats Attackers today are well-funded,
leverage automated tools, and exploit both human and technical vulnerabil-
ities. Passive or reactive security is no longer effective; organisations must
adopt proactive, integrated, and automated security practices.

1.1.2 The Shift Toward “Secure by Design”


The modern approach to security embraces a “Secure by Design” philosophy
in which security is woven into software architecture and infrastructure planning:
1. Security requirements are defined during the design phase.
2. Development teams follow secure-coding practices.
3. Automated testing and scanning are incorporated into the CI/CD pipeline.
4. Infrastructure is hardened and continuously monitored.

1.2 Project Context


In an era of continuous delivery and cloud-native applications, software de-
velopment has become faster and more complex than ever before. However, this
increased speed often comes at the expense of security. Traditional security mod-
els, which involve manual reviews and late-stage vulnerability scanning, can no
longer keep up with rapid deployment cycles. This gap has given rise to the con-
cept of DevSecOps — integrating security directly into the DevOps pipeline to
ensure that applications are built, tested, and deployed with security as a core
component.
The purpose of this project is to design and implement an automated CI/CD
pipeline with integrated security checks, using Jenkins as the orchestration
tool. The pipeline incorporates multiple security stages — including secret detec-
tion, static and dynamic analysis, software composition analysis (SCA), container
image scanning, and Software Bill of Materials (SBOM) generation — to provide
a comprehensive DevSecOps workflow.

Business and Technical Motivations


— Accelerate secure software delivery: Deliver software features rapidly
while ensuring robust security at every stage of the SDLC.
— Prevent security regressions: Integrate security gates early in the pipeline
to catch misconfigurations, secrets, and vulnerabilities before production.
— Enable continuous compliance: Automatically generate and archive se-
curity reports and SBOMs for audit and regulatory purposes.
— Promote a security-first culture: Empower development teams to take
ownership of security by providing fast and actionable feedback within the
pipeline.

Challenges Addressed
Prior to this project, the development and deployment process had the follow-
ing issues:
— Security tools were used inconsistently or manually, increasing the risk of
human error.

SICS2 - ENSAO Année 2024 - 2025 DevSecOps



— Vulnerability detection occurred too late, often during staging or post-deployment.


— Image provenance was unclear, with containers sometimes built and deployed
outside the CI system.
— Security approvals lacked automation and audit trails.
This project solves these issues by implementing a centralized, automated, and
auditable Jenkins pipeline that applies security controls as code throughout
the software lifecycle.

Overview of the Pipeline


The Jenkins pipeline performs the following key security tasks:
— Secret Scanning: Using Talisman and TruffleHog to detect sensitive data
in source code and Git history.
— Dependency Scanning: Using OWASP Dependency-Check and Snyk to
identify vulnerabilities in third-party libraries.
— Container Scanning: Running Trivy to detect vulnerabilities in Docker
images and infrastructure-as-code files.
— SBOM Generation: Using Syft to produce CycloneDX-format SBOMs for
inventory and compliance tracking.
— Dynamic Testing: Deploying the application for a ZAP scan to simulate
attacks and detect runtime issues.
— Monitoring-Ready Deployment: Final deployment includes integration
with a WAF and readiness for alerting systems.
All stages in this project, from code checkout to secret scanning, software
composition analysis, SAST, container scanning, DAST, and deployment, are fully
automated and declared in a single Jenkinsfile. This approach follows the
"Pipeline as Code" paradigm, ensuring transparency, reproducibility, and
centralized control of security policies within the CI/CD lifecycle.
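As an illustration, the stage sequence described above can be sketched in a declarative Jenkinsfile along the following lines. This is a simplified sketch, not the project's exact configuration (the full Jenkinsfile is given in the annexes); the image name and tool invocations are illustrative placeholders.

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout')       { steps { checkout scm } }
        // Each security gate runs its tool; reports are archived at the end.
        stage('Secret Scan')    { steps { sh './talisman --scan' } }
        stage('SCA')            { steps { sh 'dependency-check.sh --project app --scan .' } }
        stage('SBOM')           { steps { sh 'syft . -o cyclonedx-json > sbom.json' } }
        stage('SAST')           { steps { sh 'semgrep scan --config auto' } }
        stage('Build')          { steps { sh 'docker build -t app:latest .' } }
        stage('Container Scan') { steps { sh 'trivy image app:latest' } }
        stage('DAST')           { steps { sh 'zap-baseline.py -t http://staging:8080' } }
    }
    post {
        // Keep every generated report for compliance and audit trails.
        always { archiveArtifacts artifacts: '*.json, *.html', allowEmptyArchive: true }
    }
}
```

Declaring the whole sequence in one file is what makes the security policy itself versionable and reviewable alongside the application code.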

1.3 Software Development Life Cycle (SDLC)


The Software Development Life Cycle (SDLC) is a cost-effective and
time-efficient process that development teams use to design and build high-quality
software. The development process goes through several stages, with developers
adding new features and fixing bugs in the software. The exact details of the SDLC
process may vary between teams, but the most common phases are outlined below:
— Planning: This is the first phase, where project goals are defined, costs and
resources are estimated, risks are identified, and a project plan is established.
— Requirements Analysis: In this phase, software requirements are gathered
and a detailed specification document is created to ensure a clear and shared
understanding of what needs to be built.




— Design: The design phase involves creating the architecture of the soft-
ware system. Requirements from the planning phase are translated into a
blueprint for development.
— Development: During this phase, developers write the code, implement
features, and follow best practices in programming.
— Testing: In this phase, the software is tested to ensure that everything
functions correctly, bugs are fixed, and the product is validated by users or
quality teams.
— Deployment: This phase involves releasing the software to the production
environment where it becomes accessible to end users. It includes all activi-
ties needed to make the software operational.
— Maintenance: This final phase involves ongoing support and updates to
ensure the software continues to perform well and remains relevant over
time.

The SDLC methodology provides a structured management framework
with specific deliverables at each stage of the software development process. It
ensures that all stakeholders agree on the objectives and requirements from the
beginning and have a clear roadmap for achieving them.

1.4 DevOps
1.4.1 What is DevOps
DevOps is a set of practices, principles, and tools that aim to unify software
development (Dev ) and IT operations (Ops). The goal of DevOps is to shorten
the software development life cycle and enable continuous delivery of high-quality
software through automation, collaboration, and monitoring.
Traditional software delivery often involved separate teams for development
and operations, resulting in slower releases, integration challenges, and inefficient
workflows. DevOps addresses these challenges by fostering a culture of collab-
oration and automating many aspects of the software build, test, release, and
deployment processes.




— Collaboration: Encourages shared ownership between developers and
operations teams.
— Automation: Automates repetitive tasks such as testing, integration, de-
ployment, and infrastructure provisioning.
— Continuous Integration / Continuous Delivery (CI/CD): Enables
frequent and reliable software releases with minimal manual effort.
— Monitoring and Feedback: Promotes real-time monitoring and perfor-
mance feedback to identify issues quickly.

While DevOps focuses on the delivery and operations side of software, the
Software Development Life Cycle (SDLC) is a broader framework that covers
the entire life cycle of a software product — from planning to maintenance.

1.4.2 Difference Between SDLC and DevOps

— Definition: SDLC is a process framework for managing the development
of software systems; DevOps is a set of practices and tools that automate
and integrate the processes between development and operations.
— Focus: SDLC covers the entire software life cycle (planning, analysis,
design, development, testing, deployment, and maintenance); DevOps
concentrates on the post-development stages (integration, deployment,
monitoring, and feedback).
— Methodologies: SDLC uses Waterfall, Agile, Spiral, V-Model, etc.;
DevOps often complements Agile and relies heavily on CI/CD, IaC, and
automation.
— Teams Involved: SDLC involves developers, business analysts, and
testers; DevOps brings developers and IT operations together.
— Automation: SDLC may be partially automated (especially in testing);
DevOps places strong emphasis on full automation of builds, deployments,
and monitoring.
— Goal: SDLC aims to deliver a functional software product that meets
requirements; DevOps aims to deliver software updates rapidly, reliably,
and securely.
— Feedback Cycle: In SDLC, feedback often occurs at predefined stages;
in DevOps, feedback is continuous through monitoring and user metrics.




In summary, SDLC provides the overall structure for developing software, while
DevOps enhances and accelerates the latter stages of this cycle through au-
tomation and collaboration. DevOps does not replace SDLC but complements
it—especially in fast-paced, modern development environments.

1.5 DevSecOps: Addressing Security and Business Continuity Challenges in DevOps

While DevOps revolutionized software delivery by improving speed, collabora-
tion, and automation, it introduced new challenges—especially around security,
compliance, and business continuity. Security was often treated as a separate
concern, leading to a gap between development velocity and risk management.
This is where DevSecOps comes in.
DevSecOps extends the principles of DevOps by shifting security to the
left—integrating it throughout the software development lifecycle (SDLC), not
just at the end.

Challenges in Traditional DevOps


— Security as a Late-stage Activity: In many DevOps workflows, security
testing is performed after development, which delays detection of vulnera-
bilities and increases remediation costs.
— Manual and Inconsistent Security Practices: Security scans (e.g., for
secrets, vulnerabilities, or misconfigurations) are often performed manually
or inconsistently, leading to gaps in coverage.
— Lack of Compliance Visibility: DevOps teams may lack the tools to
generate compliance evidence such as audit logs, SBOMs (Software Bill of
Materials), or documented security gates.
— Risk of Supply Chain Attacks: Increasing reliance on open-source and
third-party components raises the risk of injecting vulnerable dependencies
into production environments.
— Delayed Incident Response: Without integrated monitoring and alerting,
detecting and responding to attacks or failures is often reactive and slow.

How DevSecOps Provides a Solution


DevSecOps incorporates security as a shared responsibility across develop-
ment, operations, and security teams. Key benefits include:
— Automated Security Gates: Integrates tools like static analysis (SAST),
software composition analysis (SCA), secret detection, and infrastructure
scans directly into the CI/CD pipeline.
— Early Vulnerability Detection: Detects misconfigurations, known vul-
nerabilities, or leaked credentials as code is written, reducing time and cost
to fix.




— Continuous Compliance: Automatically generates and archives reports
(e.g., Dependency-Check, Trivy, ZAP) and SBOMs to demonstrate
compliance with standards such as ISO 27001, GDPR, or PCI-DSS.
— Improved Business Continuity: By catching critical risks early and au-
tomating patching workflows, DevSecOps reduces downtime and increases
system resilience.
— Culture of Shared Responsibility: Encourages developers to take own-
ership of security without slowing innovation, enabling secure coding and
faster feedback loops.

1.6 DevSecOps: Culture, Best Practices, and Pipeline Stages

DevSecOps (Development, Security, and Operations) is a modern approach
to software delivery that embeds security into every phase of the DevOps pipeline.
Rather than treating security as a separate, late-stage task, DevSecOps integrates
automated security controls from development through to production. It com-
bines the agility of DevOps with the proactive discipline of security engineering,
promoting a culture of shared responsibility.

1.6.1 The Culture of DevSecOps


DevSecOps is not just about tools — it is a cultural transformation. It
requires development, operations, and security teams to collaborate and take col-
lective responsibility for building secure software.
Key cultural pillars include:
— Security is Everyone’s Responsibility: From developers writing code
to ops managing infrastructure, all team members participate in enforcing
security.
— Shift Left Philosophy: Security checks happen as early as possible in the
software lifecycle — during coding, not just post-deployment.
— Automation over Manual Processes: Security gates must be automated
to match the speed of DevOps.
— Continuous Learning: Teams continually improve based on feedback from
incidents, audits, and vulnerability reports.




1.6.2 Best Practices of DevSecOps


Implementing DevSecOps successfully requires the adoption of key best practices:

— Integrate Security Tools into CI/CD: Use tools like Talisman, Trivy,
Snyk, ZAP, and Dependency-Check directly in your Jenkins pipeline.
— Automate Early and Often: Automate static code analysis, secret detec-
tion, SCA, and container scanning on every commit or pull request.
— Maintain an SBOM (Software Bill of Materials): Generate SBOMs
using tools like Syft to keep track of dependencies and ensure supply chain
security.
— Enforce Least Privilege and IAM Policies: Limit access to resources
and ensure that credentials and tokens are securely managed.
— Fail the Build on Critical Vulnerabilities: Ensure the pipeline blocks
deployments with critical risks by enforcing thresholds in tools like Trivy.
— Archive and Monitor Reports: Keep historical vulnerability reports and
enable alerts for compliance and traceability.
— Educate Developers on Secure Coding: Train developers on common
risks (e.g., OWASP Top 10) and how to avoid them.
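The "fail the build on critical vulnerabilities" practice above can be enforced with Trivy's exit-code and severity flags. A minimal sketch of such a gate as a Jenkins stage follows; the image name and severity threshold are illustrative, not the project's exact settings.

```groovy
stage('Container Security Gate') {
    steps {
        // --exit-code 1 makes Trivy return a non-zero status whenever
        // findings at the listed severities exist, which fails the stage
        // and therefore blocks the deployment.
        sh 'trivy image --exit-code 1 --severity CRITICAL,HIGH myapp:latest'
    }
}
```

The same pattern applies to other scanners: any tool that signals findings through its exit status can act as an automated gate.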

1.6.3 DevSecOps Stages


— CI Security Stages:

— Secret Scan
— Dependency-Check
— SCA: Software Composition Analysis
— Check-Git-Secrets
— SBOM (Software Bill of Materials) Generation
— Build the Application

— CD Security Stages:

— IaC (Infrastructure as Code) Scanning and Image Scanning


— Push to DockerHub
— Deployment for DAST Testing (Docker Compose)
— DAST
— Approval Gate
— Final Deployment with WAF, monitoring, alerting and logging
system




1.7 Conclusion
This first chapter has laid the foundation for understanding the motivation,
context, and methodology behind our project. In an era where speed and agility
are critical, ensuring robust security throughout the software lifecycle has become
both a technical and business imperative.

We began by highlighting the growing importance of security in IT, especially
in the face of complex threats, regulatory requirements, and the increasing reliance
on cloud-native architectures. We then positioned our project within this context
— as a DevSecOps-driven solution that fully integrates security controls into an
automated CI/CD pipeline.

Through a comparison of SDLC, DevOps, and DevSecOps, we illustrated the
evolution from traditional development approaches to modern, security-aware
delivery models. DevSecOps was presented not just as a set of tools or techniques,
but as a cultural transformation that embeds security into the mindset and
workflow of every team involved. We addressed the limitations of traditional
DevOps in terms of security visibility, late-stage testing, and manual risk
management — and demonstrated how DevSecOps overcomes these limitations through
automation, early detection, and continuous compliance.

Finally, we introduced the structure of our pipeline and the stages involved,
both in Continuous Integration and Continuous Delivery. Each stage has been
implemented to enforce security policies, detect vulnerabilities early, and ensure
the reliability of the deployment process.

In the following chapters, we will explore each stage of the pipeline
in detail, discussing the objectives, implementation strategies, and the
specific technologies applied to ensure secure and automated software
delivery.



Chapter 2

Continuous Integration/Continuous
Delivery

2.1 What is Continuous Integration (CI)


Continuous Integration (CI) is a development practice that encourages de-
velopers to integrate code into a shared repository multiple times a day. Each
integration is automatically verified by tests and build processes, allowing teams
to detect errors quickly and reduce integration problems.
The main objective of CI is to identify and resolve bugs as early as possible in
the development lifecycle. By automating the process of code merging, building,
and testing, CI improves software quality and shortens the feedback loop between
developers.
CI promotes a culture of shared responsibility, where all changes are contin-
uously validated to ensure that the application remains in a healthy, functional
state. This leads to faster development cycles and better collaboration among
team members.

2.2 What is Continuous Delivery (CD)


Continuous Delivery (CD) is an extension of Continuous Integration that auto-
mates the release process so that software can be deployed to production — or any
target environment — at any time. CD ensures that every change is automatically
tested, packaged, and made ready for release with minimal manual intervention.
The goal of CD is to make deployments predictable, repeatable, and low-risk.
It achieves this by enforcing quality gates such as automated tests, security scans,
and environment validation at each stage of the delivery pipeline.
By implementing CD, teams can deliver features and fixes to users more fre-
quently and reliably. It also reduces the cost and complexity of releases, making
it easier to respond quickly to changing requirements or security concerns.

2.3 What is a CI/CD Server


A CI/CD server is a central component in modern software delivery pipelines.
It orchestrates the process of building, testing, and deploying code automatically
whenever changes are made. The server monitors the version control system and
triggers workflows defined by the development team.
The CI/CD server acts as the automation engine that enforces consistency
across environments and ensures that every code change passes through a series
of defined steps before reaching production. These steps may include compila-
tion, unit testing, integration testing, packaging, security checks, and deployment
routines.
A well-configured CI/CD server helps teams reduce manual work, eliminate hu-
man errors, and maintain a rapid, stable delivery pace. It also improves traceability
by keeping logs, reports, and artefacts for every execution, making debugging and
compliance audits more manageable.

2.4 Jenkins as a CI/CD Server


Jenkins is a widely adopted open-source automation server that enables
developers around the world to reliably build, test, and deploy their software.
In this project, Jenkins was configured to automate the entire pipeline, from
source code integration and testing to deployment and monitoring, ensuring that
security, performance, and reliability are enforced at each stage.

Jenkins offers a rich set of features for implementing Continuous Integration
(CI) and Continuous Delivery (CD) pipelines. Its flexibility, plugin
architecture, and community support make it a central tool in many DevOps and
DevSecOps environments.

1. Pipeline as Code
Jenkins supports defining entire build and deployment workflows using code —
typically via a file called Jenkinsfile. This "Pipeline as Code" approach brings




version control, traceability, and consistency to automation logic.

2. Extensible Plugin Ecosystem

One of Jenkins’ most powerful features is its plugin-based architecture. With


thousands of plugins available, Jenkins can be extended to integrate with source
control systems, testing frameworks, deployment platforms, security tools, and
notification services.

3. SCM Integration

Jenkins integrates seamlessly with Source Code Management (SCM) systems


such as Git. It can monitor repositories for changes, trigger builds automatically,
and track commit histories.

4. Parallel and Distributed Execution

Jenkins supports running jobs in parallel and distributing workloads across


multiple agents (also called nodes). This improves performance and allows for
scaling complex pipelines efficiently.

5. Build Triggers

Jenkins provides various triggering options — including SCM polling, webhook-


based triggers, scheduled jobs (cron-like syntax), and manual triggers — enabling
highly flexible automation.
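As an illustration of how such triggers are declared, the following declarative Jenkinsfile fragment is a sketch only; the schedule values are illustrative, not the ones used in this project:

```groovy
pipeline {
    agent any
    triggers {
        // Poll the SCM for changes every 15 minutes; 'H' spreads the load
        // across the hour instead of polling exactly on the quarter hour.
        pollSCM('H/15 * * * *')
        // Alternatively, run on a fixed schedule (cron-like syntax), e.g. nightly:
        // cron('H 2 * * *')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Build triggered automatically'
            }
        }
    }
}
```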

6. Credential Management

To securely handle sensitive data such as passwords, API tokens, and SSH
keys, Jenkins provides a built-in credentials manager. These credentials can be
used within pipelines without exposing them in logs or code.
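As a sketch of how a stored credential might be consumed in a pipeline (the credential ID service-api-token and the URL are hypothetical, and the Credentials Binding plugin is assumed to be installed):

```groovy
stage('Call external service') {
    steps {
        // withCredentials injects the secret as an environment variable
        // and masks its value in the build log.
        withCredentials([string(credentialsId: 'service-api-token', variable: 'API_TOKEN')]) {
            sh 'curl -fsS -H "Authorization: Bearer $API_TOKEN" https://api.example.com/status'
        }
    }
}
```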

7. Real-Time Feedback and Reporting

Jenkins provides detailed logs, real-time status dashboards, and historical data
on pipeline executions. This visibility helps teams quickly detect issues and vali-
date the success of integration and delivery processes.

8. Integration with Testing and Security Tools

Jenkins can integrate with testing frameworks for unit, integration, and perfor-
mance testing, as well as tools for static analysis, dynamic scanning, and software
composition analysis. This makes it an ideal platform for implementing DevSec-
Ops practices.




9. Open Source and Community Driven


Being open-source, Jenkins benefits from a strong global community that con-
tinuously contributes plugins, improvements, and support resources. This community-
driven model ensures rapid adaptation to new technologies and methodologies.

10. Web-Based User Interface


Jenkins features a web-based UI for configuring jobs, viewing build histories,
managing plugins, and monitoring execution pipelines. This simplifies adoption
and allows collaboration across teams with varying technical skills.

2.4.1 What is Source Code Management (SCM)?


Source Code Management (SCM) refers to systems and practices that
enable teams to manage changes in source code efficiently and collaboratively.
An SCM system provides a central repository where code is stored, versioned,
and synchronized, allowing multiple developers to work on the same codebase
simultaneously without conflict.

2.4.1.1 Examples of Common SCM Tools


Some widely used SCM systems include:
— Git – a distributed version control system commonly used with platforms
like GitHub, GitLab, and Bitbucket.
— Subversion (SVN) – a centralized version control system.
— Mercurial – a distributed SCM system similar to Git.
— Perforce – used in large-scale enterprise and gaming projects.
These tools help manage code branches, track history, merge changes, and
revert to previous states if necessary. In Continuous Integration (CI) pipelines,
SCM acts as the trigger: when new code is pushed to a repository, Jenkins can
detect the change and automatically start the pipeline.
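The versioned workflow that a CI server relies on can be illustrated with a minimal Git session (assuming Git is installed; the repository and file names are arbitrary):

```shell
# Create a throwaway repository and record two versions of a file
mkdir scm-demo && cd scm-demo
git init -q
git config user.email "dev@example.com"
git config user.name "dev"

echo "print('v1')" > app.py
git add app.py && git commit -qm "initial commit"

echo "print('v2')" > app.py
git add app.py && git commit -qm "update app"

# The full history is preserved and any past state can be recovered
git log --oneline | wc -l
```

The final command counts the two recorded commits; this commit history is exactly what a CI server such as Jenkins inspects when deciding whether a new build should be triggered.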

2.4.2 What is a Pipeline?


In software engineering, a pipeline refers to a structured set of automated
processes that are executed sequentially or in parallel to build, test, and deploy
software. Pipelines are an essential component of modern DevOps practices, en-
abling developers to deliver updates more reliably and frequently.
A pipeline consists of distinct stages or steps, each designed to perform a
specific task. These may include code compilation, running tests, scanning for vul-
nerabilities, packaging artifacts, and deploying to target environments. Pipelines
ensure repeatability, reduce manual errors, and provide visibility over the entire
delivery process.




2.4.3 What is a CI/CD Pipeline?


A CI/CD pipeline (Continuous Integration / Continuous Delivery or Deploy-
ment) is a specific type of pipeline that automates the software delivery lifecycle
from code integration to production deployment.
— Continuous Integration (CI) involves automatically building and testing
code every time a developer pushes changes to a shared repository. This
ensures that errors are detected early and integration issues are minimized.
— Continuous Delivery (CD) extends CI by automating the release process
up to the staging environment, allowing teams to deploy software at any time
with confidence.
— Continuous Deployment (also CD) goes one step further by automati-
cally deploying every validated change directly to production, without man-
ual approval.
A well-implemented CI/CD pipeline reduces the time between writing code
and delivering it to users, while maintaining high quality, security, and stability.
It is a foundational practice in DevOps and DevSecOps methodologies.
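The distinction between continuous delivery and continuous deployment can be sketched in Jenkinsfile terms: with continuous delivery, a manual approval gate (the input step) guards production, and removing that gate yields continuous deployment. The stage contents below are placeholders, not the project's actual deployment logic:

```groovy
stage('Deploy to Staging') {
    steps { echo 'Deploying to staging...' }
}
stage('Approval') {
    steps {
        // Continuous Delivery keeps this manual gate;
        // Continuous Deployment simply removes it.
        input message: 'Promote this build to production?'
    }
}
stage('Deploy to Production') {
    steps { echo 'Deploying to production...' }
}
```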

2.4.4 What is a Jenkinsfile?


A Jenkinsfile is a configuration file written in Groovy syntax that defines the
entire pipeline — from build to test, scan, and deployment. It enables teams to
implement Pipeline as Code, allowing the pipeline definition to be stored directly
inside the project repository.

2.4.4.1 Forms of Jenkinsfile Syntax


There are two primary ways to write a Jenkinsfile:
— Declarative Syntax: The most common and recommended format. It uses
a structured and readable format with predefined keywords like pipeline,
agent, stages, and steps. This format is easier to write and maintain,
especially for teams with less scripting experience.
— Scripted Syntax: A more flexible but complex form. It uses Groovy pro-
gramming constructs such as conditionals, loops, and functions. This format
gives greater control over pipeline logic but requires programming knowledge.
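Since Listing 2.1 below illustrates the declarative form, a minimal scripted equivalent would look like the following sketch (the stage logic and branch check are illustrative):

```groovy
// Scripted syntax: plain Groovy built around node/stage blocks,
// allowing arbitrary programming constructs.
node {
    stage('Build') {
        echo 'Building...'
    }
    stage('Test') {
        // Conditional logic is straightforward in scripted pipelines
        if (env.BRANCH_NAME == 'master') {
            echo 'Running the full test suite'
        } else {
            echo 'Running quick tests'
        }
    }
}
```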

2.4.4.2 Advantages of Using a Jenkinsfile


— Version Control: Stored in the same SCM as the application code, enabling
traceability and rollback.
— Transparency: Everyone on the team can view and audit the build and
deployment logic.
— Reusability: Shared libraries or templates can be used to standardize work-
flows across projects.




— Automation: Jenkins automatically detects and executes the pipeline from


the Jenkinsfile with every change.
Typically, the Jenkinsfile is placed in the root directory of the repository. It
defines each phase of the CI/CD pipeline as a set of stages and steps that Jenkins
executes automatically — making the delivery process repeatable, testable, and
secure.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'echo "Building the project..."'
                sh 'python setup.py build' // Example build command
            }
        }
        stage('Test') {
            steps {
                sh 'echo "Running tests..."'
                sh 'python -m unittest discover' // Test command
            }
        }
        stage('Deploy') {
            steps {
                sh 'echo "Deploying the application..."'
                sh 'scp -r build/ [user]@[server]:[path]' // Deployment
            }
        }
    }
}

Listing 2.1 – Example of a declarative Jenkinsfile written in Groovy for a basic


CI/CD pipeline.

2.4.5 Set Up Jenkins


2.4.5.1 Installing and Configuring Jenkins
To automate our CI/CD pipeline, Jenkins was installed on a virtual server hosted
on a DigitalOcean droplet. This approach provided full administrative access,
control over the environment, and persistent access to a web-based Jenkins in-
stance.
We used a GitHub repository as our Source Code Management (SCM) source.
Jenkins was configured to pull code from this repository and execute a pipeline
defined in a Groovy-based file named Jenkinsfile, stored at the root of the
repository.
1. Provision a New Droplet: Log in to the DigitalOcean dashboard and
create a new droplet using Ubuntu (e.g., Ubuntu 22.04 LTS). Choose a plan
and set up SSH access.
2. Connect via SSH: Use the terminal to access the droplet (the droplet IP
is 104.248.252.219):
$ ssh -i ssh_key root@104.248.252.219

3. Update the System:




$ sudo apt update && sudo apt upgrade -y

4. Install Java (Required by Jenkins):


$ sudo apt install openjdk-17-jdk -y

5. Install the Jenkins WAR file:

$ wget https://updates.jenkins.io/download/war/2.516/jenkins.war

6. Start Jenkins:
$ java -jar jenkins.war --httpPort=8088

7. Access Jenkins Web Interface: Open a browser and navigate to


http://104.248.252.219:8088
8. Unlock Jenkins: Use the initial password found with:
$ cat ~/.jenkins/secrets/initialAdminPassword
Note that when Jenkins is launched directly from the WAR file, its home
directory defaults to ~/.jenkins rather than /var/lib/jenkins.

9. Install Plugins:
After unlocking Jenkins, the Customize Jenkins page appears. At this
stage, you can install any number of useful plugins as part of the initial
setup. Jenkins offers two options for plugin installation:
— Install suggested plugins – Installs a recommended set of plugins
based on the most common use cases.
— Select plugins to install – Allows you to manually choose which
plugins to install. When you first access this page, the suggested plugins
are selected by default, but you can modify the selection according to
your needs.
Instead of selecting Install Suggested Plugins, we chose to install only the
essential plugins manually. At this stage, we installed the Blue Ocean
plugin, which provides a modern and user-friendly interface for managing
and visualizing Jenkins pipelines. Other plugins were deferred and installed
incrementally as the project progressed and as specific stages required them.

10. Creating the First Administrator User After installing Jenkins and
completing the plugin setup, the interface prompts the creation of the first
administrator user.
– When the Create First Admin User page appears, fill in the required
fields with the username, password, full name, and email address. Then,
click Save and Finish.
– When the Jenkins is Ready page appears, click Start using Jenkins
to access the dashboard.




– Sometimes, the page may display the message Jenkins is almost ready!. In
that case, click Restart to continue.
– The page may automatically refresh after a minute. If it does not, manually
refresh the page in your web browser.
– Log in to Jenkins using the credentials of the user you just created. You
are now ready to start using Jenkins.

2.4.6 Creating and Configuring a Pipeline Job in Jenkins


Once Jenkins was successfully installed and accessible through the web interface,
the next step was to create a new pipeline job that would automate our CI/CD
workflow. This subsection outlines the detailed steps followed during the pipeline
setup, using GitHub as the SCM source and a Groovy-based Jenkinsfile for
configuration.

Step 1: Creating a New Pipeline Item


From the Jenkins dashboard, we clicked on New Item, entered a descriptive name
for the pipeline (e.g., report), and selected the Pipeline project type. This allows
Jenkins to interpret and execute build logic defined using pipeline syntax.

Step 2: General Configuration


In the project configuration screen, under the General tab, we enabled the op-
tion GitHub project, indicating that the source code of our web application and
pipeline definition reside in a GitHub repository, specify the GitHub repository
URL with .git at the end. Other optional settings such as concurrent build han-
dling or build discarding were left at default values.




Step 3: Configuring Build Triggers


Under the Triggers section, we configured Jenkins to react to changes in the
GitHub repository. Specifically:
— GitHub hook trigger for GITScm polling was enabled to allow
webhook-based build triggers.
— Poll SCM was also checked, with the schedule * * * * * indicating that
Jenkins checks for changes every minute.

Step 4: Defining the Pipeline


In the Pipeline section, we selected Pipeline script from SCM as the definition
method. This allows Jenkins to pull pipeline instructions directly from a source
control system.
— SCM: Git was selected as the source control system.
— Repository URL: The full HTTPS link to the GitHub repository was
provided (e.g., https://github.com/R4z1o/webapp.git).
— Branch to Build: */master was specified to target the master branch.
— Script Path: The pipeline script filename was defined as Jenkinsfile,
matching the Groovy-based file in the repository root.
— Lightweight Checkout: This was enabled to reduce resource consumption.

Step 5: Save and View the Pipeline


Once all fields were correctly filled in, we clicked Save to confirm and apply the
configuration.
After saving, Jenkins redirected us to the project dashboard. From there, we could
immediately see our newly created pipeline listed, along with options to trigger a
build manually, monitor recent activity, or inspect build history and logs.
The pipeline is now fully configured and ready to execute automatically whenever
changes are pushed to the specified GitHub branch, or manually triggered from
the Jenkins interface.

2.5 Conclusion
This chapter introduced the foundations of Continuous Integration and Continu-
ous Delivery (CI/CD), emphasizing the use of Jenkins as a flexible, open-source
automation server integrated with GitHub. We explained how Jenkins pipelines,
defined via Jenkinsfile, automate software delivery through version-controlled
workflows.
We also demonstrated the deployment and configuration of Jenkins on a cloud
server, enabling automated builds triggered by source code changes. By the end
of this chapter, a fully operational CI/CD pipeline was established, supporting
secure and traceable execution of our DevSecOps workflow.



Chapter 3

Continuous Integration

3.1 Secret Scanning


3.1.1 What is Secret Scanning?
Secret scanning is the process of automatically detecting sensitive data—such
as API keys, passwords, tokens, or private certificates—accidentally committed
into source code repositories. Exposing such secrets can lead to serious security
breaches, including unauthorized access to services, data leaks, and exploitation
by malicious actors.
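The core idea can be illustrated with a simple pattern match. The toy sketch below targets AWS-style access key IDs with a single regular expression; real scanners ship with far richer rule sets combined with entropy analysis:

```shell
# Create a sample source file containing a fake AWS access key ID
cat > sample.py <<'EOF'
aws_key = "AKIAIOSFODNN7EXAMPLE"
greeting = "hello"
EOF

# Flag any line matching the AWS access key ID pattern
grep -nE 'AKIA[0-9A-Z]{16}' sample.py
```

Tools such as Talisman and Gitleaks apply many such patterns together with entropy heuristics and file-type awareness to reduce both false negatives and false positives.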

3.1.2 Why Secret Scanning is Important


— Prevent Credential Leaks: Developers may unknowingly commit API
keys or passwords to version control systems. Secret scanning helps prevent
these leaks before they reach production.
— Compliance and Auditing: Many regulatory standards (e.g., GDPR,
HIPAA, ISO 27001) require sensitive data protection. Secret scanning pro-
vides traceability and compliance support.
— Shift-Left Security: Detecting secrets early in the CI/CD pipeline mini-
mizes remediation effort and reduces exposure.
— Prevent Supply Chain Attacks: Leaked secrets can be exploited by at-
tackers to inject malicious code or gain unauthorized access to external ser-
vices.

3.1.3 Open Source Tools for Secret Scanning


Several open-source tools are available for secret scanning:
— Talisman: Developed by ThoughtWorks, it scans for secrets and sensitive
patterns in Git commits.
— TruffleHog: Searches through Git histories to find high-entropy strings and
secrets.
— Gitleaks: Fast and customizable scanning tool for Git repositories.


— GitGuardian CLI: Offers powerful pattern detection and entropy-based


scanning.
In our pipeline, we use Talisman for automated secret detection at the CI stage.

3.1.4 Introducing Talisman


Talisman is a lightweight, open-source tool by ThoughtWorks that prevents de-
velopers from committing secrets and sensitive information into source code repos-
itories.

Features of Talisman
— Scans files and Git history for secrets and credential patterns.
— Detects entropy-based anomalies that may indicate hardcoded keys.
— Customizable denylist and policy enforcement.
— Can be integrated into pre-commit hooks or CI pipelines.
— Supports generating reports in JSON for further processing or compliance.

3.1.5 Pipeline Configuration for Secret Scanning with Talisman
To integrate Talisman into our CI pipeline, we added a dedicated stage in the
Jenkinsfile. The purpose of this stage is to automatically scan the source code for
secrets during the Continuous Integration process and to generate readable reports
that can be stored as build artifacts.
In addition to running Talisman, we created a custom Bash script named
talisman-to-html.sh. This script converts the default JSON output generated
by Talisman into a human-readable HTML format, which makes it easier for de-
velopers and auditors to interpret scan results directly from the Jenkins UI.

The relevant Jenkins pipeline stage used to perform secret scanning and
reporting is given in Annex A.3 (Secret Scan with Talisman), in the Annexes
chapter at the end of the document.




Details of the Custom Bash Script


— Input: The script takes the path to the original JSON report generated by
Talisman.
— Transformation: It parses the JSON data and renders it into a structured
HTML document using embedded HTML/CSS formatting.
— Output: The final HTML file (e.g., talisman-report.html) is placed in
the artifact directory, making it accessible via the Jenkins build dashboard.

This approach improves the visibility of security scans and encourages fast feedback
and action from the development team.
For converting JSON Talisman scan output to HTML, see the A.4 Bash script
JSON to HTML Annex in the Annexes chapter at the end of the document.
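A hypothetical, much-simplified sketch of what such a conversion script does (the real script is reproduced in Annex A.4; the JSON structure below is illustrative only):

```shell
# Fake a minimal Talisman-style JSON report
cat > talisman-report.json <<'EOF'
{"results":[{"filename":"config.py","message":"Potential secret"}]}
EOF

# Extract the flagged filenames and wrap them in a small HTML page
{
  echo "<html><body><h1>Talisman Report</h1><ul>"
  grep -o '"filename":"[^"]*"' talisman-report.json \
    | sed 's/.*:"\(.*\)"/<li>\1<\/li>/'
  echo "</ul></body></html>"
} > talisman-report.html

grep '<li>' talisman-report.html
```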

By archiving both the JSON and HTML reports, this pipeline stage enables ef-
fective auditing and continuous visibility into secret scanning activities. These
reports are made available in the artifacts section of each Jenkins pipeline build
and can be accessed easily from the Jenkins dashboard for further review.

Below is the output of the Talisman scan on the web application we are working on.

3.2 Software Composition Analysis (SCA)


3.2.1 What is SCA
Software Composition Analysis (SCA) is an automated process that iden-
tifies the open-source components within a codebase. It helps evaluate security
vulnerabilities, license compliance, and code quality.
Modern applications heavily rely on open-source libraries. However, these compo-
nents can introduce hidden risks—such as critical vulnerabilities like the Log4Shell




exploit—if not properly tracked and monitored. SCA tools scan these dependen-
cies and cross-reference them with public vulnerability databases to detect issues
early in the development lifecycle.
Popular tools like OWASP Dependency-Check and Snyk are commonly used
to perform SCA in CI/CD pipelines, enabling continuous security checks on open-
source packages.
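Conceptually, an SCA scan boils down to cross-referencing a project's dependency list against a vulnerability database. The toy sketch below uses fabricated file contents and vulnerability IDs; real tools query live feeds such as the NVD:

```shell
# A toy dependency manifest
cat > requirements.txt <<'EOF'
flask==0.12
requests==2.31.0
EOF

# A toy "vulnerability database": affected package, then a made-up ID
cat > vuln-db.txt <<'EOF'
flask==0.12 CVE-0000-0001
django==1.8 CVE-0000-0002
EOF

# Report every project dependency that appears in the vulnerability list
cut -d' ' -f1 vuln-db.txt > vuln-ids.txt
grep -F -f vuln-ids.txt requirements.txt
```

Here only flask==0.12 is flagged; requests is not in the database and django is not a project dependency.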

3.2.2 Why It’s Important


As the use of open-source software has increased, so has the complexity of manag-
ing its risks. Manually tracking license obligations and security vulnerabilities in
open-source packages is time-consuming and often ineffective. This is where SCA
becomes essential.
SCA tools automate the detection of:
— Security vulnerabilities in open-source components
— License risks and compliance violations
— Code quality concerns
In a DevOps or DevSecOps environment, SCA plays a vital role in shifting
security “left”—bringing it earlier in the software development lifecycle (SDLC).
Early and continuous scanning helps developers and security teams address risks
proactively without slowing down development.
Modern SCA tools are designed to be developer-friendly, integrating seamlessly
with development workflows while giving security teams the visibility they need
to guide and enforce best practices.
Ultimately, SCA is a cornerstone of any robust application security strategy,
ensuring the health and safety of the software supply chain.

3.2.3 OWASP Dependency-Check


Dependency checks involve evaluating the external components, like libraries and
frameworks, used in software development to identify security vulnerabilities, en-
sure compliance with policies, and mitigate risks.
OWASP Dependency-Check is a Software Composition Analysis (SCA) tool
that attempts to detect publicly disclosed vulnerabilities contained within a
project’s dependencies. It does this by determining if there is a Common Platform
Enumeration (CPE) identifier for a given dependency. If found, it will generate a
report linking to the associated CVE entries.

3.2.4 Snyk
Snyk Open Source helps organizations like Salesforce, Google, and Facebook
enhance application security by enabling development teams to automatically find,
prioritize and fix security vulnerabilities and license issues in their open source
dependencies and containers early in and across the SDLC.




Unlike other security solutions in the market, Snyk Open Source is a developer-
friendly tool that integrates seamlessly into development workflows, providing au-
tomated remediation and actionable security insight to help organizations identify
and mitigate risk efficiently.
Snyk analyzes the entire software composition, including all third-party libraries,
frameworks, and open-source components used in a project. It provides a holistic
view of the software’s dependencies. With these abilities, running an SCA can
replace or complement some earlier manual steps in the security pipeline.

3.2.5 Configuring OWASP Dependency-Check in the Pipeline
To integrate OWASP Dependency-Check into our CI pipeline, we follow a series
of steps to install the necessary plugin, configure the tool, and declare a scanning
stage in the Jenkinsfile. This enables automated detection of vulnerabilities in
project dependencies.

Step 1: Install the Dependency-Check Plugin


1. Navigate to Manage Jenkins → Manage Plugins.
2. Go to the Available tab and search for Dependency-Check Plugin.
3. Select and install the plugin, then restart Jenkins if required.

Step 2: Configure the Dependency-Check Tool


1. Go to Manage Jenkins → Global Tool Configuration.
2. Locate the section titled Dependency-Check installations.
3. Click on Add Dependency-Check.
4. Check Install automatically and select Install from github.com.
5. Choose the latest stable version and assign a name (e.g., dep-check-auto).
6. Save the configuration.

Step 3: Add a Jenkinsfile Stage for Scanning


In the Jenkinsfile, add a dedicated stage to run Dependency-Check and archive
its reports:




stage('OWASP-dependency-check') {
    steps {
        echo 'dependency check using OWASP'
        dependencyCheck additionalArguments: '', odcInstallation: 'dependency-check'
        dependencyCheckPublisher pattern: ''
        archiveArtifacts allowEmptyArchive: true, artifacts: 'dependency-check-report.xml',
            fingerprint: true, followSymlinks: false, onlyIfSuccessful: true
        sh 'rm -rf dependency-check-report.xml*'
    }
}

Listing 3.1 – OWASP Dependency-Check stage in the Jenkinsfile.

Step 4: View Results in Jenkins


After a pipeline run:
— A new Dependency-Check section will appear on the pipeline’s dashboard.
— Reports will also be available under the Artifacts link of the build.
— The XML or HTML report can be downloaded and reviewed for known
CVEs.

This configuration integrates OWASP Dependency-Check directly into the Jenkins
pipeline, enabling continuous analysis of third-party libraries for publicly
known vulnerabilities.

3.2.6 Configuring Snyk for SCA in the pipeline


To integrate Snyk into our CI pipeline for continuous open-source vulnerability
analysis, follow these steps:

Step 1: Install the Snyk Plugin


1. Navigate to Manage Jenkins → Plugins.
2. Select the Available tab, search for “snyk”, and install the official Snyk
plugin.
3. Restart Jenkins if prompted.

Step 2: Create a Snyk Account & API Token


1. Go to https://snyk.io/ and sign up for a free account (or log in).
2. In your account settings, generate an API token.

Step 3: Configure Snyk Installation in Jenkins


1. Go to Manage Jenkins → Global Tool Configuration (or Tools).
2. Locate the Snyk installations section.
3. Click Add Snyk and select Install automatically to manage the Snyk
CLI.




4. Optionally provide a name (e.g., snyk).


5. Save the configuration.

Step 4: Add API Token to Jenkins Credentials


1. Go to Manage Jenkins → Credentials → System → Global credentials
(unrestricted).
2. Add a new credential:
— Kind: Snyk API token.
— Token: Paste the API token from Snyk.
— ID: Assign a recognizable ID (Jenkins will generate one if left blank).
3. Save the credentials.

Step 5: Add a Pipeline Stage for Snyk


In your Jenkinsfile, add the following stage:
stage('SCA using snyk') {
    steps {
        snykSecurity(
            snykInstallation: 'snyk',
            snykTokenId: '79230cba-8022-423d-80b0-1c625dc7b13a'
        )
    }
}

Listing 3.2 – SCA using Snyk stage in the Jenkinsfile.


— snykInstallation should match your tool name in Global Tool Configura-
tion.
— snykTokenId is the Jenkins credential ID for the API token.
— The optional failOnIssues: false parameter allows the build to complete
even if vulnerabilities are found (configure as needed).

Step 6: Inspecting Snyk Output


— No explicit archive step is needed—Snyk plugin automatically captures and
displays scan data.
— After running the pipeline, a Snyk Security Report section will appear in
the Jenkins build UI.
— The detailed report includes vulnerability breakdowns and remediation ad-
vice.

These steps set up seamless integration of Snyk into your Jenkins pipeline, en-
abling continuous monitoring of open-source dependencies and clear visibility into
security issues directly within the build dashboard.




3.3 Software Bill of Materials (SBOM)


A Software Bill of Materials (SBOM) is a comprehensive inventory that de-
tails every ingredient that goes into building software. In modern software devel-
opment and security practices, an SBOM is indispensable for several reasons.
It provides transparency into the software components, aids in tracking and man-
aging vulnerabilities, and ensures compliance with security standards and regu-
lations. As software systems grow more complex and integrated, the role of an
SBOM becomes critical in managing risk and protecting software from potential
security breaches.
Given the diversity of software applications and the industries they serve, different
types of SBOMs have been developed to cater to specific needs. These tailored
SBOMs help in addressing the unique challenges posed by different technological
environments, ranging from automotive systems and medical devices to large-scale
enterprise applications.
Understanding the appropriate type of SBOM to deploy can significantly enhance
an organization’s ability to manage its software supply chain securely and effi-
ciently.

3.3.1 What is SBOM?


An SBOM is essentially a detailed list that captures all the components, libraries,
dependencies, and licenses involved in the construction of software. This document
serves as a critical tool for numerous stakeholders, including developers, security
professionals, and compliance officers.
For developers, an SBOM provides a clear map of the software’s architecture,
making it easier to update or troubleshoot the product. Security professionals
use SBOMs to quickly identify potential vulnerabilities, especially those linked to
third-party components. Compliance officers rely on SBOMs to ensure that the
software meets regulatory standards, particularly those concerning cybersecurity
and privacy.
The necessity of SBOMs extends beyond just operational or compliance require-
ments. It is a foundational element in building trust and assurance in software
integrity, especially in sectors where safety and reliability are paramount. As
software continues to eat the world, the ability to scrutinize and verify every com-
ponent through an SBOM becomes not just beneficial but essential for sustaining
the security and functionality of digital infrastructures.

3.3.2 Importance of SBOM Across Various Stages of the Software Supply Chain
The SBOM plays a vital role throughout the software supply chain—from devel-
opment to deployment and maintenance. During the development phase, it helps
engineers understand the structure and dependencies of their application, facili-
tating more informed decisions.
In the deployment stage, an SBOM allows security teams to perform thorough risk




assessments, ensuring that only secure and compliant components are released into
production. During maintenance, it serves as a critical resource for quickly address-
ing new vulnerabilities as they are disclosed, by identifying which applications are
affected by a particular issue.
SBOM enhances transparency in the software supply chain, allowing stakeholders
to understand component composition and origins. This transparency aids in
effective risk management by identifying and prioritizing security risks. SBOM
also ensures compliance with regulatory standards.
Furthermore, SBOM facilitates efficient vulnerability management, helping teams
quickly address security issues. In the event of a security incident, having an SBOM
accelerates response efforts, and its integration into the DevSecOps pipeline enables
continuous monitoring for ongoing security assessment throughout the software
development lifecycle.

3.3.3 Types of SBOM


— Flat Format SBOM: This type presents all components in a single list
without detailing their relationships. It’s useful for simple applications where
dependencies are minimal and straightforward.
— Hierarchical Format SBOM: This format shows components in a tree-like
structure, depicting how each component is related to and dependent on oth-
ers. It’s ideal for complex applications where understanding the dependency
structure is crucial for security and maintenance.
— Relationship Format SBOM: The most detailed type, this format in-
cludes not only hierarchical information but also the relationships and inter-
actions between components, such as dynamic link libraries and API calls.
This format is suited for applications in dynamic environments where com-
ponents interact in complex ways, such as in large integrated systems.
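To make the formats concrete, a minimal flat-format SBOM in CycloneDX JSON might look like the following sketch (the component shown is illustrative, not taken from the project):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "library",
      "name": "spring-core",
      "version": "5.3.30",
      "purl": "pkg:maven/org.springframework/spring-core@5.3.30"
    }
  ]
}
```

A hierarchical or relationship format would add a dependencies section that links components to each other by reference, capturing the tree structure described above.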

3.3.4 Automating SBOM Generation in CI/CD Pipelines


Integration into CI/CD is straightforward with the right approach:
1. Add SBOM generation as a build step using tools like Syft, Trivy, or the
CycloneDX Maven plugin.
2. Store the SBOMs alongside build artifacts in your repository.
3. Set up verification gates that check SBOMs for policy violations before de-
ployment.
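To illustrate the verification-gate idea, the sketch below checks a CycloneDX-style SBOM against a deny-list of component names. The deny-list and the sample document are illustrative assumptions, not part of this project's pipeline:

```python
# Hypothetical deny-list of component names; in a real pipeline this
# would be loaded from a policy file kept under version control.
DENIED = {"log4j-core"}

def sbom_violations(bom: dict) -> list:
    """Return names of SBOM components that appear on the deny-list."""
    return [c.get("name") for c in bom.get("components", [])
            if c.get("name") in DENIED]

# Minimal CycloneDX-shaped document for illustration.
sample_bom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.14.0"},
        {"type": "library", "name": "commons-lang3", "version": "3.12.0"},
    ],
}

print("policy violations:", sbom_violations(sample_bom))
```

In a Jenkins stage, such a script would run right after SBOM generation and exit non-zero on any violation, failing the build before deployment.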

3.3.5 Integration of SBOM in Jenkins Pipeline


Installation of Syft
Run the following command from the droplet terminal:
$ curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin


Listing 3.3 – Installing Syft.

Jenkins Pipeline Stage for SBOM Generation

stage('Generate SBOM') {
    steps {
        sh '''
            syft scan dir:. --output cyclonedx-json=sbom.json
        '''
        archiveArtifacts allowEmptyArchive: true,
            artifacts: 'sbom*',
            fingerprint: true,
            followSymlinks: false,
            onlyIfSuccessful: true
        sh 'rm -rf sbom*'
    }
}

Listing 3.4 – SBOM Stage in jenkinsfile.

3.4 Static Application Security Testing (SAST)


3.4.1 Introduction
As software development becomes increasingly complex and fast-paced, ensuring
the security of code is more important than ever. Static Application Security
Testing (SAST) is one of the key tools for achieving this goal.

3.4.2 What is SAST?


Static Application Security Testing (SAST) involves analyzing an application’s
source code very early in the Software Development Life Cycle (SDLC). The SAST
analysis specifically looks for coding and design vulnerabilities that make an orga-
nization’s applications susceptible to attack.
Also known as white box testing, SAST analyzes an application from the “inside
out” when it is in a non-running state, trying to gauge its security strength.
SAST solutions help prevent security issues before they move to later stages in
the development cycle by scanning the entire codebase. This process is performed
without executing the application and requires syntactic awareness of the code’s
structure and mechanisms, such as:
— Programming language constructs
— External dependencies
— Method calls and execution order
When correctly implemented, SAST can protect against many common vulnera-
bility classes, including several listed in the OWASP Top 10, such as:
— Memory leaks
— Cross-site scripting (XSS)
— SQL injection


— Authentication and access control issues


SAST tools are often integrated into modern Integrated Development Environ-
ments (IDEs) or offered as plugins. This allows developers to receive immediate
feedback while writing code, reducing the time and cost of fixing vulnerabilities.
However, SAST tools can also produce a high number of false positives, potentially
leading to alert fatigue. Therefore, manual review by security experts is often
necessary to validate findings and filter out irrelevant alerts.

3.4.3 Types of SAST Testing


There are three fundamental types of SAST testing:
1. Source Code Analysis — analyzes the original source code written by
developers.
2. Bytecode Analysis — examines compiled code, often used when source
code is not available.
3. Binary Code Analysis — analyzes executable binaries, suitable for third-
party or legacy applications.
SAST solutions can be seamlessly integrated into the development workflow, en-
abling developers to monitor their code continuously. This integration allows
developers to detect and remediate vulnerabilities in real time, before the code
progresses to later stages in the development cycle.
By incorporating SAST tools early, organizations can address security concerns
proactively and improve the overall security posture of their software products.

3.4.4 What Are the Tools Used in SAST?


3.4.4.1 List of SAST Tools Used
Below is a summarized list of the Static Application Security Testing (SAST) tools
presented in the previous table:
— SonarQube – Open-source tool for code quality and security analysis across
multiple languages.
— Checkmarx CxSAST – Enterprise-grade SAST solution with CI/CD in-
tegration.
— Fortify Static Code Analyzer – Enterprise tool supporting a wide range
of languages for vulnerability detection.
— Veracode Static Analysis – Cloud-based SAST offering easy integration
and detailed scans.
— CodeSonar – Advanced static analysis tool that also provides DAST capa-
bilities.
— ESLint – Lightweight static analyzer for JavaScript, enforcing code quality
rules.
— Infer – Open-source mobile-focused SAST for Objective-C and Swift.


3.4.5 How SAST Works

Figure 3.1 – How SAST works

3.4.6 Setting up SAST for Our CI/CD Pipeline


To enhance static application security testing (SAST) coverage in our CI/CD
pipeline, we integrated two powerful tools: SonarQube and Semgrep. This
combination offers both code quality analysis and deep customizable vulnerability
scanning.

3.4.6.1 SonarQube
SonarQube is an open-source platform developed by SonarSource for continuous
inspection of code quality. It performs automatic reviews using static analysis to
detect bugs and code smells across more than 30 programming languages.
SonarQube was selected over other SAST tools due to its:
— Extensive documentation
— Rich plugin ecosystem
— Seamless integration with Jenkins
— Excellent support for Java projects


Integrating SonarQube with Jenkins


Install the SonarQube Plugin
1. Go to Manage Jenkins > Plugins > Available plugins.
2. Search for sonarqube and install the SonarQube Scanner plugin.
3. Check the box to restart Jenkins once installation completes.

Install the SonarQube Server You can use either a cloud VM or a local
Docker setup. For simplicity, a local Docker installation is recommended. Follow
the official steps at https://docs.sonarsource.com/sonarqube/latest/try-out-sonarqube/.
Create persistent volumes:
$ docker volume create sonarqube_data
$ docker volume create sonarqube_logs
$ docker volume create sonarqube_extensions

Run SonarQube in a Docker container:


$ docker run -d \
    --name sonarqube \
    -p 9000:9000 \
    -e SONAR_ES_BOOTSTRAP_CHECKS_DISABLE=true \
    -v sonarqube_data:/opt/sonarqube/data \
    -v sonarqube_logs:/opt/sonarqube/logs \
    -v sonarqube_extensions:/opt/sonarqube/extensions \
    sonarqube:latest

Access the web UI at http://localhost:9000, log in with the default credentials,
and update your password.

Integration with Jenkins


1. Create a new project in SonarQube (e.g., jenkinsPipeline).
2. Generate a global analysis token from My Account > Security and save it.
3. In Jenkins, go to Manage Jenkins > Credentials > Global and:
— Set Kind to Secret Text
— Paste your token as the secret
4. Now, go to Manage Jenkins > Configure System > SonarQube servers.
5. Add a new server with:
— Name: sonarQube
— URL: http://localhost:9000
— Token: Select your stored secret text credential


Pipeline Integration Finally, add the following to your Jenkinsfile:


stage('SonarQube Analysis') {
    steps {
        withSonarQubeEnv(installationName: 'sonarQube') {
            sh "mvn clean verify sonar:sonar -Dsonar.projectKey=jenkinsPipeline -Dsonar.projectName='jenkinsPipeline'"
        }
    }
}

Listing 3.5 – SonarQube Stage in Jenkins Pipeline
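A common follow-up, shown here as an optional sketch rather than a stage of our pipeline, is a quality-gate step that blocks the build until SonarQube reports its verdict. The waitForQualityGate step comes with the SonarQube Scanner plugin and requires a webhook from the SonarQube server back to Jenkins:

```groovy
stage('Quality Gate') {
    steps {
        // Waits for SonarQube's webhook callback; aborts the pipeline
        // if the project fails the configured quality gate.
        timeout(time: 5, unit: 'MINUTES') {
            waitForQualityGate abortPipeline: true
        }
    }
}
```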

3.4.6.2 Semgrep
Semgrep is a fast and powerful SAST tool that allows teams to detect security
issues, enforce code standards, and write custom or community-driven rules.

For this project, we used Semgrep enhanced by AI to analyze Java and YAML
codebases.
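To give a feel for Semgrep's rule language, here is a minimal custom rule for flagging hard-coded credentials in Java. The rule id, message, and patterns are illustrative assumptions, not rules used in this project:

```yaml
rules:
  - id: java-hardcoded-credential
    languages: [java]
    severity: ERROR
    message: Possible hard-coded credential in source code
    patterns:
      # Match string literal assignments...
      - pattern: String $VAR = "...";
      # ...where the variable name suggests a secret.
      - metavariable-regex:
          metavariable: $VAR
          regex: (?i).*(password|passwd|secret|token).*
```

Rules like this can live in the repository and run alongside the registry rules during `semgrep ci`.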

Local Setup
1. Sign up at https://semgrep.dev/login using GitHub or GitLab.
2. Create an organization and select Run on CLI.
3. Follow the CLI setup instructions at https://semgrep.dev/onboarding/scan.
4. Run your first scan using:
$ semgrep ci

5. After scanning, visit the dashboard at https://semgrep.dev/orgs/ to inspect
vulnerabilities.

Jenkins Integration
1. Store your Semgrep API token as a Jenkins credential:
— Kind: Secret Text
— ID: SEMGREP_APP_TOKEN
2. Add Semgrep to the Jenkins pipeline:


stage('Semgrep - Scan') {
    steps {
        timeout(time: 10, unit: 'MINUTES') {
            sh '''
                python3 -m venv venv
                . venv/bin/activate
                pip3 install semgrep
                semgrep ci
            '''
            // Note: remove the --disable-pro flag when we add more memory to the Jenkins server
        }
    }
}

Listing 3.6 – Semgrep CI/CD Integration


Below is the Jenkins environment block used to configure global environment
variables for the pipeline:
environment {
    // SEMGREP_BASELINE_REF = ""
    SEMGREP_APP_TOKEN = credentials('SEMGREP_APP_TOKEN')
    SEMGREP_PR_ID = "${env.CHANGE_ID}"
    // SEMGREP_TIMEOUT = "300"
}

Listing 3.7 – Environment Block in Jenkinsfile

Testing and Reviewing Semgrep Findings


After running the pipeline, visit https://semgrep.dev/orgs/ to view findings in
the dashboard. Investigate each issue to determine whether it is a false positive.
If the issue is valid, apply the fix and rerun the scan.

3.4.7 SAST Strengths and Weaknesses


Static Application Security Testing (SAST) is a valuable tool in a comprehensive
security program, but like any security tool it has strengths and weaknesses.
Understanding them, summarized below as a SWOT analysis, is crucial to using
SAST effectively.
— S (Strengths): Static analysis is easy to get started with and usually
straightforward. It can perform both data flow and control flow analysis.
— W (Weaknesses): By its nature, static analysis is prone to high levels
of false positives and takes considerable time. It cannot find runtime and
business logic bugs.
— O (Opportunities): Provides fast feedback if configured properly. Can
be tuned to reduce false positives. Many tools support a wide range of
programming languages.
— T (Threats): Many tools do not integrate well with modern CI/CD
pipelines. The inability to manage false positives locally can be a major
limitation.


The following summarizes key strengths and weaknesses of SAST tools:

Strengths:
— Can analyze code at any stage of development
— Can analyze large codebases quickly
— Can identify common vulnerabilities and coding errors
— Can be integrated into development processes for early vulnerability identification
— Can help enforce coding standards and best practices
— Can be automated and integrated into CI/CD pipelines

Weaknesses:
— Can produce false positives and false negatives
— May not detect vulnerabilities that require runtime data
— Cannot detect certain types of vulnerabilities, such as design flaws
— May require specialized knowledge to configure and use effectively
— May not be effective at identifying complex vulnerabilities or those that require a deep understanding of the codebase
— May not be suitable for all types of applications or programming languages

Table 3.1 – Summary of SAST Strengths and Weaknesses

3.5 Build
3.5.1 What is Build?
In DevOps and DevSecOps pipelines, the build phase is essential for ensuring
that recent code changes do not break the application. If the project is successfully
built, it moves on to the testing phase. Once tests pass, the pipeline proceeds to
the final deployment stage.
In our case, since this is a DevSecOps pipeline focused primarily on performing
security scans and identifying vulnerabilities, we are mainly concerned with the
testing aspects of the pipeline for now. Building and deploying the application
are not the final goals—yet—but are still necessary steps for the tools we use.
Almost every security scanning tool in a DevSecOps pipeline requires a built ver-
sion of the project to analyze it effectively. While some tools build the project
internally, many do not. Therefore, adding a dedicated build stage helps ensure
smooth and accurate scanning and adds minimal overhead to the pipeline.
This particular project is a Maven-based Java application, so we use the following
command to compile and package it:
mvn clean package

3.5.2 Containerizing the Application


In the build phase, we also use a Dockerfile to containerize the application. This
enables us to package the source code, its dependencies, and runtime configuration
into a Docker image.


Benefits of containerization:
— Consistency across environments
— Isolation from host system dependencies
— Portability to different platforms and clouds
— Faster and reliable deployments
— Integration with other pipeline stages (scanning, testing, etc.)

See Appendix A.5 for the Dockerfile

3.5.3 Jenkinsfile: Build Stage


The following snippet shows how the build stage is implemented in the Jenkins
pipeline:
stage('build') {
    steps {
        echo 'Building the application...'
        sh """
            docker rmi ${DOCKER_IMAGE} || true
            docker build -t ${DOCKER_IMAGE} .
        """
    }
}

Listing 3.8 – Build Stage in jenkinsfile


Below is the Jenkins environment block used to configure global environment
variables for the pipeline:
environment {
    DOCKER_IMAGE = 'uwinchester/pfa_app'
}

Listing 3.9 – Environment Block in Jenkinsfile


Explanation:
— Removes any previously built Docker image with the same name to ensure
a clean build.
— Uses the Dockerfile to build a fresh image, executing all RUN commands
within it (including building the project and installing Tomcat).
— The resulting image is ready for deployment and can be used in later stages,
such as dynamic application security testing (DAST).
By including this step, we ensure that our application is packaged and ready
for both deployment and security testing within a consistent, isolated, and
reproducible environment.


3.6 Push Stage


3.6.1 What is the Push Stage?
The push stage in a CI/CD pipeline is responsible for uploading the built Docker
image to a remote container registry such as Docker Hub. This step is essential to
ensure that the application can be retrieved and deployed consistently in different
environments—whether for testing, staging, or production.

3.6.2 Why Is the Push Stage Important?


The push stage provides the following benefits:
— Centralized Access: It stores container images in a remote registry acces-
sible to all environments.
— Version Control: Each image can be tagged and versioned, allowing re-
producible builds.
— Enables Deployment: Later pipeline stages (like DAST or production
deployment) rely on pulling this pushed image.
— Team Collaboration: Multiple developers or teams can pull the same
image and work on it.

3.6.3 What is Docker Hub?


Docker Hub is a cloud-based repository provided by Docker for building, sharing,
and managing container images. It acts like GitHub for Docker images. Docker
Hub supports both public and private repositories and integrates directly with
Docker CLI, making it easy to push and pull images.

3.6.4 Adding Docker Hub Credentials to Jenkins


To securely push images to Docker Hub from Jenkins, credentials must be stored
in Jenkins’ secure credential store.
Steps to Add Docker Hub Credentials to Jenkins:
1. Log in to your Jenkins web interface.
2. Navigate to Manage Jenkins > Credentials.
3. Select (global) > Add Credentials.
4. Choose Kind: Username with password.
5. Fill in the fields:


— Username: your Docker Hub username


— Password: your Docker Hub password or access token
— ID: dockerhub-creds (this must match the ID in your Jenkinsfile)
— Description: DockerHub credentials for image push
6. Click OK to save.

3.6.5 Jenkinsfile Configuration for Push Stage


The following code block defines the push stage in the Jenkins pipeline. It logs in
to Docker Hub using the securely stored credentials and pushes the Docker image:
stage('push') {
    steps {
        echo 'Pushing the image to Docker Hub...'
        withCredentials([usernamePassword(
            credentialsId: 'dockerhub-creds',
            usernameVariable: 'DOCKER_USER',
            passwordVariable: 'DOCKER_PWD'
        )]) {
            // Single-quoted strings avoid Groovy interpolation of secrets;
            // --password-stdin keeps the password out of the process list.
            sh 'echo "$DOCKER_PWD" | docker login -u "$DOCKER_USER" --password-stdin'
            sh 'docker push "$DOCKER_IMAGE"'
        }
    }
}

Explanation:
— withCredentials securely injects the stored Docker Hub credentials into
environment variables.
— docker login authenticates to Docker Hub using those credentials.
— docker push uploads the Docker image (defined by ${DOCKER_IMAGE}) to
the Docker Hub repository.
This push stage ensures that your Docker image is securely stored and ready for
deployment in any downstream environment.

3.7 Conclusion
This chapter demonstrated how core security activities can be integrated directly
into the Continuous Integration (CI) pipeline, enabling early and automated
detection of vulnerabilities and misconfigurations.

We began with Secret Scanning, using Talisman to identify hardcoded cre-
dentials and secrets in the codebase. This process was enhanced by a custom
JSON-to-HTML conversion script to generate developer-friendly reports, which
were archived and made accessible in Jenkins build artifacts.

Next, we implemented Software Composition Analysis (SCA) using both
OWASP Dependency-Check and Snyk. These tools allowed us to automatically
inspect third-party dependencies for known vulnerabilities and license compliance
issues, contributing to a secure software supply chain.


We also integrated Static Application Security Testing (SAST) with tools
like SonarQube and Semgrep. This allowed for in-depth analysis of the source
code for insecure patterns, code quality issues, and logic flaws. By embedding
SAST directly into the CI pipeline, developers receive fast feedback and can resolve
vulnerabilities earlier in the SDLC.

To support vulnerability traceability and transparency, we introduced the genera-
tion of a Software Bill of Materials (SBOM) using Syft. The SBOM provides
a complete inventory of software components, enabling efficient vulnerability re-
sponse and license audits.

Finally, the chapter concluded with the implementation of a Push Stage, which
uploads the Dockerized application to Docker Hub. This step is crucial for promot-
ing the build to downstream environments and ensures that the container image
is readily available for deployment or further testing stages.

In summary, Chapter 3 laid out a robust CI pipeline that not only builds and
packages the application but also enforces multiple layers of security. Each stage
— from secret scanning and SCA to SAST, SBOM, and image pushing — works
together to support a secure, automated, and developer-friendly workflow aligned
with modern DevSecOps principles.



Chapter 4

Continuous Delivery

4.1 Container Security Scanning


4.1.1 What is Container Scanning?
Container image scanning is a critical security practice that analyzes Docker or
OCI-compliant images for known vulnerabilities, misconfigurations, and sensitive
data leaks. These scans help identify Common Vulnerabilities and Exposures
(CVEs) in OS packages and application dependencies before deployment.

4.1.2 Why Container Scanning is Important


— Prevent Vulnerability Propagation: Detects CVEs before containerized
apps are shipped.
— Compliance: Ensures adherence to security baselines (e.g., CIS bench-
marks).
— Shift-Left Security: Embeds security earlier in the CI/CD pipeline.
— Runtime Risk Reduction: Identifies high-risk configurations like root
execution or open ports.

4.1.3 Open Source Tools for Container Scanning


Several tools are available for container vulnerability scanning:
— Trivy: Lightweight and fast scanner with support for OS packages, applica-
tion dependencies, IaC files, and more.
— Clair: Static analysis tool used by CoreOS Quay.
— Anchore Engine: Policy-based container security scanning and compli-
ance.

4.1.4 Introducing Trivy


Trivy, developed by Aqua Security, is a versatile and developer-friendly tool that
supports:


— Vulnerability scanning for OS and language-specific packages.

— Container image and file system scanning.

— Support for SBOM formats (CycloneDX, SPDX).

— Custom policies and filtering for CI pipelines.

4.1.5 Pipeline Configuration for Container Scanning


To enforce security checks on Docker images, the Jenkins pipeline includes a ded-
icated stage that uses Trivy to scan for CRITICAL and HIGH severity vulner-
abilities. Below is the configuration and breakdown:
See Appendix A.6 for the Jenkinsfile container scanning stage.

4.1.6 Key Steps Explained

— Cleanup: Removes previously cached Trivy or scan files to avoid residue interference.
— Image Verification: Checks that the Docker image exists locally before scanning.
— Trivy Installation: Downloads and installs Trivy if it is not already available in the environment.
— Vulnerability DB Update: Downloads the latest vulnerability database for accurate and current scanning.
— Scan Execution: Scans the image using Trivy with the severity filter CRITICAL,HIGH and outputs results in JSON format.
— Archiving Reports: Stores the scan result as an artifact (trivy-report.json) for auditing and compliance.
— Security Gate: The pipeline fails if critical vulnerabilities are detected, enforcing a strong security gate.
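The security-gate step can be sketched as a small script that parses Trivy's JSON report and counts blocking findings. The report shape assumed here (Results, each with a Vulnerabilities list carrying a Severity field) follows Trivy's JSON output format, and the sample data is illustrative:

```python
def count_blocking(report: dict, blocking=("CRITICAL", "HIGH")) -> int:
    """Count vulnerabilities at blocking severities in a Trivy JSON report."""
    total = 0
    for result in report.get("Results", []):
        # "Vulnerabilities" may be absent or null for clean targets.
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") in blocking:
                total += 1
    return total

sample_report = {
    "Results": [
        {"Target": "uwinchester/pfa_app (debian)", "Vulnerabilities": [
            {"VulnerabilityID": "CVE-2021-44228", "Severity": "CRITICAL"},
            {"VulnerabilityID": "CVE-2023-0001", "Severity": "MEDIUM"},
        ]}
    ]
}

print("blocking findings:", count_blocking(sample_report))
```

A pipeline wrapper would load trivy-report.json and exit non-zero when the count is positive, which is what makes the stage an enforcing gate.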

4.1.7 Conclusion
Container scanning ensures that no critical vulnerabilities are present in container-
ized applications before they reach production. By integrating Trivy into the CI
pipeline, we add an essential security gate aligned with DevSecOps principles. The
pipeline is configured to stop builds with unresolved critical issues, ensuring that
the artifacts built and pushed to Docker Hub remain secure and compliant. This
integration supports both container security and infrastructure hygiene in parallel
with IaC validation.


4.2 DAST
4.2.1 What is DAST?
Dynamic Application Security Testing (DAST) is the process of analyzing a web
application through the front-end to find vulnerabilities through simulated attacks.
This type of approach evaluates the application from the “outside in” by attacking
an application like a malicious user would. After a DAST scanner performs these
attacks, it looks for results that are not part of the expected result set and identifies
security vulnerabilities.
DAST is important because developers don’t have to rely solely on their own
knowledge when building applications. By conducting DAST during the SDLC,
you can catch vulnerabilities in an application before it’s deployed to the public. If
these vulnerabilities are left unchecked and the app is deployed as such, this could
lead to a data breach, resulting in major financial loss and damage to your brand
reputation. Human error will inevitably play a part at some point in the Software
Development Life Cycle (SDLC), and the sooner a vulnerability is caught during
the SDLC, the cheaper it is to fix.
When DAST is included as part of the Continuous Integration/Continuous Deliv-
ery (CI/CD) pipeline, this is referred to as “Secure DevOps,” or DevSecOps.
A DAST scanner searches for vulnerabilities in a running application and then
sends automated alerts if it finds flaws that allow for attacks like SQL injections,
Cross-Site Scripting (XSS), and more. Since DAST tools are equipped to function
in a dynamic environment, they can detect runtime flaws which SAST tools can’t
identify.
To use the example of a building, a DAST scanner can be thought of like a security
guard. However, rather than just making sure the doors and windows are locked,
this guard goes a step further by attempting to physically break into the building.
The guard might try to pick the locks on the doors or break windows. After
finishing this examination, the guard could report back to the building manager
and provide an explanation of how he was able to break into the building. A
DAST scanner can be thought of in this same way – it actively attempts to find
vulnerabilities in a running environment so the DevOps team knows where and
how to fix them.

4.2.2 Deployment for Testing


In order to perform dynamic application security testing (DAST) on our
web application, we first need to deploy it in a running environment. For this
purpose, we use Apache Tomcat to serve the application.
However, deploying the application directly to Tomcat exposes the full Tomcat
interface and admin panels, which is a potential security risk. DAST tools
would also scan Tomcat’s default endpoints, which can produce noise and irrelevant
results. Worse, in a production scenario, exposing Tomcat’s internals can give
attackers valuable information about the infrastructure.


Solution: Reverse Proxy with Nginx

To solve this, we introduce an Nginx reverse proxy. The idea is to:

— Hide Tomcat’s default UI and metadata from external users.


— Only expose the actual application under a specific path, e.g.,
http://<ip>:80/WebApp.
— Prepare the deployment architecture for production-hardening (least privi-
lege, minimal exposure).
— Apply the same structure in both test and production environments to
ensure consistency.

This setup helps to ensure that only the necessary endpoint is exposed to
the DAST scanner, mimicking real-world conditions while keeping the underlying
infrastructure safe. To orchestrate both the image built in the build stage
(uwinchester/pfa_app) and the Nginx proxy, we use Docker Compose. Here's the
configuration:
services:
  tomcat:
    image: uwinchester/pfa_app
    container_name: tomcat-devsecops
    ports:
      - "8080"
    networks:
      - devsecops-net

  nginx:
    image: nginx:alpine
    container_name: nginx-devsecops
    depends_on:
      - tomcat
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
    networks:
      - devsecops-net

networks:
  devsecops-net:
    driver: bridge
Listing 4.1 – docker-compose.yml file


The file default.conf configures the reverse proxy to forward all requests to the
Tomcat container:
server {
    listen 80;
    location / {
        proxy_pass http://tomcat:8080/WebApp/;
    }
}

Listing 4.2 – default.conf file


This configuration ensures that only the /WebApp endpoint is accessible, preventing
external access to other Tomcat endpoints.


4.2.3 OWASP ZAP


Zed Attack Proxy (ZAP) by Checkmarx is a free, open-source penetration testing
tool. ZAP is designed specifically for testing web applications and is both flexible
and extensible.

At its core, ZAP is what is known as a “manipulator-in-the-middle proxy.” It stands


between the tester’s browser and the web application so that it can intercept and
inspect messages sent between browser and web application, modify the contents
if needed, and then forward those packets on to the destination. It can be used as
a stand-alone application, and as a daemon process.

4.2.4 Jenkins Configuration for DAST Scanning


The following pipeline stage performs a DAST scan using OWASP ZAP and
archives the results:
stage('DAST') {
    steps {
        script {
            sh 'mkdir -p zap-reports'
            sh '''
                docker pull zaproxy/zap-stable
                docker run --rm \
                    -v "$WORKSPACE/zap-reports:/zap/wrk" \
                    -u $(id -u):$(id -g) \
                    -t zaproxy/zap-stable \
                    zap-full-scan.py \
                    -t http://104.248.252.219/ \
                    -r zap-report.html
            '''
        }
        echo "[INFO] ZAP scan completed. Check the report if the build fails."
        archiveArtifacts 'zap-reports/zap-report.html'
    }
}

Listing 4.3 – DAST stage in the jenkinsfile pipeline
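The stage above archives an HTML report; if a JSON report is also requested (the ZAP scan scripts accept a -J option for this), the results can feed an automated gate. The structure assumed below (site, alerts, riskcode) mirrors ZAP's traditional JSON report and should be verified against the actual output; the sample data is illustrative:

```python
def high_risk_alerts(report: dict, min_risk: int = 3) -> list:
    """Return names of alerts at or above the given ZAP risk code (3 = High)."""
    found = []
    for site in report.get("site", []):
        for alert in site.get("alerts", []):
            if int(alert.get("riskcode", 0)) >= min_risk:
                found.append(alert.get("alert"))
    return found

sample_zap = {
    "site": [{
        "@name": "http://104.248.252.219",
        "alerts": [
            {"alert": "SQL Injection", "riskcode": "3"},
            {"alert": "X-Content-Type-Options Header Missing", "riskcode": "1"},
        ],
    }]
}

print("high-risk alerts:", high_risk_alerts(sample_zap))
```

A wrapper script could fail the build whenever this list is non-empty, turning the DAST stage into a gate rather than a report-only step.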

4.2.5 Displaying the ZAP Report in Jenkins


To show the HTML report in the Jenkins interface using Jenkins Publish HTMl
Plugin and clean up the workspace


post {
    always {
        publishHTML target: [
            allowMissing: true,
            reportDir: './zap-reports/',
            reportFiles: 'zap-report.html',
            reportName: 'zap-reports',
            keepAll: true
        ]
        sh "rm -rf ${TRIVY_CACHE_DIR} || true"
    }
}

4.3 Deployment with Alerting and Monitoring


4.3.1 What is a WAF?
A Web Application Firewall (WAF) helps protect web applications by filtering
and monitoring HTTP traffic between a web application and the Internet. It
typically protects web applications from attacks such as cross-site request forgery
(CSRF), cross-site scripting (XSS), file inclusion, and SQL injection.
A WAF is a protocol layer 7 defense (in the OSI model) and is not designed to
defend against all types of attacks. This method of attack mitigation is usually
part of a suite of tools which together create a holistic defense against a range of
attack vectors.
By deploying a WAF in front of a web application, a shield is placed between the
web application and the Internet. A WAF is a type of reverse-proxy, protecting
the server from exposure by having clients pass through the WAF before reaching
the server.
A WAF operates through a set of rules, or policies, which aim to protect against
vulnerabilities in the application by filtering out malicious traffic. The value of a
WAF comes in part from the speed and ease with which policy modification can
be implemented, allowing for faster responses to varying attack vectors. During a
DDoS attack, for example, rate limiting can be quickly implemented.

4.3.2 ModSecurity
ModSecurity is an open source, free web application firewall (WAF). It establishes
an external security layer to detect and prevent attacks before they reach web
applications.

4.3.2.1 Key Features


— Real-time traffic inspection based on customizable rules
— Protection against SQL Injection, XSS, and Command Injection
— Virtual patching without modifying application code
— Logging and audit trails for HTTP transactions
— Integration with OWASP Core Rule Set (CRS)


4.3.2.2 How It Works


ModSecurity uses a set of rules and patterns to detect malicious requests. When
a request matches a known attack signature, it can block the request, log it, or
send alerts.
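As an illustration of the rule language, a single hand-written rule might look like the following (the rule id, message, and pattern are arbitrary examples; real deployments usually rely on the OWASP Core Rule Set rather than custom rules):

```
# Deny requests whose arguments contain an opening script tag (naive XSS check).
SecRule ARGS "@rx <script" \
    "id:100001,phase:2,deny,status:403,log,msg:'XSS attempt detected'"
```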

4.3.3 ELK Stack


The ELK Stack is a powerful open-source log management and data analysis
platform composed of:
— Elasticsearch: Indexes and stores logs in near real time
— Logstash: Parses and processes logs
— Kibana: Visualizes and analyzes log data

4.3.3.1 Use in DevSecOps


— Aggregates logs from apps, containers, and servers
— Monitors system and app behavior
— Detects security events
— Provides audit trails and supports incident investigation
— Offers alerting and dashboards

4.3.4 Integration of ELK Stack with ModSecurity


Logs generated by ModSecurity can be forwarded to the ELK stack for advanced
analysis and visualization. This integration provides:


— Real-time dashboards
— Searchable forensic logs
— Alerting mechanisms
— Centralized log storage for compliance

4.3.5 Deployment Setup


To simulate a secure deployment, we deploy the web app behind a reverse proxy
(Nginx) integrated with ModSecurity and centralize logging via the ELK stack.

4.3.5.1 Architecture Overview


— Tomcat runs the Java application
— Nginx with ModSecurity acts as WAF
— Logstash parses ModSecurity logs
— Elasticsearch indexes logs
— Kibana visualizes data

4.3.5.2 Configuration Files


1. default.conf

Purpose: Acts as a reverse proxy. It forwards external HTTP requests to the


Tomcat container running the application, while hiding Tomcat’s default metadata
and admin interface. This improves security and ensures that only the application
endpoint (e.g., /WebApp) is exposed to users or scanners.
server {
    listen 80;
    location / {
        proxy_pass http://tomcat:8080/WebApp/;
    }
}

Listing 4.4 – nginx configuration file


2. modsecurity.conf

Purpose: Enables the ModSecurity Web Application Firewall (WAF) inside the
Nginx server and links it to a rules configuration file. This allows Nginx to perform
real-time traffic inspection using the defined ModSecurity rules.
modsecurity on;
modsecurity_rules_file /etc/nginx/modsecurity.d/setup.conf;

Listing 4.5 – modsecurity configuration file


3. setup.conf


Purpose: Loads the OWASP Core Rule Set (CRS) to detect common web at-
tacks such as SQL injection and XSS. It also defines how and where ModSecurity
logs its audit data, including the log format (JSON) required for integration with
log analysis tools like Logstash.

# Note: the plugin rules will be uncommented when the container starts,
# depending on whether the respective files exist. This works around
# the issue that ModSecurity doesn't support optional includes on NGiNX.

# Allow custom rules to be specified in:
# /opt/modsecurity/rules/{before,after}-crs/*.conf

Include /etc/modsecurity.d/modsecurity.conf
Include /etc/modsecurity.d/modsecurity-override.conf
Include /etc/modsecurity.d/owasp-crs/crs-setup.conf
Include /etc/modsecurity.d/owasp-crs/plugins/*-config.conf
Include /etc/modsecurity.d/owasp-crs/plugins/*-before.conf
Include /etc/modsecurity.d/owasp-crs/rules/*.conf
Include /etc/modsecurity.d/owasp-crs/plugins/*-after.conf

SecAuditLogParts ABDEFHIJZ
SecAuditLogType Serial
SecAuditLog /tmp/modsec_audit.json
SecAuditLogFormat JSON

Listing 4.6 – setup configuration file


3. logstash.conf

Purpose: Configures Logstash to read the WAF audit logs generated by Mod-
Security, parse them as JSON, and forward the structured data to Elasticsearch.
This enables real-time monitoring, alerting, and visualization through Kibana.

input {
  file {
    type => "modsecurity"
    path => ["/tmp/modsec_audit.json"]
    start_position => beginning
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "logstash.json"
  }
  stdout {
    codec => rubydebug
  }
}

Listing 4.7 – logstash configuration file


4.3.5.3 docker-compose-waf.yml
The docker-compose-waf.yml file is used to orchestrate the application along
with a Web Application Firewall (WAF) for runtime protection and testing. You
can find the full CI/CD pipeline definition in the docker-compose-waf.yml pro-
vided in Annex docker-compose-waf.yml annex.

4.3.6 Stage in Jenkins Pipeline


The following Jenkins pipeline stage automates the deployment of the secure
environment with monitoring and alerting.
stage('Deployment with monitoring and alerting integrated') {
    steps {
        echo 'deployment'
        sh 'docker-compose down --rmi local --volumes --remove-orphans || true'
        sh 'docker rm -f tomcat-devsecops-waf'
        sh 'docker rm -f nginx-devsecops-waf'
        sh "docker rm -f ${DOCKER_IMAGE}"
        sh 'docker-compose -f docker-compose-waf.yml up -d'
    }
}

Listing 4.8 – Deployment stage in the Jenkinsfile pipeline


This stage ensures a clean environment by removing any existing containers, and
then spins up the full monitoring stack using Docker Compose.

4.4 Conclusion
Chapter 4 provides a comprehensive guide to integrating security into the Continu-
ous Delivery (CD) pipeline through automated and layered defense mechanisms. It
begins by illustrating how container scanning with Trivy ensures that Docker
images are free from critical vulnerabilities before deployment. This acts as an
essential security gate, preventing vulnerable artifacts from reaching production
environments.
The chapter then introduces Dynamic Application Security Testing (DAST)
using OWASP ZAP, emphasizing its role in simulating real-world attacks on de-
ployed applications. By scanning the application behind a secure reverse proxy
(Nginx), the setup mimics production conditions while keeping internal compo-
nents like Tomcat hidden and secure.
Next, the deployment extends into advanced monitoring and alerting by in-
corporating a Web Application Firewall (WAF) using ModSecurity, and
centralizing logs with the ELK Stack (Elasticsearch, Logstash, Kibana). This
setup enhances visibility, provides real-time threat detection, and facilitates foren-
sic analysis.
The integration of all these components into the Jenkins CI/CD pipeline illustrates
a mature DevSecOps approach, embedding security testing, enforcement, and
monitoring at every stage of software delivery. This ensures that not only is the
code secure, but the running environment is actively protected and observable,
fostering both compliance and resilience in modern application deployments.



Chapter 5

DevSecOps in Action

5.1 Objective
The objective of this final chapter is to validate the effectiveness and robustness
of the implemented DevSecOps pipeline by applying it to a real-world application.
This includes integrating a deliberately vulnerable or realistic web application into
the CI/CD pipeline and executing all configured security stages in a real-world
simulation.
Through this experiment, we aim to:
— Test the automation and accuracy of secret scanning, SAST, SCA, SBOM
generation, DAST, and container security checks.
— Analyze the detection and reporting mechanisms provided by each security
tool.
— Evaluate the pipeline’s ability to block insecure code or deployments based
on predefined security policies.
— Demonstrate the monitoring and alerting system in a production-like deploy-
ment environment.
— Identify the pipeline’s strengths and potential areas for improvement when
applied to a real application.
This final validation serves as a proof of concept, showcasing the practical benefits
of integrating security into every phase of the software delivery lifecycle.

5.2 Project Repository Structure


To validate the pipeline, we integrated our CI/CD configuration with a GitHub-
hosted project repository containing the source code, configurations, and pipeline
scripts. This structure was maintained across multiple branches to simulate re-
alistic team development scenarios, ensuring proper versioning and separation of
development, testing, and production stages.
Below is a detailed breakdown of the repository contents with annotated explana-
tions:


- default.conf            # Nginx default config for reverse proxy and WAF setup
- docker-compose-waf.yml  # Docker Compose setup including ModSecurity container
- docker-compose.yml      # Main Docker Compose file for local services (app, DB, etc.)
- Dockerfile              # Builds the Java app into a Docker image for deployment
- Jenkinsfile             # Jenkins pipeline (CI/CD stages: SAST, SCA, DAST, etc.)
- Jenkinsfile.old         # Backup or experimental version of Jenkinsfile
- logstash.conf           # Configuration for Logstash (log parser for ELK stack)
- modsecurity.conf        # WAF rule definitions for ModSecurity
- pom.xml                 # Maven build descriptor (dependencies, plugins, metadata)
- README.md               # GitHub README with usage, build, and contribution info
- setup.conf              # Custom configuration for ModSecurity
- src/                    # Root directory containing the source code and resources
  - test/java/com/datadoghq/workshops/sampleapp  # Unit tests for application verification
  - main/java/com/datadoghq/workshops/sampleapp  # Java package structure for the application

Listing 5.1 – Project Repository Structure

5.3 Entire jenkinsfile


The full Jenkins pipeline configuration is provided in Annex A.2 (Jenkinsfile) at
the end of the document.

5.4 Pipeline setup


To validate the complete DevSecOps pipeline, we created a new Jenkins pipeline
dedicated specifically to testing. This pipeline was fully automated and configured
to use our GitHub repository as the Source Code Management (SCM) system.

5.4.1 SCM Integration


The pipeline was linked to a GitHub repository that contained:
— The complete source code of the web application.
— All necessary pipeline configuration files (e.g., Jenkinsfile, Dockerfile,
Compose files, WAF configs).
— Static assets and environment configuration required for deployment and
testing.
The repository followed a clean structure, as described in the Project Repository
Structure section, allowing the pipeline to clone the full application and execute
all required stages seamlessly.
We used Git best practices, maintaining separate branches for development, test-
ing, and production simulation. For the validation test, a dedicated test branch
was used to isolate the experiment and avoid polluting the mainline.


5.4.2 Pipeline Setup in Jenkins


A new pipeline job was created in Jenkins using the following configuration:
— Project Type: Pipeline
— Definition: Pipeline script from SCM
— SCM: Git
— Repository URL: https://github.com/R4z1o/webapp
— Branch: */master
— Script Path: Jenkinsfile
This configuration ensured that any push to the master branch would automati-
cally trigger the CI/CD workflow defined in the Jenkinsfile. This file includes
stages for:
— Code checkout
— Secret scanning
— Static and dynamic analysis (SAST, DAST)
— Software Composition Analysis (SCA)
— SBOM generation
— Docker build, image pushing to DockerHub and image scanning
— Staging and final deployment with WAF and monitoring

5.5 Interface and Reports


Once the pipeline is configured and linked to the GitHub repository, we trigger
the validation by selecting the Build Now option in Jenkins. This action initiates
the execution of the pipeline defined in the Jenkinsfile, fetching the latest code
and configuration from the master branch.


5.5.1 Execution Monitoring


Jenkins provides two primary interfaces for observing the build process in real-
time:
— Console Output: Displays raw logs and step-by-step execution traces for
each pipeline stage. This is useful for debugging and validating execution
order.
— Blue Ocean Interface: A visual representation of the pipeline stages, show-
ing each phase (e.g., build, scan, test, deploy) as separate visual blocks with
success or failure status.

5.5.2 Build Artifacts and Reports


Upon completion of the pipeline execution, the Jenkins dashboard for the cur-
rent build displays various artifacts and security reports. These are generated by
different tools integrated into the pipeline and are accessible in two ways:
— Build Artifacts Section: Located at the bottom of the build page, this
section contains downloadable or viewable files produced by the build pro-
cess. It includes:
— Snyk vulnerability reports
— OWASP Dependency-Check reports
— Talisman (secret scanning) reports
— Semgrep and SonarQube static analysis results
— OWASP ZAP (DAST) scan reports
— Trivy container image scan reports
— Syft SBOM reports
— Right-Side Report Panel: Jenkins also offers a panel on the right-hand
side of the build dashboard. This panel displays reports that have been
explicitly published using supported Jenkins plugins:
— Structured reports: Tools like Snyk and OWASP Dependency-Check
are integrated using official Jenkins plugins and displayed in a struc-
tured UI with trend graphs.
— HTML reports: OWASP ZAP, utilize the Publish HTML Reports
plugin to show their outputs in rendered HTML format.

5.5.3 Notes on Report Visibility


Not all reports appear in the right-hand report panel. Some are only available in
the artifacts section, reachable from the status view in the sidebar of the current
build dashboard. This variation is due to the differing plugin support for each tool
and the formats in which they publish their outputs (JSON, HTML, etc.) in Jenkins.
This structured presentation of test results allows for centralized inspection, trace-
ability, and auditing, demonstrating the practical effectiveness of the integrated
DevSecOps pipeline.


5.6 Pipeline Execution


After triggering the pipeline by clicking Build Now, the execution begins, and the
progress can be monitored in two ways:
— Through the Console Output for raw logs.
— Using Blue Ocean for an intuitive graphical representation of the CI/CD
flow.
In Blue Ocean, each stage of the pipeline is clearly visualized. As shown during
execution, the pipeline pauses at the Deployment Approval stage. This pause
is intentional to allow developers and security engineers to:
— Review all generated security reports.
— Fix any critical or high-severity issues detected in earlier stages.
— Rebuild the application if necessary.
Once all necessary changes are made and verified, the user can resume the pipeline
by approving the deployment (e.g., by clicking Yes). This manual approval gate
enforces a DevSecOps best practice: ensuring security validation before any
production deployment.


5.6.1 Secret Scanning with Talisman


The Talisman scan identified multiple high-severity security risks across var-
ious files, primarily involving the presence of hardcoded secrets, such as
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, base64-encoded and hex-
encoded strings, and potential credential patterns. Sensitive data was most notably
found in the .env file, Jenkinsfile, and various frontend/backend source files.
These findings indicate a critical need for better secret management practices, such
as using environment variables or secret managers, and integrating pre-commit
hooks to prevent future secret leaks. Immediate remediation is recommended to
eliminate the exposed secrets and rotate any compromised credentials.
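One way to prevent such leaks at the source is to run Talisman as a client-side git hook. The sketch below is illustrative: it assumes the Talisman binary is installed at /usr/local/bin/talisman and uses the pre-commit hook location (Talisman also supports a pre-push mode):

```
#!/bin/sh
# .git/hooks/pre-commit — run Talisman before every commit so that
# hardcoded secrets are blocked before they ever reach the repository
/usr/local/bin/talisman --githook pre-commit
```

With this hook in place, a commit containing a suspected secret fails locally, complementing the server-side scan performed in the Jenkins pipeline.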

5.6.2 Software Composition Analysis (SCA)


5.6.2.1 OWASP Dependency-Check
The OWASP Dependency-Check tool scanned all project dependencies and iden-
tified the following vulnerabilities:
— 8 Critical
— 20 High
— 17 Medium
— 4 Low
These findings are mapped to corresponding CWE (Common Weakness Enu-
meration) identifiers, which categorize the underlying causes of software weak-
nesses (e.g., usage of vulnerable components, insecure configuration, or improper
validation). These classifications help developers understand not only what went
wrong but also why, promoting long-term secure design practices.
In Jenkins, results are integrated through the OWASP Dependency-Check plugin
and displayed on the left panel of the pipeline UI. The report includes detailed
CVE entries, CWE mappings, severity scores, and remediation recommendations.


5.6.2.2 Snyk SCA


The Snyk tool provided an in-depth scan of the application’s dependencies and
reported:
— 3 Critical
— 7 High
— 8 Medium
— 6 Low
These vulnerabilities were found across 45 dependencies. Each issue includes:
— CVE identifier and severity score (CVSS)
— Proof of Concept (PoC) when available
— Detailed explanation of the issue
— Recommended upgrade or patch
The Snyk report is accessible in the Jenkins interface via the right-side icon bar
and can also be downloaded from the build artifacts.


5.6.2.3 Summary of SCA Results

Tool                     Critical   High   Medium / Low
OWASP Dependency-Check   8          20     17 Medium, 4 Low
Snyk SCA                 3          7      8 Medium, 6 Low

Table 5.1 – Summary of vulnerabilities from SCA tools

Security Posture and Response: Both tools were integrated as automated
stages in the CI/CD pipeline. Builds are flagged if critical or high vulnerabilities
are found, ensuring developers are alerted early and required to remediate issues
before merging or deploying.

5.6.3 SBOM Generation with Syft


An SBOM was generated using the Syft tool with the CycloneDX 1.6 JSON format.
The scan identified the full set of dependencies declared in the project’s pom.xml,
including third-party Java libraries such as logstash-logback-encoder, lombok,
and various spring-boot-starter components. Each component is listed with
metadata such as version, package URL (PURL), and CPE identifiers, which are
useful for vulnerability tracking. The SBOM also defines the dependency relation-
ships, making it easier to assess the project’s software supply chain and potential
risks.

5.6.3.1 List of Packages Identified in SBOM


The following table lists the software components detected in the SBOM generated
by Syft, along with their respective group IDs and versions:

Package Name                   Group                      Version
logstash-logback-encoder       net.logstash.logback       7.2
lombok                         org.projectlombok          1.18.32
spring-boot-starter-actuator   org.springframework.boot   UNKNOWN
spring-boot-starter-test       org.springframework.boot   UNKNOWN
spring-boot-starter-web        org.springframework.boot   UNKNOWN
vulnerable-java-application    com.datadoghq.workshops    0.0.1-SNAPSHOT

Table 5.2 – Packages Identified by Syft SBOM Scan

This SBOM can be integrated with a vulnerability management platform such as
JIRA for compliance and risk management.

5.6.4 SAST
5.6.4.1 Semgrep Security Scan Results
The Semgrep scan identified several high-severity security vulnerabilities in the
web application:


1. XXE Vulnerabilities in XML Parser (CWE-611)


Two related findings in FileService.java (Line 18):
— saxreader-xxe-parameter-entities: Unsafe XML parser configura-
tion allows parameter entity expansion
— saxreader-xxe: General XXE vulnerability through document type
definitions
Both could lead to LFI, RCE, SSRF, or DoS attacks. Recommended fixes
include disabling DTDs with setFeature: (see Appendix A.4 for the corre-
sponding code snippet)
2. Spring Actuator Exposure (CWE-215)
In application.properties (Line 2), sensitive Actuator endpoints (/env,
/heapdump, etc.) are exposed without authentication. Requires Spring Se-
curity configuration. (see Appendix A.5 for the corresponding code snippet)
3. Dockerfile Security Misconfiguration (CWE-250)
The Dockerfile (Line 23) runs processes as root. Recommendation: Add
non-root USER directive to limit container compromise impact.

5.6.4.2 SonarQube Security Scan Results


The SonarQube scan identified a low-severity security issue related to the use
of the system’s PATH environment variable when executing external commands.
The vulnerability arises from relying on the default PATH to locate binaries like
ping, which can lead to command hijacking if an attacker places a malicious
executable earlier in the search path. This could potentially result in unintended
or unsafe behavior, especially in misconfigured or compromised environments. It


is recommended to either specify the full path to the intended binary or validate
and restrict the PATH variable to trusted directories only. (See Appendix A.6 for
the related code snippet.)

5.6.5 Manually Identified Vulnerabilities Not Detected by Scanners
Despite using Semgrep and SonarQube, a manual review of the code revealed
several critical vulnerabilities that were not flagged by either tool. These include
OS command injection, path traversal, and file inclusion issues, which could be
exploited in real-world attack scenarios.

5.6.5.1 OS Command Injection (CWE-78)


A command injection vulnerability exists in the use of
Runtime.getRuntime().exec(), where unvalidated user input (e.g., domainName)
is concatenated directly into a shell command. This could allow an attacker to
execute arbitrary system commands if they can control the domainName value.
Proper validation or use of safer process-building APIs is recommended. (See
Appendix A.6 )

5.6.5.2 Path Traversal / File Inclusion (CWE-22 / CWE-98)


The readFile method includes a check to ensure the path starts with an al-
lowed prefix, but this alone is insufficient to prevent path traversal or file inclu-
sion attacks. An attacker could use relative path traversal sequences (e.g., ../)
to access unauthorized files, bypassing the prefix check. Additionally, the file is
opened directly using FileReader without canonicalization, which increases the
risk. Stronger validation (e.g., canonical path resolution and strict whitelisting)
should be implemented. (See Appendix A.7 )
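A canonicalization-based check along these lines could look as follows. This is a minimal sketch, not the application's actual readFile logic; the class name SafeFileAccess is hypothetical:

```java
import java.io.File;
import java.io.IOException;

public class SafeFileAccess {
    // Canonicalization collapses "../" sequences and resolves symlinks,
    // so a prefix check on the canonical paths cannot be bypassed with
    // relative traversal tricks the way a raw string prefix check can.
    public static boolean isInsideBase(String baseDir, String requestedPath)
            throws IOException {
        File base = new File(baseDir).getCanonicalFile();
        File requested = new File(base, requestedPath).getCanonicalFile();
        return requested.getPath().equals(base.getPath())
                || requested.getPath().startsWith(base.getPath() + File.separator);
    }
}
```

Only after this check succeeds should the file be opened, ideally combined with a whitelist of permitted file names rather than an entire directory.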

5.6.6 Build and Push Stage


During the Build stage of the pipeline, the web application is compiled from its
Java source code using Maven. This process packages the application and builds
a Docker container image that includes the compiled application along with all
its runtime dependencies. The resulting image is prepared for deployment in later
stages. The generated image can be verified locally using:
$ docker images
REPOSITORY            TAG      IMAGE ID   SIZE
uwinchester/pfa_app   latest   332116f7   493 MB

Once the image is successfully built and verified, the pipeline proceeds to the Push
stage, where the image is uploaded to Docker Hub.
Pushing the image to a remote container registry like Docker Hub serves several
purposes:


— Ensures the image is accessible from different environments (e.g., staging,
production).
— Enables versioning and image traceability for audit and rollback purposes.
This step guarantees that the image used for deployment is identical to the one
built and tested during the pipeline, preserving consistency and reproducibility
across environments.

5.6.7 Container Scanning with Trivy


The Trivy scan on the container image uwinchester/pfa_app (based on Ubuntu
24.04) identified several critical and high-severity vulnerabilities, primarily origi-
nating from outdated dependencies in the application layer. Notably, the embed-
ded tomcat-embed-core library (version 10.1.20) contains multiple known CVEs
— including CVE-2025-24813, a critical remote code execution vulnerability
— along with other high-severity issues such as CVE-2024-50379, CVE-2024-
56337, and CVE-2025-22235.
These vulnerabilities affect key components including:
— Apache Tomcat — susceptible to RCE via PUT, TOCTOU, and session
deserialization flaws.
— Spring Boot and Spring Web MVC — impacted by improper access
control and path traversal vulnerabilities.

5.6.8 Dynamic Application Security Testing (DAST)


After deploying the application to port 8888 using the testing configuration
from the Docker Compose file, the web interface provides two main features:
index.html for testing websites performance and file.html for reading files.


The CI/CD pipeline then proceeds to the Dynamic Application Security Test-
ing (DAST) stage, performed using OWASP ZAP, a tool that scans running
applications to identify security vulnerabilities.

Figure 5.1 – index.html

The OWASP ZAP scanner utilizes fuzzing techniques and spidering to discover
exposed resources and vulnerabilities. During the scan, ZAP identified multiple
publicly accessible subdirectories and files such as:
— sitemap.xml
— robots.txt
— /js/ directory
These files may unintentionally leak internal information about the website struc-
ture, indexing preferences, or client-side logic. Furthermore, the security scan
revealed the following summarized vulnerability results:

Severity Level    Number of Findings
High              0
Medium            3
Low               4
Informational     3
False Positives   0

Table 5.3 – Summary of OWASP ZAP findings

The detailed vulnerabilities detected include:


— Content Security Policy (CSP) Header Not Set – Medium (1 instance)
— Missing Anti-clickjacking Header – Medium (1 instance)
— Spring Actuator Information Leak – Medium (1 instance)
— Insufficient Site Isolation Against Spectre Vulnerability – Low (4
instances)


— Permissions Policy Header Not Set – Low (2 instances)


— Server Leaks Version Information via "Server" Header – Low (4
instances)
— X-Content-Type-Options Header Missing – Low (2 instances)
— Modern Web Application – Informational (1 instance)
— Storable and Cacheable Content – Informational (4 instances)
— User Agent Fuzzer – Informational (36 instances)
The full report is stored in the Jenkins build artifacts and can be previewed using
the OWASP ZAP HTML interface or accessed via the sidebar if the Publish
HTML Reports plugin is enabled. All findings should be evaluated carefully, and
necessary security headers and configurations should be enforced to harden the
application before production deployment.

5.6.9 Final Deployment with WAF and ELK Stack Integration
After reviewing and resolving the security issues identified in the previous pipeline
stages, the user can proceed with deployment by clicking “Yes” in the deploy-
ment approval prompt. This action re-triggers the pipeline and, upon successful
execution, deploys the web application to port 80.
In parallel, Kibana is made accessible on port 5601 to provide real-time mon-
itoring, centralized logging, and alerting capabilities. This integration supports
observability best practices, allowing developers and DevOps teams to:
— Monitor system logs and HTTP traffic.
— Track application errors and performance metrics.
— Receive alerts based on WAF detections.
This final deployment step ensures that the application is both production-ready
and observable, supporting ongoing operational security and reliability.

Figure 5.2 – Deployed website on port 80 (HTTP)

The Kibana view on port 5601 displays the collected logs as well as the generated alerts.


Figure 5.3 – website logs

To verify the proper functioning of the Web Application Firewall (WAF), we con-
ducted XSS injection and OS injection tests using crafted payloads. These tests
aimed to simulate common web-based attacks and assess the WAF’s ability to
detect and block malicious requests in real-time.

Figure 5.4 – OS Injection


Figure 5.5 – XSS Injection

5.7 Conclusion
This chapter demonstrated the practical implementation of a DevSecOps pipeline
on a real-world Java web application. By integrating security tools throughout
the CI/CD lifecycle, the pipeline enabled early risk detection while maintaining
development agility.
Key highlights include:
— Secret Scanning: Talisman flagged hardcoded secrets, emphasizing the
need for secure coding practices.
— SCA: OWASP Dependency-Check and Snyk identified critical vulnerabilities
in dependencies, enabling proactive remediation.
— SBOM: Syft provided comprehensive component visibility, aiding in com-
pliance and vulnerability tracking.
— SAST: Semgrep and SonarQube revealed logic flaws and insecure configu-
rations addressed pre-deployment.
— Container Scanning: Trivy exposed outdated libraries like Tomcat and
Spring in the container image.
— DAST: OWASP ZAP found exposed endpoints and missing headers via
dynamic analysis.
— Deployment & Monitoring: The app was deployed securely with WAF
protection and real-time monitoring via Kibana.
This project confirms that a well-integrated DevSecOps pipeline strengthens secu-
rity posture while supporting fast, reliable software delivery.



Conclusion

This report presents the comprehensive design, implementation, and evaluation


of a full DevSecOps pipeline applied to a vulnerable Java-based web applica-
tion. By embedding security into each stage of the CI/CD lifecycle, the project
demonstrates a shift-left approach where vulnerabilities are detected and addressed
early—minimizing risks before production deployment.
Throughout the execution, multiple layers of security controls were enforced:
— Secret Scanning was performed on all commits using Talisman, identifying
hardcoded secrets and promoting secure version control practices.
— SCA tools (OWASP Dependency-Check and Snyk) flagged critical vulner-
abilities in third-party libraries, with actionable remediation suggestions.
— SBOM generation via Syft provided a transparent inventory of all software
components to support vulnerability tracking and license compliance.
— SAST using Semgrep and SonarQube revealed code-level weaknesses such as
XXE and insecure endpoint exposures, supporting secure coding standards.
— Container security scanning via Trivy helped uncover vulnerabilities in
the Docker image, emphasizing the need for updated base images and de-
pendency hygiene.
— DAST using OWASP ZAP uncovered runtime flaws and exposed endpoints,
simulating attacker behavior and validating application hardening efforts.
— Monitoring and observability through the ELK stack (Elasticsearch,
Logstash, Kibana) ensured real-time visibility, alerting, and traceability post-
deployment.
By integrating Jenkins as the orchestrator, with GitHub as SCM and Docker Hub
for image management, the pipeline supports modern infrastructure-as-code and
scalable CI/CD practices. Additionally, environment-specific branching strategies
and deployment approvals ensured controlled and traceable releases.
Ultimately, this project proves that security can be seamlessly embedded into De-
vOps workflows without slowing down delivery. It serves as both a blueprint and a
proof of concept for secure software delivery pipelines—aligning with industry best
practices and modern cybersecurity standards. Moving forward, this foundation
can be extended with more advanced policy enforcement, automated remediation,
and integration with cloud-native infrastructure platforms.

Annexes

A.1 docker-compose-waf.yml

services:
  tomcat:
    image: uwinchester/pfa_app
    container_name: tomcat-devsecops-waf
    ports:
      - "8080"
    networks:
      - devsecops-net

  nginx:
    image: owasp/modsecurity-crs:4.14.0-nginx-alpine-202505250105
    container_name: nginx-devsecops-waf
    depends_on:
      - tomcat
    volumes:
      - ./default.conf:/etc/nginx/templates/conf.d/default.conf.template
      - ./setup.conf:/etc/nginx/templates/modsecurity.d/setup.conf.template
      - ./modsecurity.conf:/etc/nginx/templates/conf.d/modsecurity.conf.template
      - modsec-logs:/tmp
    ports:
      - "80:80"
    networks:
      - devsecops-net

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
    volumes:
      - esdata:/usr/share/elasticsearch/data
    networks:
      - devsecops-net

  kibana:
    image: docker.elastic.co/kibana/kibana:8.13.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - devsecops-net

  logstash:
    image: docker.elastic.co/logstash/logstash:8.13.0
    container_name: logstash
    depends_on:
      - elasticsearch
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - modsec-logs:/tmp
    ports:
      - "5044:5044"
    networks:
      - devsecops-net

volumes:
  esdata:
  modsec-logs:

networks:
  devsecops-net:
    driver: bridge

Listing 2 – docker-compose file for deployment with monitoring integration
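Listing 2 mounts a ./logstash.conf file that is not reproduced in this report. A minimal sketch of what that pipeline could look like is given below; the /tmp/modsec_audit.log path and the JSON codec are assumptions consistent with the shared modsec-logs:/tmp volume, and ultimately depend on the SecAuditLog settings in the mounted modsecurity.conf:

```conf
input {
  # The modsec-logs volume is shared at /tmp by the nginx and logstash
  # services; the exact file name depends on SecAuditLog in modsecurity.conf.
  file {
    path => "/tmp/modsec_audit.log"
    start_position => "beginning"
    codec => "json"   # assumes a serial JSON audit log format
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "modsecurity-%{+YYYY.MM.dd}"
  }
}
```

With this pipeline in place, ModSecurity audit events become searchable in Kibana under the modsecurity-* index pattern.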

A.2 Jenkinsfile
pipeline {
    agent any

    environment {
        TRIVY_CACHE_DIR = '/var/trivy-cache'
        DOCKER_IMAGE = 'uwinchester/pfa_app'
        SEMGREP_APP_TOKEN = credentials('SEMGREP_APP_TOKEN')
    }

    stages {
        stage('Secret Scan with Talisman') {
            steps {
                sh '''
                    echo "[INFO] Cloning repo for Talisman scan"
                    rm -rf webapp talisman_report || true
                    git clone https://github.com/R4z1o/webapp.git webapp
                    cd webapp

                    echo "[INFO] Installing Talisman"
                    curl -L https://github.com/thoughtworks/talisman/releases/download/v1.37.0/talisman_linux_amd64 -o talisman
                    chmod +x talisman

                    ls
                    pwd
                    echo "[INFO] Running Talisman Scan"
                    ./talisman --scan || true

                    echo "[INFO] Converting JSON to HTML"
                    ~/talisman-to-html.sh \
                        "$(pwd)/talisman_report/talisman_reports/data/report.json" \
                        "$(pwd)/talisman_report/talisman_reports/data/talisman-report.html"

                    rm $(pwd)/talisman_report/talisman_reports/data/report.json

                    echo "[INFO] Verifying files exist:"
                    ls -la talisman_report/talisman_reports/data/
                '''
                archiveArtifacts allowEmptyArchive: true,
                    artifacts: 'webapp/talisman_report/talisman_reports/data/*',
                    fingerprint: true
            }
            post {
                always {
                    echo "Talisman reports archived. Check artifacts for report.json and talisman-report.html"
                }
            }
        }

        stage('OWASP-dependency-check') {
            steps {
                echo 'dependency check using OWASP'
                dependencyCheck additionalArguments: '', odcInstallation: 'dependency-check'
                dependencyCheckPublisher pattern: ''
                archiveArtifacts allowEmptyArchive: true, artifacts: 'dependency-check-report.xml',
                    fingerprint: true, followSymlinks: false, onlyIfSuccessful: true
                sh 'rm -rf dependency-check-report.xml*'
            }
        }

        stage('SCA using snyk') {
            steps {
                snykSecurity(
                    snykInstallation: 'snyk',
                    snykTokenId: '79230cba-8022-423d-80b0-1c625dc7b13a',
                    failOnIssues: false
                )
            }
        }

        stage('SonarQube Analysis') {
            steps {
                withSonarQubeEnv(installationName: 'sonarQube') {
                    sh "mvn clean verify sonar:sonar -Dsonar.projectKey=jenkinsPipeline -Dsonar.projectName='jenkinsPipeline' -DskipTests"
                }
            }
        }

        stage('Semgrep-Scan') {
            steps {
                timeout(time: 10, unit: 'MINUTES') {
                    sh '''
                        python3 -m venv venv
                        . venv/bin/activate
                        pip3 install semgrep
                        # semgrep ci
                    '''
                    // Note: remove the --disable-pro flag when we add more memory to the Jenkins server
                }
            }
        }

        stage('Generate SBOM') {
            steps {
                sh '''
                    syft scan dir:. --output cyclonedx-json=sbom.json
                '''
                archiveArtifacts allowEmptyArchive: true, artifacts: 'sbom*',
                    fingerprint: true, followSymlinks: false, onlyIfSuccessful: true
                sh 'rm -rf sbom*'
            }
        }

        stage('build') {
            steps {
                echo 'Building the application...'
                sh """
                    docker rmi ${DOCKER_IMAGE} || true
                    docker build -t ${DOCKER_IMAGE} .
                """
            }
        }

        stage('Container Security') {
            steps {
                script {
                    sh '''
                        find /var -name "trivy*" -exec rm -rf {} + 2>/dev/null || true
                        find /var -name "javadb*" -exec rm -rf {} + 2>/dev/null || true
                    '''
                    // Verify image exists locally before scanning
                    sh "docker inspect ${DOCKER_IMAGE}"

                    // Install Trivy if missing
                    sh '''
                        if ! command -v trivy &> /dev/null; then
                            curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
                        fi
                    '''

                    // Setup cache
                    sh "mkdir -p ${TRIVY_CACHE_DIR}"
                    sh "trivy --cache-dir ${TRIVY_CACHE_DIR} image --download-db-only"

                    // Run Trivy scan
                    sh """
                        mkdir -p ${TRIVY_CACHE_DIR}/tmp
                        TMPDIR=${TRIVY_CACHE_DIR}/tmp trivy \\
                            --cache-dir ${TRIVY_CACHE_DIR} image \\
                            --scanners vuln \\
                            --format json \\
                            --output trivy-report.json \\
                            --severity CRITICAL,HIGH \\
                            --ignore-unfixed \\
                            --skip-version-check \\
                            ${DOCKER_IMAGE}
                    """

                    archiveArtifacts 'trivy-report.json'

                    // Critical vulnerability check
                    def criticalFound = sh(
                        script: "grep -q 'CRITICAL' trivy-report.json",
                        returnStatus: true
                    ) == 0

                    if (criticalFound) {
                        error 'Critical vulnerabilities found in container image'
                    }
                }
            }
        }

        stage('push') {
            steps {
                echo 'Pushing the image to dockerhub...'
                withCredentials([usernamePassword(
                    credentialsId: 'dockerhub-creds',
                    usernameVariable: 'DOCKER_USER',
                    passwordVariable: 'DOCKER_PWD'
                )]) {
                    sh "docker login -u ${DOCKER_USER} -p ${DOCKER_PWD}"
                    sh "docker push ${DOCKER_IMAGE}"
                }
            }
        }

        stage('deployment for DAST') {
            steps {
                echo 'deploying for testing'
                sh 'docker-compose down --rmi local --volumes --remove-orphans || true'
                sh 'docker rm -f tomcat-devsecops'
                sh 'docker rm -f nginx-devsecops'
                sh "docker rm -f ${DOCKER_IMAGE}"
                sh 'docker compose -f docker-compose.yml up -d'
            }
        }

        stage('DAST') {
            steps {
                script {
                    sh 'mkdir -p zap-reports'
                    sh '''
                        docker pull zaproxy/zap-stable
                        docker run --rm \
                            -v "$WORKSPACE/zap-reports:/zap/wrk" \
                            -u $(id -u):$(id -g) \
                            -t zaproxy/zap-stable \
                            zap-full-scan.py \
                            -t http://104.248.252.219:8888/ \
                            -r zap-report.html || true
                    '''
                }
                echo '[INFO] ZAP scan completed. Check the report if the build fails.'
                archiveArtifacts 'zap-reports/zap-report.html'
            }
        }

        stage('Deployment Approval') {
            steps {
                script {
                    def userInput = input(
                        message: 'Do you approve the deployment?',
                        parameters: [
                            choice(name: 'Approval', choices: ['Yes', 'No'],
                                description: 'Select Yes to proceed or No to abort')
                        ]
                    )

                    if (userInput == 'No') {
                        error('Deployment was not approved.')
                    }
                }
            }
        }

        stage('Deployment with monitoring and alerting integrated') {
            steps {
                echo 'deployment'
                sh 'docker-compose down --rmi local --volumes --remove-orphans || true'
                sh 'docker rm -f tomcat-devsecops-waf'
                sh 'docker rm -f nginx-devsecops-waf'
                sh "docker rm -f ${DOCKER_IMAGE}"
                sh 'docker compose -f docker-compose-waf.yml up -d'
            }
        }
    }

    post {
        always {
            // Publish ZAP report
            publishHTML target: [
                allowMissing: true,
                reportDir: './zap-reports/',
                reportFiles: 'zap-report.html',
                reportName: 'zap-reports',
                keepAll: true
            ]

            // Cleanup Trivy cache
            sh "rm -rf ${TRIVY_CACHE_DIR} || true"
        }
    }
}

Listing 3 – Jenkinsfile

A.3 Secret Scan with Talisman.


stage('Secret Scan with Talisman') {
    steps {
        sh '''
            echo "[INFO] Cloning repo for Talisman scan"
            rm -rf webapp talisman_report || true
            git clone https://github.com/R4z1o/webapp.git webapp
            cd webapp

            echo "[INFO] Installing Talisman"
            curl -L https://github.com/thoughtworks/talisman/releases/download/v1.37.0/talisman_linux_amd64 -o talisman
            chmod +x talisman

            echo "[INFO] Running Talisman Scan"
            ./talisman --scan || true

            echo "[INFO] Converting JSON to HTML"
            /root/talisman-to-html.sh \
                "$(pwd)/talisman_report/talisman_reports/data/report.json" \
                "$(pwd)/talisman_report/talisman_reports/data/talisman-report.html"

            rm $(pwd)/talisman_report/talisman_reports/data/report.json
        '''
        archiveArtifacts allowEmptyArchive: true,
            artifacts: 'webapp/talisman_report/talisman_reports/data/*',
            fingerprint: true
    }
    post {
        always {
            echo "Talisman reports archived. Check artifacts for report.json and talisman-report.html"
        }
    }
}

A.4 Bash script JSON to HTML.


#!/bin/bash
# Convert Talisman JSON to clean HTML table
# Usage: ./talisman-to-html.sh input.json output.html

INPUT_FILE="$1"
OUTPUT_FILE="$2"

# Create HTML with clean table
cat << 'EOF' > "$OUTPUT_FILE"
<!DOCTYPE html>
<html>
<head>
<title>Talisman Scan Results</title>
<style>
    body { font-family: Arial, sans-serif; margin: 20px; }
    table { border-collapse: collapse; width: 100%; margin-top: 20px; }
    th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
    th { background-color: #f2f2f2; position: sticky; top: 0; }
    tr:nth-child(even) { background-color: #f9f9f9; }
    .high { color: #d9534f; font-weight: bold; }
    .medium { color: #f0ad4e; }
    .low { color: #5cb85c; }
    .filepath { font-family: monospace; word-break: break-all; }
</style>
</head>
<body>
<h1>Talisman Scan Results</h1>
<table>
<thead>
<tr>
    <th>File</th>
    <th>Type</th>
    <th>Severity</th>
    <th>Message</th>
    <th>Commits</th>
</tr>
</thead>
<tbody>
EOF

# Process JSON with proper escaping
jq -r '.results[] | .filename as $file | .failure_list[] |
    [$file, .type, .severity, .message, (.commits | length)] |
    map(@html) | @tsv' "$INPUT_FILE" | \
while IFS=$'\t' read -r file type severity message commits; do
    echo "<tr>"
    echo "<td class=\"filepath\">$file</td>"
    echo "<td>$type</td>"
    echo "<td class=\"$severity\">$severity</td>"
    echo "<td>$message</td>"
    echo "<td>$commits</td>"
    echo "</tr>"
done >> "$OUTPUT_FILE"

# Close HTML
cat << 'EOF' >> "$OUTPUT_FILE"
</tbody>
</table>
</body>
</html>
EOF

echo "Clean HTML report generated at $OUTPUT_FILE"

A.5 Dockerfile Breakdown


FROM maven:3.9.6-eclipse-temurin-17 AS build

WORKDIR /app

COPY pom.xml .

COPY src ./src

RUN mvn clean package

FROM amazoncorretto:21-alpine-jdk

RUN apk add --no-cache wget tar

RUN wget https://downloads.apache.org/tomcat/tomcat-10/v10.1.41/bin/apache-tomcat-10.1.41.tar.gz && \
    tar xvf apache-tomcat-10.1.41.tar.gz -C /opt/ && \
    rm apache-tomcat-10.1.41.tar.gz

EXPOSE 8080

COPY --from=build /app/target/WebApp.war /opt/apache-tomcat-10.1.41/webapps/

CMD ["/opt/apache-tomcat-10.1.41/bin/catalina.sh", "run"]

A.6 jenkinsfile container security scan


stage('Container Security Scanning') {
    steps {
        script {
            // Cleanup old scan files
            sh '''
                find /var -name "trivy*" -exec rm -rf {} + 2>/dev/null || true
                find /var -name "javadb*" -exec rm -rf {} + 2>/dev/null || true
            '''

            // Verify Docker image exists
            sh "docker inspect ${DOCKER_IMAGE}"

            // Install Trivy if missing
            sh '''
                if ! command -v trivy &> /dev/null; then
                    curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
                fi
            '''

            // Download latest vulnerability DB
            sh "mkdir -p ${TRIVY_CACHE_DIR}"
            sh "trivy --cache-dir ${TRIVY_CACHE_DIR} image --download-db-only"

            // Run Trivy scan (JSON report)
            sh """
                mkdir -p ${TRIVY_CACHE_DIR}/tmp
                TMPDIR=${TRIVY_CACHE_DIR}/tmp trivy \\
                    --cache-dir ${TRIVY_CACHE_DIR} image \\
                    --scanners vuln \\
                    --format json \\
                    --output trivy-report.json \\
                    --severity CRITICAL,HIGH \\
                    --ignore-unfixed \\
                    --skip-version-check \\
                    ${DOCKER_IMAGE}
            """

            // Archive report
            archiveArtifacts 'trivy-report.json'

            // Fail pipeline if CRITICAL issues found
            def criticalFound = sh(
                script: "grep -q 'CRITICAL' trivy-report.json",
                returnStatus: true
            ) == 0

            if (criticalFound) {
                error 'Critical vulnerabilities found in container image'
            }
        }
    }
}

A.7 XXE Vulnerabilities in XML Parser (FileService.java)
// File: FileService.java
SAXReader xmlReader = new SAXReader();
xmlReader.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

A.8 Spring Actuator Exposure (application.properties)
# File: application.properties
# Line: 2

server.port=8000

management.endpoints.web.exposure.include=*
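A hedged remediation sketch using standard Spring Boot properties (the values are chosen for illustration, not taken from the project repository) restricts the actuator to a safe allow-list:

```properties
# Expose only the endpoints that are actually needed
management.endpoints.web.exposure.include=health,info
# Optionally serve the actuator on a separate, firewalled port
management.server.port=9090
```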

A.9 OS Command Injection Example


// Command injection via unsanitized input
Process process = Runtime.getRuntime().exec(new String[]{"sh", "-c", "ping -c 1 " + domainName});
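The root cause is string concatenation into a shell command line. A hedged Python sketch shows the general remediation pattern: allow-list validation plus an argument vector with no shell involved (the hostname regex is an illustrative assumption, not the project's actual validator):

```python
import re
import subprocess

# Conservative hostname allow-list: dot-separated labels of
# letters, digits and hyphens, no leading/trailing hyphen.
HOSTNAME_RE = re.compile(
    r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)(\.(?!-)[A-Za-z0-9-]{1,63}(?<!-))*$"
)

def safe_ping_args(domain_name: str) -> list:
    """Build an argv for ping without ever invoking a shell."""
    if not HOSTNAME_RE.match(domain_name):
        raise ValueError(f"invalid hostname: {domain_name!r}")
    # As a separate argv element, the domain can never be parsed as shell syntax.
    return ["ping", "-c", "1", domain_name]

# Usage (the actual invocation is left to the caller):
#   subprocess.run(safe_ping_args("example.com"), check=True)
```

The Java equivalent is the same idea: validate the input, then pass it as its own array element instead of concatenating it into a `sh -c` string.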

A.10 Path Traversal / File Inclusion Example


// Potential path traversal vulnerability
public String readFile(String path) throws FileForbiddenFileException, FileReadException {
    if (!path.startsWith(ALLOWED_PREFIX)) {
        throw new FileForbiddenFileException("You are not allowed to read " + path);
    }
    try (BufferedReader br = new BufferedReader(new FileReader(path))) {
        ...
    } catch (IOException e) {
        throw new FileReadException(e.getMessage());
    }
}
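One caveat with the check above: a plain prefix test can be bypassed with ../ sequences, since /allowed/../etc/passwd still starts with the allowed prefix. A hedged Python sketch of the safer variant canonicalizes the path before comparing (the /allowed base directory is a hypothetical stand-in for ALLOWED_PREFIX):

```python
from pathlib import Path

# Hypothetical stand-in for the Java ALLOWED_PREFIX constant.
ALLOWED_PREFIX = Path("/allowed")

def is_safe_path(path: str) -> bool:
    """Canonicalize (collapsing ..) before comparing against the allowed base."""
    resolved = Path(path).resolve(strict=False)
    try:
        resolved.relative_to(ALLOWED_PREFIX)
        return True
    except ValueError:
        return False

print(is_safe_path("/allowed/reports/a.txt"))
print(is_safe_path("/allowed/../etc/passwd"))  # passes startswith, fails after resolve
```

In Java the same effect is achieved by comparing `new File(path).getCanonicalPath()` against the canonical form of the allowed base directory.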
