
ADMIN
Network & Security
ISSUE 78

Cover topics:
- Domain-Driven Design – Principles for programming a domain model
- Apache ShardingSphere
- Microsoft Power Apps – Low-code/no-code programming
- Event-Driven Ansible
- 3 Black Box Monitoring Solutions – Monitoror, Vigil, Statping-ng
- Velociraptor Incident Response
- Keeping Azure VMs Up to Date
- Knative – Serverless workloads for Kubernetes
- Visual Studio Code for the Web – Secure remote connectivity
- MACsec – Layer 2 link encryption
- Ripgrep – Accelerated terminal search

FREE DVD

WWW.ADMIN-MAGAZINE.COM
Welcome to ADMIN

Digital Forensics
Consider a new direction in system administration.

In the Welcome column, I write about jobs, careers, trends, and sometimes random but relevant topics. For this issue, I'm discussing a new direction in system administration that you might know as computer forensics, cyberforensics, or digital forensics.

Digital forensics is the discovery, recovery, investigation, and examination of data found in computer systems. Computer systems is a broad category that includes databases, network devices, and mobile devices. It may also include other devices (e.g., supervisory control and data acquisition (SCADA) instruments) that store, process, or use data. Although digital forensics isn't new, it can be a new direction for those who have traditionally held system administration jobs.

You might wonder why I'm discussing a security topic in a column focusing on system administration. I've mentioned before that security is everyone's job, and it's certainly true for system administrators; digital forensics is an extension of that role. The reality of the system administrator's role is that our job description is "Other duties as assigned" and little else. We do everything, and security is often the least offensive task that we have the pleasure to perform.

To illustrate how the roles overlap, assume that you suspect a system has been compromised. You begin collecting and comparing logs to find out when the breach occurred. Next, you search for compromised or new accounts. You search for open ports and check network data to see if information is being exfiltrated. You isolate systems and run various vulnerability and rootkit scans. You might even enlist the assistance of other digital forensic specialists to help locate backdoors, trojans, scripts, and changed files. You also change all your root and administrator passwords. Performing these and similar tasks is digital forensics.

Some sysadmins have a special talent for digital forensics, while others have no interest at all. I was shocked when one of my former colleagues told me to "have fun" doing my investigative work on a suspected breach and to let him know when I'd "had enough." To his surprise, I solved the issue. I uncovered an internal breach and traced it to the offending person.

In this instance, a set of maintenance scripts used a non-secure protocol to update code from a development system to multiple other staging and production systems. He couldn't be bothered to tunnel or otherwise secure passwords and data traversing the network. It looked like an outside attack from a compromised system because it traversed a firewall, a bastion host, and the DMZ. My colleague had to explain himself to our manager and the security team. He also had to provide extensive documentation and a plan to secure the data and its transfer.

Not all suspected breaches are quite this easy to unravel and resolve. Fortunately, the incident didn't require public disclosure because it only included data and information for our intranet, and no client data or information was involved. The problem still required mitigation because the process was a prototype for client production data and information. It would have been much worse in six months, when the process was moved to production.

This is what digital forensics is all about. If performing those tasks interests you, several online classes and university options can take your interests to the next level. All system administrators should be required to have digital forensics training. Even if you have not performed any forensics-related tasks, the training will help you protect your systems and assist investigators during the reconnaissance and recovery phases of an incident. If you love to solve puzzles, have an aptitude for detailed work, and enjoy devising strategies against an opponent, digital forensics might be what you're looking for in moving your sysadmin career forward.

The job of system administration is fun, but expanding your horizons and exploring something new and different doesn't hurt. You might find yourself on a new path to a great and rewarding career as a full-time digital forensics professional.
Ken Hess • ADMIN Senior Editor

Table of Contents

ADMIN Network & Security – Issue 78

Features
10 Introduction to DDD – Domain-driven design comprises principles for team-driven software development, from the design of entire software landscapes to the design of domain models, patterns, and code.
14 DDD and Agility – Central components of agility come into play with domain-driven design, which is oriented on business values, a deep understanding of the domain model, and information exchange by business experts and developers.
18 Domain-Driven Transformation – Bring out the hidden business treasure in legacy systems by identifying the parts of the source code that contain valuable business knowledge and refurbishing it in increments while mitigating risk.

Tools
22 Manage AD with PyAD – Delegate tedious routine tasks when automating Active Directory configuration and management to Python scripts with the PyAD library.
26 Apache ShardingSphere – Extend databases by adding a modular abstraction layer to support horizontal sharding and scalability.
32 Shared and Tiered Storage – The storage of job output for clusters requires an understanding of resource managers and a discussion of where data "should" or "could" go.
38 Microsoft Power Apps – If the IT staff is having trouble keeping up with the demand for custom applications, end users can pitch in with low-code programming tools.

Containers and Virtualization
42 Knative – Transfer serverless workloads to Kubernetes, with all the container components you need to build serverless applications and PaaS and FaaS services.
48 Dockerize a Legacy App – When you are ready to convert your legacy application to a containerized environment, Docker offers the tools for a smooth and efficient transition.

Security
56 MACsec – Encrypt defined links with high performance and secure Layer 2 protocols between client and switch or between two switches.
62 Visual Studio Code Server – You can gain a significant advantage for remote development by connecting to remote machines through secure tunnels, without the need for SSH.
70 Velociraptor – A powerful query language monitors and queries your IT infrastructure, combining and extending the functionality of GRR Rapid Response and OSQuery when seeking clues to cyberattacks and indicators of compromise.

Service
3 Welcome
6 News
97 Back Issues
98 Call for Papers
10, 14, 18 | Domain-Driven
Principles for programming a domain model: Business experts and developers collaborate to define domain models and business patterns that guide software development.

Highlights
48 Dockerize Your Apps – Docker Compose and other tools in the Docker toolset provide a safe, efficient, and versatile approach for migrating your existing applications to a container environment.
56 MACsec – Media Access Control Security, defined by IEEE 802.1X-2010 in combination with 802.1AE, encrypts in Layer 2 for cryptographic point-to-point security on wired networks (e.g., switches) with virtually no loss of speed.
72 Event-Driven Ansible – Agentless automation with a reactive extension that uses events to launch automations transforms Ansible's push-only architecture.

Management
72 Event-Driven Ansible – A powerful rules engine and a modular, open concept enable automation of target systems without the need for an agent.
78 Monitoring Servers and Services – Keep track of internal and external servers and services with black box monitoring by Monitoror, Vigil, and Statping-ng.

Nuts and Bolts
86 Updating Azure VMs – A number of methods allow you to keep the operating system of an Azure VM up to date, including the new Azure Update Management Center.
92 Ripgrep – The best features of tools like Grep, Ack, and Silver Searcher are combined when it comes to the use of search patterns in a terminal window.

On the DVD
Fedora Server 39 – Run server workloads on bare metal or virtual machines. Fedora describes its Server Edition as "a platform for developers and system integrators, providing an implementation of the latest server technology."
Fedora Server features:
- Runs on virtual machines and containers
- Enables a variety of server workloads
- The latest open source technologies are curated by the Fedora Community
- The remote server administration tool is ready to use on first boot
- Deployment of servers is quick, with all the tools you need to spin up your workloads

@adminmagazine | @adminmag | ADMIN magazine

News for Admins

Tech News
Red Hat Announces Ansible Lightspeed with IBM watsonx Code Assistant
Red Hat has released Ansible Lightspeed with IBM watsonx Code Assistant — a generative AI ser-
vice for Ansible operators and developers, aimed at helping organizations accelerate IT automation.
Ansible Lightspeed (https://www.redhat.com/en/technologies/management/ansible/ansible-lightspeed) “accepts
prompts entered by a user and then interacts with IBM watsonx foundation models to produce
code recommendations built on Ansible best practices,” Red Hat says.
The service also helps keep codebases updated. It “scans existing content and automatically
provides update recommendations that are ready to review, test, and apply, making it easier to
maintain quality and consistency across the development life cycle,” Red Hat says.

Dell APEX Cloud Platform for Red Hat OpenShift Announced


Dell has announced the availability of the Dell APEX Cloud Platform for Red Hat OpenShift
(https://www.dell.com/en-us/dt/apex/cloud-platforms/red-hat-openshift.htm), which it describes as “the
first fully integrated application delivery platform purpose-built for Red Hat OpenShift.”
The APEX Cloud Platform, which was jointly engineered with Red Hat, “combines Dell’s auto-
mation management software, PowerEdge servers, and software-defined storage with Red Hat’s
container orchestration platform in a single appliance,” according to the announcement
(https://www.dell.com/en-us/blog/dell-red-hat-and-the-future-of-containers/). It offers:

• Simplified multicloud deployment


• Accelerated application delivery
• Optimized workload placement

The collaboration between Dell and Red Hat also “helps ensure that customers have rapid access
to new patches, helping to mitigate security vulnerabilities,” Dell says.

NSA Offers Best Practices for OSS in Operational Technology Environments


Implementation and patching of open source software (OSS) in operational technology (OT)
(https://en.wikipedia.org/wiki/Operational_technology) environments “continues to be a challenge due
to safety concerns and the potential disruption of critical systems,” according to the NSA.
To promote better understanding and highlight best practices, the NSA, along with CISA and
other agencies, has released new guidance (https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/3552309/nsa-and-us-agencies-issue-best-practices-for-open-source-software-in-operationa/)
for securing these systems.
The fact sheet recommends "supporting OSS development and maintenance, patch management,
authorization and authentication policies, and establishing common frameworks." The guidance
"also encourages the adoption of 'secure-by-design' and 'secure-by-default' principles to decrease
cybersecurity risk in OT environments."


Civil Infrastructure Platform Adds New Super-Long-Term Linux Kernel


The Civil Infrastructure Platform (CIP) (https://www.cip-project.org/) has added the 6.1-based Linux
kernel series to its super-long-term stable (SLTS) kernel program, which means the project is
committed to maintaining the 6.1-cip kernel for a minimum of 10 years after its initial release.
Separate from the Linux kernel project, which recently announced that long-term support (LTS)
for Linux kernels would be reduced (https://www.fosslife.org/linux-long-term-support-being-cut-back) from
six to two years, the CIP clearly has a different mission. The SLTS program is part of CIP’s efforts
to establish “an open source base layer of industrial grade Linux to enable the use and implemen-
tation of software building blocks for civil infrastructure.”
Additionally, the announcement notes, “CIP kernels are maintained like regular long-term-stable
(LTS) kernels, and developers of the CIP kernel are also involved in LTS kernel review and testing.”
Other kernels in the program include 4.4-cip, 4.19-cip, and 5.10-cip.

HTTP/2 Protocol Exploited in Largest DDoS Attack Ever


Google, Cloudflare, and Amazon Web Services have revealed a new zero-day vulnerability
(https://nvd.nist.gov/vuln/detail/CVE-2023-44487) known as “HTTP/2 Rapid Reset.”
Attacks exploiting the vulnerability targeted cloud and Internet infrastructure providers and
peaked in August. “These attacks were significantly larger than any previously reported Layer 7 at-
tacks, with the largest attack surpassing 398 million requests per second,” Google says (https://cloud.
google.com/blog/products/identity-security/google-cloud-mitigated-largest-ddos-attack-peaking-above-398-million-rps).
The attack used a novel “Rapid Reset” technique leveraging the stream multiplexing feature of
the widely implemented HTTP/2 protocol (https://http2.github.io/).
See further analysis at Google Cloud (https://cloud.google.com/blog/products/identity-security/
how-it-works-the-novel-http2-rapid-reset-ddos-attack).

Docker Announces Three New Products for Secure App Delivery


Docker has announced three products aimed at secure app delivery: Docker Scout GA, Docker
Build, and Docker Debug.
According to the announcement, “the products combine the responsiveness and convenience of
local development with the on-demand resources, connectedness, and collaboration of the cloud.”
Docker Scout is now generally available, while the other products are available in public beta:

• Docker Scout — Provides relevant insights and integration to continuously evaluate container
images against defined policies, aligned with software supply chain best practices.
• Docker Build — Speeds up builds by as much as 39 times by taking advantage of large, on-
demand cloud-based servers and team-wide build caching.
• Docker Debug — Provides a language-independent, integrated toolbox for debugging local and
remote containerized apps.

Learn more at Docker (https://www.docker.com/blog/announcing-docker-scout-ga/).

CloudBees Updates Jenkins and Offers New DevSecOps Platform


CloudBees has announced major performance and scalability enhancements to its widely used
Jenkins CI/CD software, as well as a new DevSecOps solution based on Tekton.
The updates are part of the CloudBees CI (https://www.cloudbees.com/capabilities/continuous-integration)
enterprise version of Jenkins and include features such as workspace caching, pipeline explorer,
and high-availability mode, which aim “to reduce build times, speed up troubleshooting, enhance
controller efficiency, and maximize uptime,” the announcement says (https://www.cloudbees.com/blog/
biggest-update-for-jenkins-in-over-a-decade-with-cloudbees-ci).
The new cloud-native DevSecOps platform (https://www.cloudbees.com/products/saas-platform)
“uses a GitHub Actions style domain-specific language (DSL) and adds feature flagging,
security, compliance, pipeline orchestration, analytics and value stream management
(VSM) into a fully-managed single-tenant SaaS, multi-tenant SaaS or on-premise virtual
private cloud instance," according to the announcement (https://www.cloudbees.com/newsroom/cloudbees-announces-new-cloud-native-devsecops-platform).
Additionally, the platform puts the focus on the emerging role of platform engineering, which
“brings together multiple roles such as site reliability engineers (SREs), DevOps engineers, security
teams, product managers, and operations teams.”

Linkerd 2.14 Released with Improved Multi-Cluster Support


The latest release of the Linkerd (https://linkerd.io/2.14/overview/) service mesh for Kubernetes features
improved support for multi-cluster deployments on shared flat networks, full gateway API confor-
mance, and much more.
Shared flat network architecture is increasingly common in enterprise environments, according
to the announcement (https://www.cncf.io/blog/2023/09/18/announcing-linkerd-2-14-improved-enterprise-multi-
cluster-gateway-api-conformance-and-more/), and it “allows pods in different clusters to establish TCP
connections with each other.”
“Importantly, this new multi-cluster support retains a critical aspect to Linkerd’s design: inde-
pendence of clusters as a way of isolating security and failure domains. Each cluster runs its own
Linkerd control plane, and the failure of a single cluster cannot take down the service mesh on
other clusters,” the announcement says.

NIST Releases Draft of Cybersecurity Framework v2.0


The National Institute of Standards and Technology (NIST) has released a draft of the Cybersecu-
rity Framework (CSF) 2.0 (https://csrc.nist.gov/pubs/cswp/29/the-nist-cybersecurity-framework-20/ipd). The
framework was first released in 2014 to help organizations understand cybersecurity risk.
“The NIST Cybersecurity Framework 2.0 provides guidance to industry, government agen-
cies, and other organizations to reduce cybersecurity risks. It offers a taxonomy of high-level
cybersecurity outcomes that can be used by any organization — regardless of its size, sector,
or maturity — to better understand, assess, prioritize, and communicate its cybersecurity efforts,”
according to the framework abstract.
NIST is accepting public comment on the draft framework until Nov. 4, 2023, according to the
announcement (https://www.nist.gov/news-events/news/2023/08/nist-drafts-major-update-its-widely-used-cyber-
security-framework), but does not plan to release another draft. “A workshop planned for the fall will
be announced shortly and will serve as another opportunity for the public to provide feedback and
comments on the draft. The developers plan to publish the final version of CSF 2.0 in early 2024.”

CISA and MITRE Announce Open Source Caldera for OT


The Cybersecurity and Infrastructure Security Agency (CISA) and MITRE have announced Caldera
for OT, a cyberattack emulation platform developed specifically for operational technology (OT)
networks.
The project is an extension of MITRE Caldera (https://caldera.mitre.org/), which is aimed at reduc-
ing the amount of time and resources needed for routine cybersecurity testing.
According to the announcement (https://medium.com/@mitrecaldera/announcing-mitre-caldera-for-ot-
47c6f22a676d), Caldera for OT builds on that functionality, “offering 29 distinct OT abilities to the
hundreds of existing enterprise-focused abilities already included with Caldera." These new plugins
"enable practitioners to emulate adversary behavior across both enterprise and industrial
networks.”
The Caldera for OT plugins are free and open source and can be downloaded from the project’s
GitHub repository (https://github.com/mitre/caldera-ot).


The basics of domain-driven design

Bit by Bit
Domain-driven design addresses many aspects of software development, from the design of entire software
landscapes and the relationships between (sub)systems to the design of domain models, patterns, and code.
By Stefan Hofer

When Eric Evans wrote his book on domain-driven design (DDD) in 2003 [1], he primarily had enterprise software in mind, wherein project teams develop software for various departments. Since then, a vibrant community has been established around DDD that continues to expand the discipline. As a result, DDD has become widespread in the development of software products.

Domain Models

To build software that solves domain-specific problems, development teams condense their understanding of the domain's structures and rules into a domain model that exists in the minds of developers in the form of diagrams, conversations, text, and code. In DDD, the domain model forms the core of a software system. Of course, this core needs to be surrounded by technology (interfaces, persistence, communication, etc.) for the software to be useful.

Building models was not the brainchild of Evans. He drew on object-oriented analysis and object-oriented design and added two essential concepts that solve problems in the design of large systems. In tactical design (more on this later), Evans describes a pattern language (i.e., coherent patterns) that can be used to design domain models.

The larger the scope of a domain model, the more its inconsistencies become visible. The reasons lie in the domain, because many domain concepts can only be meaningfully defined within a specific context, and forcing them into an enterprise-wide model leads to "god classes" and cyclical dependencies – that is, large, cluttered monoliths that are difficult to evolve. Evans proposed developing context-dependent domain models that unambiguously represent concepts and behavior, which leads to functional modularization of the software, or strategic design.

Strategic DDD

Suppose you had to create software for a movie theater. How would you define a movie ticket? A movie ticket is both a unit of sales (with a price, sales tax, discount options, etc.) and an access token (with validity, validation capability, etc.). Representing all these properties and the complete handling of a movie theater ticket in a single model leads to the problems described above, commonly referred to in DDD as the "big ball of mud" [2].

If a domain is too large to be understood and defined as a whole, then it is also too large to model completely in software. In this case, it is better to represent self-contained parts of a domain in separate, smaller models, which allows the software and the development team to grow, limiting the cognitive load for all stakeholders, who no longer need to understand a sprawling overall model, but only manageable models within clearly defined contexts (aka bounded contexts). The goal of strategic design is to design these bounded contexts and their relationships to each other.

To begin, you need to break down the large and complex domain into subdomains. The result of this analysis is the linguistic boundaries within which business solutions can be modeled in a consistent and self-contained manner. In the simplest case, a subdomain becomes a bounded context. Considerations such as team size, technological complexity, strict requirements in terms of user experience, scalability, and more must be taken into account when designing bounded contexts. Designing bounded contexts therefore often means making compromises.
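To make the "big ball of mud" problem tangible, here is a minimal, invented Java sketch – not taken from the article – of what an all-purpose ticket class tends to look like when sales and admission concerns are forced into a single model:

import java.math.BigDecimal;
import java.time.LocalDateTime;

// One class serves every department: pricing, discounts, admission, and whatever
// comes next. Each new requirement adds fields and methods, and the model drifts
// toward the "big ball of mud" described above.
public class MovieTicket {
    // Sales concerns
    private BigDecimal price;
    private BigDecimal salesTax;
    private String discountCode;

    // Admission concerns
    private LocalDateTime validFrom;
    private LocalDateTime validUntil;
    private boolean alreadyScanned;

    public BigDecimal grossPrice() {
        return price.add(salesTax);
    }

    public boolean admit(LocalDateTime now) {
        // Admission logic lives right next to pricing logic in the same class.
        if (alreadyScanned || now.isBefore(validFrom) || now.isAfter(validUntil)) {
            return false;
        }
        alreadyScanned = true;
        return true;
    }
}

Every department that touches such a class pulls it in a different direction – exactly the coupling that splitting the model along bounded contexts is meant to avoid.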


Some subdomains are more critical to business success than others. In DDD, three types of subdomains are distinguished:

- Core Subdomains (typically simply known as core domains) distinguish a company and are often technically complex. Software for core domains can be developed in-house to provide a competitive advantage.
- Supporting Subdomains do not represent the technical core of a company, but they are necessary for the core domains' tasks. The technical models of supporting domains are typically specific to a company and rarely to an industry. Software for supporting subdomains can be built by a service provider.
- Generic Subdomains do not provide a competitive advantage and are not company specific. One classic example is payroll accounting, which is essential for a company, but does not, say, let movie theater operators distinguish themselves from their competitors. As an enterprise, you will want to use off-the-shelf solutions for generic subdomains rather than building the software yourself.

Bounded contexts can be implemented as modules of a monolith or as (micro-)services. They not only draw boundaries between the models, but also (1) between the teams' responsibilities (a team implements at least one bounded context but can implement more than one); (2) between the requirements (e.g., as a separate backlog for each bounded context); (3) in the source code (e.g., as a package tree or namespace, possibly even in the form of separate code repositories in a version control system); and (4) in the data storage. The last two points in particular are sometimes questioned: Don't they lead to duplicate code and redundant data? In fact, duplication is not a major worry because each bounded context holds only the data relevant to its specific model (Figure 1).

Figure 1: Same name, different model: The MovieTicket class in different contexts of a movie theater.

When you draw boundaries, you have to make sure business processes work across bounded contexts. Bounded contexts therefore need to be integrated, and teams need to consult on the required interfaces. In the DDD universe, this process is known as context mapping, and a number of organizational and technical patterns can be used to facilitate cooperation between teams. For example, a customer-supplier relationship defines a directed dependency between two teams: In Figure 2, Team A (supplier) provides the functionality on which Team B (customer) builds. Team B, as the customer, can typically impose requirements on Team A.

Context maps like that in Figure 2 visualize bounded contexts, their dependencies, and their organizational and technical integration. Bounded contexts need to be able to perform their tasks as independently as possible (i.e., without calling other bounded contexts). They are then very different from the data-centric services of a service-oriented architecture (SOA). Instead of consisting of bounded contexts such as ticket purchase and admission, an SOA consists of a ticket service, a movie service, and so on. In an SOA, multiple services often need to be called to complete a business task.

Figure 2: A simple example of a context map for a movie theater.

Therefore, strategic design is about more than software modularization, technical integration, and data flows. The design of bounded contexts also involves model-level dependencies and relationships between the teams developing the bounded contexts. Team organization in particular has been one of the most common reasons companies have looked at DDD for a number of years.
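To show in code what Figure 1 sketches – the same name, MovieTicket, backed by a different model in each bounded context – here is a hypothetical Java sketch. The package and class details are invented for illustration and are not taken from the article; each context keeps only the data and behavior relevant to its own model.

// File 1: cinema/sales/MovieTicket.java – in the "ticket sales" context,
// a ticket is a unit of sales with a price and sales tax.
package cinema.sales;

import java.math.BigDecimal;

public class MovieTicket {
    private final BigDecimal netPrice;
    private final BigDecimal salesTaxRate;

    public MovieTicket(BigDecimal netPrice, BigDecimal salesTaxRate) {
        this.netPrice = netPrice;
        this.salesTaxRate = salesTaxRate;
    }

    public BigDecimal grossPrice() {
        return netPrice.add(netPrice.multiply(salesTaxRate));
    }
}

// File 2: cinema/admission/MovieTicket.java – in the "admission" context,
// the same term means an access token with a validity period, usable once.
package cinema.admission;

import java.time.LocalDateTime;

public class MovieTicket {
    private final LocalDateTime validFrom;
    private final LocalDateTime validUntil;
    private boolean validated;

    public MovieTicket(LocalDateTime validFrom, LocalDateTime validUntil) {
        this.validFrom = validFrom;
        this.validUntil = validUntil;
    }

    public boolean validate(LocalDateTime now) {
        if (validated || now.isBefore(validFrom) || now.isAfter(validUntil)) {
            return false;
        }
        validated = true;
        return true;
    }
}

Neither class needs the other's fields, and a change to the pricing rules can no longer break admission. The shared name is deliberate; what matters is that each bounded context defines the term for itself.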


Strategic DDD also provides ideas for one of the most exciting approaches to organizing development teams: team topologies [3] – a model to describe teams and their interactions.

Tactical DDD

What does the architecture of a bounded context look like? When Evans published his book, layered architecture (including a domain layer) was the state of the art. Since then, other architectural styles have emerged that suit DDD even better, most notably hexagonal architecture with a domain core [4].

DDD supports the design of the domain layer or the domain core with a pattern language that defines the building blocks and permitted usage relationships (Table 1). The pattern language does not have to be used for every bounded context, but the more complex the domain expertise, the more worthwhile it becomes.

Table 1: Building Blocks
Entity – Domain-oriented concepts that have a life cycle and identity.
Value objects – A domain-oriented concept with value semantics (i.e., without identity); the properties of an entity are expressed as value objects.
Aggregate – Related entities and value objects combined to create a consistent whole.
Repository – A domain-oriented interface that stores and accesses aggregates (i.e., encapsulates persistence).
Domain service – Maps business processes and domain behavior that cannot otherwise be assigned to value objects, entities, and aggregates.
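As a rough idea of how these building blocks can look in code, here is a hypothetical Java sketch (the names are invented for illustration and are not from the article): a value object, an entity whose properties are value objects, and a repository interface that encapsulates persistence.

import java.math.BigDecimal;
import java.util.Currency;
import java.util.Optional;
import java.util.UUID;

// Value object: value semantics, no identity of its own – two equal amounts are interchangeable.
record Money(BigDecimal amount, Currency currency) {}

// Entity: has an identity (ticketId) and a life cycle; its properties are value objects.
class Ticket {
    private final UUID ticketId;
    private Money price;

    Ticket(UUID ticketId, Money price) {
        this.ticketId = ticketId;
        this.price = price;
    }

    UUID id() {
        return ticketId;
    }

    void reprice(Money newPrice) {
        this.price = newPrice;
    }
}

// Repository: a domain-oriented interface that stores and retrieves aggregates,
// thereby hiding the persistence technology from the domain core.
interface TicketRepository {
    void save(Ticket ticket);
    Optional<Ticket> findById(UUID ticketId);
}

In an example this small, the Ticket entity is also the aggregate root; a repository implementation (database, in-memory store, etc.) would live outside the domain core.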
How the building blocks are implemented in the code depends on the programming language and programming paradigm. In the early days of DDD, object-oriented programming with Java dominated publications on tactical design. In the meantime, DDD has arrived in many languages. With the advent of languages such as Kotlin and F#, functional programming has gained widespread appeal in DDD. Many a domain model can be elegantly implemented with functional programming.

Technology openness also applies to the persistence concept. The classic approach is to store the state of an aggregate. Event sourcing [5], on the other hand, stores every change to an aggregate in the form of an event. The current state results from the sum of changes, just as the balance of a bank account results from deposits and withdrawals. DDD has been a major influence in the development of event sourcing. Conversely, event sourcing introduced a concept that entered both domain analysis and tactical design as another design pattern.
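The bank account analogy takes only a few lines of code. The following sketch is invented for illustration and written in modern Java (records and sealed interfaces); it derives the current state of an account purely from its recorded events:

import java.math.BigDecimal;
import java.util.List;

// Every change to the account is recorded as an immutable domain event.
sealed interface AccountEvent permits Deposited, Withdrawn {}
record Deposited(BigDecimal amount) implements AccountEvent {}
record Withdrawn(BigDecimal amount) implements AccountEvent {}

class Account {
    private BigDecimal balance = BigDecimal.ZERO;

    // The current state is never stored; it is derived by replaying the event history.
    static Account replay(List<AccountEvent> history) {
        Account account = new Account();
        history.forEach(account::apply);
        return account;
    }

    private void apply(AccountEvent event) {
        if (event instanceof Deposited deposit) {
            balance = balance.add(deposit.amount());
        } else if (event instanceof Withdrawn withdrawal) {
            balance = balance.subtract(withdrawal.amount());
        }
    }

    BigDecimal balance() {
        return balance;
    }
}

Replaying a history containing Deposited(100) and Withdrawn(30) yields a balance of 70 – the state really is just the sum of its changes.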


Domain Language

You need a domain language to design and implement domain models. As you know from strategic design, DDD has no such thing as a single definitive model and therefore no single domain language. Like any natural language, a domain language is characterized by dialects; it is ambiguous, context dependent, and subject to constant change.

However, technical models can only be expressed accurately with precise language and codified in software. In DDD, this precise language is known as the ubiquitous language. It is developed for each bounded context and based on a context-dependent subset of the domain language. The terms of the ubiquitous language are defined with sufficient precision (e.g., in a glossary) to describe requirements, to design the domain model of a bounded context, and to name business concepts in the code (e.g., a type of movie ticket).

In fact, this precision makes such a language ubiquitous. It is used in conversations, texts, diagrams, and code; prevents a conceptual break between domain and technical models (Figure 3); and facilitates communications with domain experts.

Figure 3: A class diagram with (left) and without (right) taking ubiquitous language into consideration.

A ubiquitous language is not simply the result of analysis and is not an upfront design. It emerges from discussions between the subject matter experts and the development teams. It evolves over time and then prompts refactoring in the code.

Ubiquitous language and domain models are not a one-way street from business to IT. Software development can generate new business ideas. To stick to the movie theater example, during the development of an online ticketing service, the notion of a recommendation based on previous purchase behavior can be fed back into the ubiquitous language of the bounded context.

Collaborative Modeling

How do you find good context boundaries and develop ubiquitous languages and domain models? Evans cites interviews, discussing specific scenarios, prototype implementations, and short feedback cycles. Beyond that, however, he provides aspiring DDD practitioners very few tools of the trade. This methodological gap for the analysis of domains and the design of application-oriented software was filled by others. Popular methods include event storming [6], domain storytelling [7], or example mapping [8]. These methods have in common the bringing together of development teams and subject matter experts who model collaboratively, which is why these workshop formats are grouped together under the term "collaborative modeling." The collaborative modeling toolbox is one of the most important additions to DDD, and the community is developing it intensively. Additionally, collaborative modeling can also be useful independent of DDD (e.g., when discovering requirements).

What DDD Is (Not)

DDD has become far more important in the past 20 years. However, widespread use is also accompanied by misunderstandings and exaggerated expectations, which should be cleared up now.

DDD is neither a framework, nor an architectural style, nor a method, because there are no rules on how to apply it. It is also not a waterfall method in which models are designed up front (see the "DDD and Agility" article by Eberhard Wolff in this issue). It is by no means necessary to use the technology throughout from the strategic to the tactical aspects. DDD doesn't force you to build microservices, nor does it conjure away complexity; complex subject matter remains complex even with DDD.

What DDD has at its core, on the other hand, is a set of principles [9]. To design software for complex subject matter, development teams (developers, testers, analysts, etc.) need to build a deep, shared understanding of the application domain. In the process, they are guided by subject matter experts. This understanding grows out of the language of the domain, which must be formalized in a joint process in a coordinated and unambiguous manner to create a ubiquitous language.

Understanding is expressed in a model shared by business experts and developers that describes the problem space (as opposed to the solution space). The model must explicitly express the essential complexity of the domain. Complex subject matter cannot be efficiently expressed through a single universal model and language, which makes it essential to break the subject matter down into bounded contexts. The model, language, and code need to evolve as the understanding of the domain grows. DDD is not necessarily applied everywhere, in the spirit of pragmatism, but where it will have the greatest effect.

I hope that this article has helped to communicate a clear idea of DDD. If you want to delve deeper, see the "Recommended Reading" box for my recommendations on the subject.

Recommended Reading

Domain-Driven Design Distilled [10] by Vaughn Vernon provides a good overview of the concepts of DDD. For a more comprehensive summary and advice for practical use, see Vlad Khononov's Learning Domain-Driven Design [11]. Although a decade has passed since Vaughn Vernon wrote Implementing Domain-Driven Design [12], it is still considered the reference work for anyone who wants to apply tactical design down to the code level. The author is currently working on a new version in the form of two books: The already published Strategic Monoliths and Microservices [13] addresses DDD from a product innovation and architecture perspective, and the complementary book on implementing strategic monoliths and microservices is due to be published in 2024.

Info

[1] Evans, Eric. Domain-Driven Design: Tackling Complexity in the Heart of Software. Pearson, 2003: [https://www.pearson.de/domain-driven-design-tackling-complexity-in-the-heart-of-software-9780321125217]
[2] Big ball of mud: [http://www.laputan.org/mud/mud.html]
[3] Team topologies: [https://teamtopologies.com]
[4] Hexagonal architecture: [https://en.wikipedia.org/wiki/Hexagonal_architecture_(software)]
[5] "Event Sourcing" by Martin Fowler: [https://martinfowler.com/eaaDev/EventSourcing.html]
[6] Brandolini, Alberto. EventStorming. Leanpub, last updated August 26, 2021: [https://leanpub.com/introducing_eventstorming]
[7] Domain Storytelling: [https://domainstorytelling.org/]
[8] "Example Mapping" by Matt Wynne: [https://cucumber.io/blog/bdd/example-mapping-introduction]
[9] "What is Domain-Driven Design (DDD)" by Mathias Verraes: [https://verraes.net/2021/09/what-is-domain-driven-design-ddd]
[10] Vernon, Vaughn. Domain-Driven Design Distilled. Addison-Wesley Professional, 2017: [https://www.oreilly.com/library/view/domain-driven-design-distilled/9780134593449/] (video)
[11] Khononov, Vlad. Learning Domain-Driven Design. O'Reilly Media, 2021: [https://www.oreilly.com/library/view/learning-domain-driven-design/9781098100124/]
[12] Vernon, Vaughn. Implementing Domain-Driven Design. Addison-Wesley Professional, 2013: [https://www.oreilly.com/library/view/implementing-domain-driven-design/9780133039900/]
[13] Vernon, Vaughn, and Tomasz Jaskula. Strategic Monoliths and Microservices: Driving Innovation Using Purposeful Architecture. Addison-Wesley Professional, 2021: [https://www.oreilly.com/library/view/strategic-monoliths-and/9780137355600/]

Author

Stefan Hofer, a consultant and trainer, helps customers clarify requirements and apply domain-driven design. He has been working at WPS – Workplace Solutions GmbH (Hamburg and Berlin) since 2005. As one of the minds behind Domain Storytelling [7], he puts the language of subject matter experts at the center of requirements discovery.


Domain-driven design and agile development

Pulling Together
At first glance, the domain-driven design software architecture approach and the agile process model seem to
cover different areas of software development. In fact, they do more than generate synergies; in some cases,
they even aim for the same targets. By Eberhard Wolff

Domain-driven design (DDD) is a software development approach that develops a domain model – an abstraction of the behavior and data of a system – and refers to the entire design of the software, from the code level to the architecture, and the interaction of the development teams involved. For a consistent domain orientation to work, everybody involved needs to exchange information, especially technical experts and developers.

Ultimately, domain strategies can only form the basis of development if all stakeholders communicate them clearly; otherwise, only the subject matter experts have the required detailed knowledge of the domain – not the developers. This point is exactly where the first tie to agility can be found. One of the 12 principles of the agile software development manifesto includes: "Business people and developers must work together daily throughout the project" [1]. This arrangement is the only way to implement a domain-driven design.

Practice Instead of Theory

Initially, the DDD movement mainly encompassed ideas on structuring code or software systems. Discussing these with subject matter experts is difficult because they often lack the technical background to do so. Concepts like classes and modules are simply not general knowledge. However, it cannot be a prerequisite for joint work to first train subject matter experts in this area – the bar is simply too high. DDD, however, now includes a host of collaborative modeling techniques. The objective is to strengthen the understanding of the domain through joint work on artifacts that are not used directly to structure the software but serve solely to explain the domain in more detail.

Event storming is a well-known example of collaborative modeling, wherein everyone involved in the project uses sticky notes to map out complex business processes [2]. Each sticky note contains an event from the domain (e.g., Order received) that clarifies the process flow. In practice, the method even works for more complex processes; you just need enough space for the sticky notes.

Because the idea is not particularly complicated at this level, everyone involved should be able to write a couple of sticky notes. Later, more information can be added on additional sticky notes of different colors. Another technique with the same objective – visualizing the flow of a business process – is known as domain storytelling [3]. In this way, DDD adds concrete techniques to the agile idea of collaboration between subject matter experts and developers.
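The events collected on sticky notes during event storming are deliberately not code. When a team later carries them over into its model, however, each note often maps onto an explicit event type. A hypothetical Java sketch (the event names beyond "Order received" are invented, not taken from the article):

import java.time.Instant;

// Each sticky note names something that happened in the domain, in the past tense.
// In code, that can become one record per event.
sealed interface DomainEvent permits OrderReceived, PaymentConfirmed, OrderShipped {}

record OrderReceived(String orderId, Instant occurredAt) implements DomainEvent {}
record PaymentConfirmed(String orderId, Instant occurredAt) implements DomainEvent {}
record OrderShipped(String orderId, Instant occurredAt) implements DomainEvent {}

Reading the type names aloud reproduces the process flow that the workshop made visible: an order is received, the payment is confirmed, the order is shipped.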


Forced Iterations

DDD aims to establish a model of the domain in executable code. The logic in the code reflects the facts (e.g., calculating the cost of a delivery or its current state). In doing so, DDD assumes that despite all efforts, this model will never be perfect, as Eric Evans wrote in 2003 in one of the first books about DDD [4].

At its core, the teams' role in DDD is to understand the subject matter (knowledge crunching), which is underpinned by a learning process that takes time. Accordingly, the model in the code exclusively represents the developers' current understanding, with all the errors and inaccuracies that entails. The challenge in software development is not programming the code, but understanding what to program.

The model in the code cannot ever be perfect because the subject matter changes. If business processes change, you need to adapt your software. Conversely, introducing software influences the facts. If you engage with and reflect on the facts as you model them, you are likely to identify an optimization potential that could be lost in the daily grind. To take advantage of this potential (e.g., automation), you need to change both the business facts and the software.

If the business facts change during development, you respond to these changes with a new iteration (i.e., a new version) of the software. To ensure that everyone learns, you need to make the software available to users after each iteration. Developers discover new details of the business facts, and subject matter experts discover new optimization potentials. Therefore, one of the core elements of agile software development is developing software iteratively and incrementally. These conditions are required for DDD and knowledge crunching to work.

Prioritization

Relationships between agile values and DDD can be found in other areas, too. For example, the principles of the Agile Manifesto state that the highest priority is "to satisfy the customer through early and continuous delivery of valuable software" [1]. Typical agile processes comply with this principle by having domain experts prioritize features, which allows the principle to be implemented without affecting the structure of the software or the code.

Domain-driven design complements this with the core domain concept, which refers to the area of a system that implements the motivation of the project and sets it apart from generic subdomains that do not allow for differentiation. A core domain can be a product configurator, which gives the company a unique selling point compared with its competitors. An example of a generic subdomain could be invoicing, which, although equally important (without invoices you have no revenue), does not further the cause of differentiation.

Core domains and generic subdomains break down the value of the software in the structure into more and less valuable parts of the software (Figure 1). The distinction between a core domain and a generic subdomain is based on a simple observation. Obviously, not all parts of a software system have the same quality, on which basis you have only two courses of action: Either you leave quality distribution to chance, or you actively control it. If you decide to take control, you need to know which parts need to be particularly good, which is exactly where the core domain comes into play: You accordingly pay attention to the highest quality and create a competitive advantage in the process.

Figure 1: The product configurator core domain and the generic accounting subdomain are given different priorities.

Strictly speaking, quality in this context means maintainability and adaptability. If a company differentiates itself from its competitors through a product configurator, it needs to be easy to use and adapt. Invoicing, on the other hand, requires other qualities, such as accuracy. A quality problem in invoicing would be unpleasant, but in the case of the product configurator, it turns into a genuine competitive disadvantage because of its importance.

With concrete patterns like core domains, DDD helps concretize the abstract, agile concept of "valuable" software at the level of the software's structure. Accordingly, software can be described as valuable if it achieves high quality in the core domain.

DDD and Teams

Agile approaches rely on cross-functional teams. In teams, all roles gather to implement a business-relevant feature. The team will include developers with different orientations, such as at the front end or the back end. Other roles can also exist, such as user experience (UX) experts or operations specialists.

DDD comes to similar conclusions from different directions. It divides a software system into bounded contexts, behind which are business modules that integrate a specific, separate business functionality (e.g., invoicing or taking orders). These modules are significant in terms of analysis, as well, because specific technical terms apply in a bounded context. Accounting includes terms such as "tax" and "debt collection."

At first, it seems that bounded contexts simply define the structure of the software. But appearances are deceptive: DDD gives them a role in the project's organization. Ideally, one team will take care of one bounded context.


Because the team is responsible for the specific bounded context and therefore also for the domain-oriented functionality implemented in it, the team can autonomously make certain domain-oriented changes.

Cross-functional teams are based on the same idea. You would build a team such that it can implement business features on its own, if possible. Agility focuses on the composition of the team, and ideally it should reflect all roles. DDD, on the other hand, focuses on the division of the software, which needs to proceed such that each team can implement independent functionality. As soon as you combine the two concepts, cross-functional teams emerge that work on a bounded context and cover the business facts in full (Figure 2).

Figure 2: A cross-functional team works on a bounded context.

Such teams comprise five to eight people who develop software together. However, systems often turn out to be so large that a single team cannot handle the work involved, so you will end up coordinating several teams. In this case, DDD provides for relationships that result from bounded contexts that you can assign to teams (Figure 3).

When a team needs help from another team to implement features in its own bounded context, it enters into a customer-supplier relationship. As the customer, the team requests functions from the other team, which then assumes the supplier role. Formulated in the context of this example, one team takes care of invoicing and needs support from the team that takes orders. Without an interface that provides information about the order, it is almost impossible to write an invoice. The customer team communicates the necessary features, and the two teams then negotiate the necessary tasks and schedule for delivery.

In the conformist alternative, the team cannot request anything but adapts to what is offered. This situation might occur if the other team is looking after legacy software that is no longer maintainable and therefore simply cannot meet specific requirements. If the order-taking software cannot be changed, the invoicing team will find itself in a conformist role, with the corresponding implications for the development of the software.

Figure 3: The two types of relationships between teams in DDD – customer-supplier and conformist – follow different rules.

Strictly speaking, these relationships exist at the bounded context level, which means that as soon as another team takes responsibility for a bounded context, it also has the associated role (e.g., customer or conformist).

Once again, DDD and agility complement each other. Autonomous teams that can meet technical requirements as independently as possible are supported by patterns because they can now request help from other teams. These patterns rely on a decentralized organization. After defining a customer-supplier relationship, teams can interact in line with that relationship. Central coordination, as envisaged by some approaches to scaling agile methods, is no longer needed in this case.

Team topologies [5] describe further relationships between teams. They extend these ideas to include constructs such as platform teams, which are not responsible for a technical part of the system such as a bounded context, but instead offer a platform that provides technical features on which other teams can build.

Humans, Not Robots

All of these patterns and practices in both DDD and agile development have an effect on the formal organization. Accordingly, they would be reflected in an organizational chart, but people don't always follow organizational charts. Just because it says somewhere that one team should help another team doesn't mean that employees will actually behave that way. They all know ways to get around reorganization or instructions from management.

When this situation transpires, an important idea from the Agile Manifesto takes hold: Individuals and interactions play a more important role than processes and tools. This statement can be interpreted as meaning that formal collaborations, such as those established by DDD or team topologies, do not matter as much as the actual interactions of individuals. The work of many agile coaches or scrum masters focuses on improving these interactions and interpersonal relationships because both are critical to the success of agile implementation.

The term "sociotechnical system" has become established in the field of DDD and means not only considering the software but also the organization that develops the software system. Experience shows that problems in projects mainly occur in the interaction between the organization and the software. Additionally, organizations and people react very differently to measures such as reorganization, whereas software is essentially deterministic and can be implemented by an engineering approach.

Fact and Fiction

Therefore, DDD and agile development face a very similar problem: Ideas only appear to be realized. The term "cargo cult" is used in this context. Cargo cults manifested among Pacific Islanders who watched American planes carrying goods to the islands during World War II. However, they did not understand the context, so after the Americans left the islands and more goods failed to arrive, the islanders built runways or headphones to mimic the Americans' behavior – in the hope of receiving deliveries.

In the case of cargo cult agility, you can hold all scrum meetings without following the Agile Manifesto and the values defined therein, including, among other things, generating business value, close cooperation between subject matter experts and developers, and process optimization. Thanks to DDD, you can use the techniques (i.e., decompose a system into bounded contexts) without aligning the architecture with the business value or genuinely understanding and optimally supporting the business meaning of the system.

The risk of implementing DDD or agile development in this way exists because these superficial activities are easy to review and easy to implement compared with agile values. Working on the orientation of the project, the architecture, or the values requires far more effort. Additionally, you will find it much more difficult to identify whether these values or the desired alignment has been achieved.

Collaborative modeling is a practical technique from DDD and a good example of the different levels. At first glance, it is simply a matter of creating an artifact; for example, event storming yields an overview of all the events in a given business context. As a concrete, tangible effect, the end result is an artifact that possesses a specific quality.

At the same time, when people interact with each other in the event storming process, the tensions or alliances that exist in the organization are revealed, as well as which people have particularly detailed knowledge in which areas. Moreover, participants can practice collaborating on a problem during an event-storming session.

Conclusions: Only Together

To see how close the relationship between agility and DDD is, it's worth asking whether you can even realize DDD without agility. DDD requires that the project is oriented on business values and a precise understanding of what the project is really trying to achieve within the business. However, this process only works through close collaboration, and that requires agility. Similarly, as already mentioned, adapting the model implemented in the software as the basis of DDD knowledge crunching is something that can only be achieved through iterative and incremental development. Accordingly, you need to implement the basic cornerstones of agility to operate DDD successfully.

Now you are thinking that maybe you can implement agility without DDD. Ultimately, DDD seems to be more technical at its core and more focused on architecture. In fact, it is also possible to develop highly technical systems in an agile manner (e.g., a machine control system). DDD makes little sense in such an area because it focuses on systems that support business processes. For such systems, however, an agile process will likely lead to the use of at least basic elements of DDD.

DDD and agility are two essential mechanisms of modern software development. The core of agility is values from the Agile Manifesto and an approach defined in iterations. DDD is about mapping the domain and the subject matter particularly well and therefore operates more at the level of development and architecture, whereas agility tends to describe a process model. However, the two are more or less mutually dependent if you want to implement them effectively. People are ultimately at the center of both strategies, and the question is how they can develop in an agile and domain-oriented manner. In areas such as sociotechnical systems, the two approaches come together to solve this central problem of software development.

If you want to delve further into the subject, you can check out a video online [6] to learn about possible challenges when implementing DDD.

Info

[1] Agile Manifesto: [https://agilemanifesto.org/principles.html]
[2] Event storming: [https://www.eventstorming.com]
[3] Domain storytelling: [https://domainstorytelling.org]
[4] Evans, Eric. Domain-Driven Design: Tackling Complexity in the Heart of Software. O'Reilly, 2003: [https://www.oreilly.com/library/view/domain-driven-design-tackling/0321125215/]
[5] Team topologies: [https://teamtopologies.com]
[6] DDD Europe 2019: [https://www.youtube.com/watch?v=eKIMpCF-cqU]
[7] Eberhard Wolff homepage: [https://ewolff.com]

Author

Eberhard Wolff is Head of Architecture at SWAGLab GmbH (Hamburg, Germany) and has been working as a software architect and consultant for more than 15 years. He is the author of numerous articles and books, is a keynote speaker at international conferences, and offers a weekly live-stream on the topic of software architecture [7]. His technological focus is on modern architecture and development approaches such as continuous delivery, DevOps, and microservices.


Revamp your software architectures with Domain-Driven Transformation

Panacea
Domain-Driven transformation can refurbish a legacy system in increments while mitigating risk. By Carola Lilienthal and Henning Schwentner

When greenfield teams start a software project, it's all fun and easy. The developers respond to new requirements at lightning speed, and the users are thrilled. Development proceeds in leaps and bounds. However, this picture changes over the lifetime of the system as the complexity of the software inevitably increases, making the system more prone to error, slowing progress and affecting maintainability.

When worst comes to worst, even the smallest change can take months to reach production. What was initially a flourishing green meadow has turned into a brownfield. "Legacy system," "old software," "big ball of mud," and "monolith" are the unflattering names teams use for these kinds of systems, but don't give up hope: You can bring flexibility, error resilience, and development speed back to aging systems. The core task is to control and break up the complexity.

Software systems suffer from different "diseases," and you need a variety of medicines to cure them. Four malaises in various permutations are observed in organizations and their legacy systems, whether monoliths or microservices.

Over time, a legacy system grows into a "big ball of mud," wherein unmanaged dependencies lead to everything being interrelated with everything else. Additionally, business cases become entangled in a large domain model whose parts don't genuinely fit together – or that even get in each other's way. In the third disease, the domain and technical source code intermingle. In such cases, replacing obsolete technology or expanding the business case have mutated into Herculean tasks. To make things worse, the stakeholders are bogged down in a team structure that does not lend itself to, or in fact prevents, fast progress.

Within the last 20 years, work with Domain-Driven Design (DDD) and legacy software has identified some cures for these diseases: refactoring, domain storytelling, event storming, team topologies, and the modularity maturity index (MMI). These cures can be combined in a kind of therapeutic plan referred to as Domain-Driven Transformation.

When addressing the treatment of a project, you should ensure that the development team has a positive outlook that in turn boosts motivation. The further the healing process progresses, the more satisfied the users, project leads, and managers, as the clumsy and expensive legacy software becomes more stable, adapts more quickly, and ultimately even opens up to innovative, forward-looking extensions.

The Choices

The first question is whether to replace the legacy system completely or keep it as a foundation and transform or refactor it. The replacement solution has two variants: a big bang or a step-by-step replacement. Big bang means building a completely new system on a greenfield site while the legacy system remains in use. At a certain point, you flip the switch from the legacy to the new system.

The gradual replacement of a legacy system allows you to develop a new system one slice at a time and use each slice in production at the earliest possible stage. At the same time, you disable the corresponding functionality in the legacy system. If you decide to walk down the reshaping road, you need to refactor the big ball of mud because it remains operational all the time.

To begin, you should look at the advantages and disadvantages of the different types of transformation so you can make the decisions that are right for you. Figure 1 visualizes the first replacement variant, the big bang. This strategy is known as the "big bang" because in the end you are in a whole new world. Unfortunately, the name also hits the mark because, inevitably, something is going to blow up in your face. What sounds good in theory (i.e., establishing the new system next to the old one) does not work in practice for various reasons. For example, a lot of unknown knowledge is hidden in the legacy system that often falls by the wayside during reconstruction. The legacy system often turns out to be so large that it cannot be rebuilt in a short period of time. Therefore, the legacy system cannot be "frozen," so it becomes a moving target during refactoring.

Figure 1: The old system remains in service (arrows 1 and 2), but only until the new system is complete. At this point, you switch over in one fell swoop (arrow 3).

You can probably tell from these comments that we are not fans of a big bang rollout, and we are not the only ones who see the difficulties of replacing legacy systems in this way. The collective experience of the IT industry shows that the second approach – replacing the legacy system step-by-step with a new system that grows by increments (Figure 2) – works better than the big bang.

In the process, you gradually cut out parts of the legacy system, revamp the design, and set the new parts up alongside the legacy software. After a short time, users are directly confronted with the new system and can work with it in production. As soon as the desired functionality is available in the new system, you can switch off the old one (arrow 4). Nearly two decades ago, Martin Fowler gave this approach the name "strangler fig application" [1]. He chose this name because the old system is entangled and eventually overcome by the new system, like a host tree is by a strangler fig.

Figure 2: In a step-by-step replacement, you slowly replace individual components until the legacy system has completely disappeared.
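
The authors do not provide code for this pattern, but a tiny, hypothetical Python sketch (the function names and the MIGRATED set are made up for illustration) conveys the core mechanic: a routing facade sends each request either to the new system or to the still-running legacy code, one migrated slice at a time:

MIGRATED = {"invoicing", "ordering"}   # slices that have already been rebuilt in the new system

def handle_request(feature, payload):
    # Route to the new system once a slice has been migrated; everything else
    # stays on the legacy monolith until its slice is rebuilt.
    if feature in MIGRATED:
        return new_system_handle(feature, payload)
    return legacy_system_handle(feature, payload)

def new_system_handle(feature, payload):
    return {"handled_by": "new", "feature": feature}

def legacy_system_handle(feature, payload):
    return {"handled_by": "legacy", "feature": feature}

As more slices move into the MIGRATED set, the legacy handler is called less and less – until it can be switched off entirely.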
bang” because in the end you are in
a whole new world. Unfortunately,
the name also hits the mark because,
inevitably, something is going to blow
up in your face. What sounds good
in theory (i.e., establishing the new
system next to the old one) does not
work in practice for various reasons.
For example, a lot of unknown knowl-
edge is hidden in the legacy system Figure 1: The old system remains in service (arrows 1 and 2), but only until the new system
that often falls by the wayside during is complete. At this point, you switch over in one fell swoop (arrow 3).
reconstruction. The legacy system
often turns out to be so large that it
cannot be rebuilt in a short period
of time. Therefore, the legacy system
cannot be “frozen,” so it becomes a
moving target during refactoring.
You can probably tell from these
comments that we are not fans of a
big bang rollout, and we are not the
only ones who see the difficulties Figure 2: In a step-by-step replacement, you slowly replace individual components until the
of replacing legacy systems in this legacy system has completely disappeared.


Therefore, for both the development of the new software system and the decomposition of the monolith, the critical activities do not simply consist of writing code but, first and foremost, include stakeholders learning from and understanding each other. Developers need to learn what exactly the business experts do in their daily work. On the other hand, the business experts need to understand the technical reasons for the constraints and prioritization in the solutions.

Strategic transformation can be described by four substeps that include rediscovering the domain, modeling the domain-oriented target architecture, comparing the existing architecture with the target architecture, and prioritizing and implementing the restructuring measures.

Sociotechnical Transformation

To transform a legacy system into something better, you need to consider more than just the technical and business dimensions of domain-driven transformation. It often makes sense to rethink team structures and procedures at the same time. Fortunately, the agile approach is well established in many organizations. If this is not yet the case in your organization, you need to consider taking the plunge because agile transition and domain-driven transformation often go hand in hand. Moreover, organizational domain-driven transformation often means evolving horizontally organized teams (Figure 3) into vertically organized workgroups (Figure 4).

Figure 3: The way to dependencies: layers with horizontal teams.

Figure 4: The possibility of independence: layers with cross-functional teams.

In the following discussion, the term "refactoring" is used in a very broad sense. Alternative terms would be "reorganization" or "sociotechnical transformation." To give teams some idea of how the planned changes should proceed, you need to look at some types of sociotechnical refactoring that include the cross-layer refactoring [2] method, wherein an interdisciplinary team is established from the members of the various specialist groups (user interface, business logic, database, etc.).

In partly layered refactoring [3], you establish a second interdisciplinary team from members of additional functional groups and the first cross-functional team. During second-team refactoring [4], you take a similar approach, the difference being that the existing interdisciplinary teams continue to work without interruption because none of their members need to join the new working group. At the same time, you can create several new interdisciplinary teams.

Tactical Transformation

Tactical transformation is about strengthening the expertise in your legacy software that is often inconsistent and hidden under a heap of technology. A number of measures help to expose the business case [5].

Start by separating the business and technical source codes from each other. Then, to increase cohesion, you enrich your modeling from a technical perspective, use value objects [6], introduce technical prerequisites, and make the technical identity of entities explicit. At the same time, you reduce the coupling by introducing ID references at unit boundaries, reducing inheritance in domain-oriented source code, and using domain events. Finally, it's essential to document the architecture in code and test it.
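
The book goes into these measures in depth; as a minimal, hypothetical Python sketch (the class names are purely illustrative), a value object and an ID reference at a unit boundary might look like this:

from dataclasses import dataclass

@dataclass(frozen=True)        # a value object: immutable, compared by its values
class Money:
    amount_cents: int
    currency: str

@dataclass(frozen=True)        # makes the technical identity of the entity explicit
class CustomerId:
    value: str

@dataclass
class Order:
    customer: CustomerId       # ID reference at the unit boundary instead of a direct object reference
    total: Money               # reduces coupling between the order and customer units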
Conclusions

Whether legacy systems are simply in poor condition or already burning wrecks, business treasure is hidden in them all. In most cases, the goodies are valuable enough to bring out of hiding and back into the light, rather than being replaced with completely new development. With new development, you can expect so many unknowns that the overhead often exceeds the estimates many times over.


Like greenfield development, refurbishing a legacy system does consume serious chunks of time. Domain-driven transformation helps implement this undertaking in small increments, mitigating risk at the same time. To unearth the domain-oriented treasure in a legacy system, you need to identify the parts of the source code that contain the valuable business knowledge.

Of course, this article can only give you a rough overview of the massive field of software architecture. For more on the topic, see our book, Domain-Driven Transformation [7].

Info

[1] "StranglerFigApplication" by Martin Fowler, June 2004: [https://martinfowler.com/bliki/StranglerFigApplication.html]
[2] Cross-layer refactoring: [https://hschwentner.io/domain-driven-refactorings/socio-technical/form-cross-functional-team-out-of-layer-team-members]
[3] Partly layered refactoring: [https://hschwentner.io/domain-driven-refactorings/socio-technical/form-second-team-out-of-partly-layer-team-and-first-team-members]
[4] Second-team refactoring: [https://hschwentner.io/domain-driven-refactorings/socio-technical/form-second-team-out-of-layer-team-only]
[5] Domain-driven refactoring: [https://hschwentner.io/domain-driven-refactorings]
[6] Value object: [https://martinfowler.com/bliki/ValueObject.html]
[7] Lilienthal, Carola, and Henning Schwentner. Domain-Driven Transformation: Monolithen und Microservices zukunftsfähig machen. dpunkt, 2023: [https://dpunkt.de/produkt/domain-driven-transformation/] (in German); Domain-Driven Transformation. Addison-Wesley, upcoming (in English)
[8] Hofer, Stefan, and Henning Schwentner. Domain Storytelling: Gemeinschaftlich, visuell und agil zu fachlich wertvoller Software. dpunkt, 2023: [https://dpunkt.de/produkt/domain-storytelling/] (in German); Domain Storytelling: A Collaborative, Visual, and Agile Way to Build Domain-Driven Software. Addison-Wesley, 2021 (in English)
[9] ComoCamp: [https://comocamp.org]

Authors

Dr. Carola Lilienthal is a software architect and managing director at WPS – Workplace Solutions GmbH (Hamburg and Berlin). Since 2003 she has been analyzing the future viability of software architectures, writing books and articles on the subject, and delivering keynotes at conferences.

Henning Schwentner lives out his passion for high-quality programming as a coder, coach, and consultant at WPS – Workplace Solutions GmbH. He is a public speaker on domain-driven design, the author of Domain Storytelling [8], and a co-founder of ComoCamp [9].

Automate Active Directory management with the Python PyAD library

Snappy Python

Windows admins can use the Python PyAD library to automate Active Directory configuration and management. Deployment on a Windows server is a snap and paves the way for delegating tedious routine tasks to Python scripts. By Tam Hanna

PyAD [1] is a useful tool for Active Directory (AD) automation with Python in many environments. However, about two years ago, the developer decided to stop maintaining the open source product. The lack of pull requests in the Git repository shows that the library generally works without problems.

In practice, however, the library needs to be assessed as a potential risk; after all, it does affect Active Directory. Administrators who are not forced to use Python will want to evaluate their alternatives. Moreover, conscientious testing is strongly recommended before rolling out a new Windows Server version.

Getting Set Up

Unix operating systems come with a Python runtime in place out of the box. However, Windows Server 2022, which is used in the following examples, does not have Python support. The Python you need is available from the Python website [2]. For this project, I used the 3.11.3 version current at the time of writing and the Windows Installer for 64-bit architectures.

For easier handling, it makes sense to let the installation wizard include the Python EXE file in the server's path. Otherwise, you can work with the Install Now option: You do not need to adjust the default settings of the installation wizard. At the end of the installation, you will want to disable the path length limitations and then restart the server to complete the installation.

The Python development team takes care to make the integration of extensions easy.

Figure 1: The deprecation warning in Pip is relevant for PyWin32: Avoid the update.


All of the current crop of Python runtimes come with a package manager that is based on the structure of the defaults known from Linux. The pip command can be used in the following, but watch out: As part of the initial execution, Pip offers to update from version 22.3.1 to 23.1.2 (Figure 1). Do not agree to this request under any circumstances if you want to use PyAD, because the library relies on an outdated installation command that no longer works in the new version.

The PyWin32 extension is required to use PyAD. You can install the extension and the PyAD library with

pip install pywin32
pip install PyAD

Previously, it was necessary to download PyWin32 manually because the PyAD development team was a little lacking in the dependencies department. The repository does not have a dependency on PyWin32, so Pip does not install the support library when deploying the Active Directory access module.

First Steps

After restarting the server, you need a work file in PY format. For an initial test, I used the sample program:

import pyad
from pyad import aduser
user = pyad.aduser.ADUser.from_cn("Administrator");
print (user);

ADUser is an AD user object. The method assigned to user fetches the user object from the domain controller (DC) associated with the currently active user account.

The pyad.set_defaults method can be used to adjust the default settings for connecting to AD instances. It supports configurations that the library implicitly accepts on calls and uses them to establish the connection:

import pyad
from pyad import *

pyad.set_defaults(
    ldap_server="<DC address>",
    username="<Account>",
    password="<Password>")
user = pyad.aduser.ADUser.from_cn("<user>")

You can also specify settings within the connection setup as follows:

import pyad
from pyad import aduser

user = aduser.ADUser.from_cn(
    "<user account>",
    options=dict(ldap_server="<domain controller address>"))

The library supports the ldap_server, gc_server, ldap_port, gc_port, username, password, and ssl parameters. Note that the data transfer between the Python runtime and AD server is typically encrypted. If you do not want encryption, manually change the SSL value to False.

Managing Users in AD

The returned ADUser object provides insights into the settings applicable to the user account. The class contains methods for editing the account. One good example is determining the age of the password. The user.get_password_last_set() method returns a date-time object, so now you can send the return value directly to the command line with print. The following two-liner reveals the age of the admin password:

user = pyad.aduser.ADUser.from_cn("Administrator");
print (user.get_password_last_set());

You can find more options in the documentation [3]. The set_expiration(dt) method is interesting. It lets you set the password expiration date for users.

Because you are using the PyWin library, standard Python datetime objects are incompatible. Calls therefore fail and output an error message stating TypeError: must be a pywintypes time object (got datetime.date). However, converting Python data types to PyWin time types is a task that has already been solved [4]. In practice, however, it can be more convenient to create a direct instance of a date-time object instead. The object can be transferred directly afterward:

import time
import pywintypes
from pyad import aduser

now_seconds = time.time()
now_pytime = pywintypes.Time(now_seconds + 60*60*24*2)

user = pyad.aduser.ADUser.from_cn("Administrator");
print (user.set_expiration(now_pytime));

In the lab, when I tried to set an expiration time for the administrator account, I occasionally saw an error stating A device attached to the system is not functioning.\r\n. The recommendation is to use scripts with exception handling to field any errors caused by PyAD.
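
A minimal sketch of such a defensive wrapper could look like the following (the helper function is hypothetical and simply catches whatever PyAD throws):

import time
import pywintypes
from pyad import aduser

def set_expiration_safely(cn, days):
    # Wrap the PyAD call so that sporadic COM errors do not abort the whole script
    expiry = pywintypes.Time(time.time() + days * 24 * 60 * 60)
    try:
        aduser.ADUser.from_cn(cn).set_expiration(expiry)
        return True
    except Exception as err:        # PyAD surfaces pywin32/COM errors as exceptions
        print("Could not set expiration for {}: {}".format(cn, err))
        return False

set_expiration_safely("Administrator", 2)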
PyAD also lets you add new users to AD. In the simplest case, it looks like this:

from pyad import aduser
from pyad import *

ou = pyad.adcontainer.ADContainer.from_dn("test.tamoggemon.com");
new_user = pyad.aduser.ADUser.create("<New User>", ou, password="<Password>")
print (new_user)

The method for new_user requires an AD organizational unit (OU) in addition to a username and password. The easiest way to meet these requirements is by typing

pyad.adcontainer.ADContainer.from_dn

The parameter passed is the distinguished name of the OU container you need to address.

The most important obstacle when running the above script is that the password criteria stored on the Windows server also apply to users entered via PyAD. If the supplied password does not meet these conditions, Active Directory refuses to accept it and outputs an error message.

Assuming everything works, you will find the accounts in Active Directory Users and Computers (Figure 2).

Figure 2: Sufficiently complex passwords lead to the display of user accounts created in Python.

Searching for Resources

PyAD is not limited to generating new content. The library can also extract information from Active Directory and make it accessible by the various PyAD editing functions. As before, the resources returned by PyAD depend on the permissions with which the script is running. The method referred to earlier for setting connection parameters lets you make some adjustments in this area. Note, however, that storing credentials in script files is not the preferred approach in terms of security.

The simplest way to find resources is by their globally unique identifiers (GUIDs). Active Directory needs them for its objects, but they are also used during the installation of programs for unique identification. A GUID is a 16-byte (128-bit) number that contains a large amount of information (e.g., the MAC address of the network card to ensure that the GUID is globally unique). Make sure you enable the option shown in Figure 3 in Active Directory Users and Computers. You can then get a GUID using:

user = aduser.ADUser.from_guid("<XXX-XXX-XXX>")

Figure 3: GUIDs appear in the Active Directory Users and Computers window if you enable the Advanced Features option.

Because the GUID editor included in Windows Server does not directly support exporting to the clipboard, it is more convenient in many cases to use distinguished names to determine immutable Active Directory elements:

testuser = aduser.ADUser.from_dn("CN=tamstester, DC=test, DC=tamoggemon, DC=com");
print (testuser)

At the command line, these lines output the properties belonging to the AD user object. Alternatively, the methods discussed previously are available to adapt the user account's status to something closer to the desired configuration.

In practice, however, the distinguished name or GUID are rarely available. Searching AD is difficult in PyAD because the search process comprises two steps. ADQuery objects use a query language similar to SQL that records the query you want to run against the AD server. The following parameters are useful for an initial attempt:

import pyad.adquery

q = pyad.adquery.ADQuery()
q.execute_query(
    attributes=["<Distinguished-Name>", "<Description>"],
    where_clause="objectClass = '*'",
    base_dn="CN=<users>, DC=test, DC=tamoggemon, DC=com")

What is important here is to adapt the string passed in q.execute_query to match the current domain. The return value is a data field whose values are pushed to the command line with a for iterator:


for row in q.get_results():
    print (row["<Distinguished-Name>"])

For example, you can determine AD groups as shown in Figure 4.

Figure 4: PyAD finds a list of Active Directory groups.

The next step is a more advanced analysis. You need to break down the AD group objects into their members with another iterator:

for row in q.get_results():
    group = pyad.adgroup.ADGroup.from_dn(row["<DistinguishedName>"])
    print (group)
    for item in group.get_members():
        print (item)

You will also want to protect the get_results() functions with exception handling. The underlying library throws an exception of the type (0, 'Active Directory', 'There is no such object on the server.\r\n', None, 0, -2147217865) if it encounters empty AD groups, and this error usually terminates the program run.

Note that Active Directory implements rate limiting in the query functions used by PyAD [5] [6], which is significant when a query returns more than 1,500 results.
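
As a sketch of such a guard – reusing the q object from the query above, with the same placeholder attribute names, and a deliberately broad except clause for illustration:

import pyad.adgroup

for row in q.get_results():
    try:
        group = pyad.adgroup.ADGroup.from_dn(row["<DistinguishedName>"])
        members = group.get_members()
    except Exception as err:    # empty groups raise "There is no such object on the server"
        print("Skipping {}: {}".format(row["<DistinguishedName>"], err))
        continue
    for item in members:
        print(item)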
computer.rename("WS-501") Info
Moving Objects in AD [1] PyAD: [https://github.com/zakird/pyad]
In this case, you are customizing the [2] Python for Windows:
PyAD does not limit you to generating name of a workstation. The next com- [https://www.python.org/downloads/
and querying AD objects. The library mand deletes Active Directory objects windows/]
can also generate various elements. with Python: [3] ADUser in PyAD:
In general, a method that follows this [https://zakird.github.io/pyad/objects.
pattern is used: ADComputer.from_cn ("WS-500").delete() html#aduser]
[4] Convert Python PyWin time types:
[https://stackoverflow.com/questions/
39028290/python-convert-pywintyptes-
datetime-to-datetime-datetime]
[5] Maximum number of return values:
[https://stackoverflow.com/questions/
56854376/python-active-directory-
module-PyAD-max-vreturned-group-
members]
[6] Code for return values:
[https://stackoverflow.com/questions/
60153136/python-3-PyAD-get-members-
how-to-list-more-than-1500-members]
[7] PyAD documentation:
[https://zakird.github.io/pyad/]
[8] Stack Overflow:
Figure 4: PyAD finds a list of Active Directory groups. [https://stackoverflow.com]


Sharding and scale-out for databases

Shards
Apache ShardingSphere extends databases like MySQL or PostgreSQL, adding a modular abstraction layer to
support horizontal sharding and scalability – but not replication or encryption at rest. By Martin Loschwitz

Scalable databases have mushroomed in recent years and, in some cases, rely on completely new architectural approaches. For example, YugabyteDB is a key-value store, but it offers a PostgreSQL compatibility layer. Vitess, on the other hand, uses native MySQL databases in the background but inserts an abstraction layer between the user and the database management system (DBMS) that takes care of sharding and, in turn, horizontal scalability. The Apache project takes a similar approach with its ShardingSphere [1] variants but promises easy handling, seamless extensibility with plugins, and support for most common databases.

Redundancy and Scalability

Databases are a central component of most complex setups. True to the motto "a special tool for every task," a long-established practice is to let databases take care of data management because they solve the task best and most efficiently. The changes that have occurred since the advent of cloud computing – containers and the cloud-ready principle – also lead to tougher requirements for databases. In a scalable environment, the database also needs to be able to scale; single instances of MySQL or PostgreSQL are no longer sufficient.

Out-of-the-box DBMSs come with quite a few limitations that make them unsuitable for operation in cloud environments. Two problems are immediately apparent: First, modern environments assume that the services running in them are implicitly redundant. Typically – and this is especially true in microservice architectures – any number of instances of a service are available for each task, monitoring each other and seamlessly taking over the tasks of a failed instance in an emergency. Anyone who has ever faced the challenge of achieving high availability for MySQL will be aware that, out of the box, MySQL does not come with any functions for this task. The same is true for PostgreSQL.

The second problem arises from the need for scalability. Monolithic databases of the past, such as MySQL or PostgreSQL, are not designed for seamless horizontal scaling. Instead, the principle is that exactly one central instance of the service is present, where the capacity of the single server can be expanded, at most. In the cloud context, this arrangement is a problem primarily because it is diametrically opposed to the requirements described for cloud-ready applications.

Sharding

Before discovering how the Apache project addresses these problems with ShardingSphere, a review of terminology is needed, especially with regard to the concept of "sharding." You might be familiar with sharding from the early days of IT, although it referred to a completely different technical issue back then.

The journey goes way back into the past, to a time when the capacity of hard disks was still measured in gigabytes, not terabytes. Even then, the operators of large mail servers regularly had the problem of running out of local space for messages. At that time, the term "sharding" primarily meant the distribution of user mailboxes to different machines, which continued to appear to the outside world as one logical mail server. In principle, sharding in ShardingSphere is no different, except it involves databases, not email.

In the database context, sharding means breaking down a database namespace into its logical elements and distributing them to different databases in the background.


DBMS sharding is therefore ultimately just an abstraction layer that provides a uniform view to the outside world and, when access occurs, knows to which of the available instances it needs to forward a client's request. Sharding supports additional features in this way, such as replicating individual parts of the namespace (or shards) between different database instances in the background. Deduplication, parallel read-only or read-write access, and encryption during data transfer (on the fly) and when storing data (at rest) can also be implemented in the abstraction layer.
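
To make the routing idea tangible, the following toy sketch – plain Python for illustration only, not ShardingSphere code – shows the kind of decision such a layer makes for every query: pick the back end that holds the shard for a given key.

BACKENDS = ["db0.example.com", "db1.example.com", "db2.example.com"]

def backend_for(customer_id):
    # Map the sharding key to the back end that stores the matching shard
    return BACKENDS[customer_id % len(BACKENDS)]

# Every query for customer 4711 ends up on the same back end:
print(backend_for(4711))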
Architecture

Saying that ShardingSphere is a solution is not completely correct. It comprises two components with different feature sets that can be extended with plugins. The reason is historical: ShardingSphere was initially developed as a Java Database Connectivity (JDBC) module (Figure 1).

Figure 1: ShardingSphere-JDBC is designed for use in Java environments. It is transparent for applications and clients. © ShardingSphere

JDBC describes a driver environment for accessing databases with Java and offers the huge advantages of modularity and the ability to stack and combine modules within the JDBC environment by connecting them in series. The motivation for ShardingSphere was to create a flexible intermediate layer for the dominant databases on the market at the time (e.g., MySQL, PostgreSQL) for sharding, encryption, redundancy, and availability without having to make major modifications to the database itself. Instead, it was enough to define a pool of database back ends and let ShardingSphere do the rest. This scenario is probably where ShardingSphere is used most frequently.

Shortly after its inception in 2016, ShardingSphere-JDBC caused quite a stir. Soon people were looking for a way to use its functionality outside of JDBC, which saw the birth of the second ShardingSphere variant, simply known today as ShardingSphere-Proxy (Figure 2). It does essentially the same as the JDBC variant but comes as a standalone service. In the background, ShardingSphere-Proxy manages a pool of connections to database back ends, much like JDBC, while maintaining protocol compatibility with MySQL and PostgreSQL. Database clients connect to the Proxy instead of directly to the database.

Figure 2: ShardingSphere-Proxy provides functions similar to the JDBC implementation but does not require the Java interface and can therefore be used more generically. © ShardingSphere
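
Because the Proxy speaks the MySQL wire protocol, any standard client or driver will do. As a sketch – the port (3307 is the Proxy's usual default), credentials, logical database name sharding_db, and table t_order are assumptions that depend on your Proxy configuration – a Python client with the PyMySQL driver connects exactly as it would to a plain MySQL server:

import pymysql   # any MySQL-protocol driver works; the Proxy looks like just another MySQL server

conn = pymysql.connect(host="127.0.0.1", port=3307,
                       user="root", password="root",
                       database="sharding_db")      # the logical database; the shards stay invisible
with conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM t_order")     # a hypothetical sharded table
    print(cur.fetchone())
conn.close()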
The advantage of ShardingSphere is that by managing and controlling the connection between client and server, it can implement all kinds of practical features without requiring a special configuration client-side or on the server. One of the main architectural principles of ShardingSphere is that database clients must always be able to talk to a DBMS in its SQL dialect through the service without errors and without any special customization. Therefore, ShardingSphere always remains transparent from both the client's and the server's point of view.

The functional range of ShardingSphere is quite impressive. The linchpin of all features is, as described, the ability to break down a database into small logical segments – or shards, if you like. The developers emphasize that they can scale both the storage of data and any compute tasks horizontally: The problem with a database often is not that the individual instance does not have enough local space, but rather that it collapses under the load of incoming requests because resources such as CPU and RAM are finite.


Sharding, as implemented by ShardingSphere, avoids this problem, because the back ends that are currently holding the respective shards are still responsible for processing the queries relating to the individual database shards. If two large queries access different data, they are handled by different back ends and do not affect the entire database.

Encryption and More

ShardingSphere does not exhibit any weaknesses in any other functional area. The project claims that it is significantly faster than comparable competitor solutions, which the developers have proven with benchmarks [2]. The ShardingSphere team outlined how it achieved near-native performance of the underlying databases on the basis of its solution. ShardingSphere's performance overhead is minimal.

Even with ShardingSphere-Proxy, which according to the developers is significantly slower than the JDBC option, the performance overhead is less than 25 percent (Figure 3). This performance may well be considered a special feature: The Vitess developers, for example, state that performance can drop by up to 50 percent compared to a native MySQL database with their software. However, this is partly to do with the issue of replication, which the article will go into in detail later.

Figure 3: Even with ShardingSphere-Proxy, the slower of the two variants, the performance overhead for PostgreSQL is still well under 25 percent (tpmC = number of transactions that can be fully processed per minute). © ShardingSphere

ShardingSphere contains an encryption layer that can be dynamically activated as a plugin. It encrypts the data in transit. If you need encryption at rest, you have to take care of this yourself with one of the solutions available for MySQL or PostgreSQL, according to the terse note from the developers.

No Replication

If you only want to create a distributed database, ShardingSphere leaves out a central aspect that comparable solutions handle: replication. Databases usually take care of this feature when scaling horizontally: It is a requirement that the market quite simply imposes on cloud-native and cloud-ready applications. Adding a high-availability layer is obvious when implementing sharding, because then, from a database point of view, it is possible to copy a shard to multiple back ends so that any one of them can step in if another fails.

At this point, the description sounds more trivial than its technical implementation. After all, if you replicate a database, you need to offer guarantees such as atomicity, consistency, isolation, and durability (ACID). Moreover, replication is not very popular precisely because compliance with consistency guarantees usually forces synchronous replication, which in turn consumes performance. It is quite possible that the people at ShardingSphere would be unable to maintain their otherwise fantastic performance values if replication were activated.

The answer to the replication issue is a bit surprising, because ShardingSphere has decided without further ado to ignore the issue almost completely. Instead, the documentation succinctly states that you will want to implement replication at the level of the database back end. Although this solution gives you high-availability (HA) functionality, ShardingSphere is not responsible for compliance. At the end of the day, you end up building countless HA clusters from MySQL or PostgreSQL pairs and then feeding these to ShardingSphere as back ends. However, if you have ever manually tried to make MySQL highly available, you will inevitably have dealt with tools such as Pacemaker or distributed replicated block devices (DRBDs). Pacemaker in particular is not only extremely complex to use but also is definitely not highly available.

The ShardingSphere developers understand that their user story has a hole in it in terms of redundancy. A lot of information online offers guidance about what a valid HA setup can look like with ShardingSphere. However, some guides refer to plugins for ShardingSphere that no longer exist or that rely on hosted solutions, such as database as a service (DBaaS) from AWS, to offload the management of the database instances to the provider.

Of course, this is only a valid user story in environments where suitable as-a-service offerings exist at all, which is precisely what private cloud environments, such as those based on OpenStack, often do not offer. The fact that ShardingSphere today has a granular integration with Kubernetes and that all ShardingSphere services can be run as a Kubernetes service does not help, because integration for the underlying application is virtually nonexistent.


The Proxy introduces another peculiarity in the ShardingSphere context: In the documentation, the developers regularly refer to a cluster mode, but you should not be misled by this reference. Cluster mode in ShardingSphere is simply an operating mode that groups the various database instances in the background. Most administrators probably associate the term "cluster" with high availability, but it is completely missing in ShardingSphere.

Eye Candy

Not to be left unmentioned at this point is the ShardingSphere user interface (UI), which reveals almost in passing elementary weaknesses of the product documentation and the developer community of ShardingSphere. On paper, the ShardingSphere UI is a Vue.js-based graphical interface for the database distributor (Figure 4). On the one hand, it enables administrative tasks such as setting the ShardingSphere configuration parameters. On the other hand, it is intended to help you understand the structure of the data currently stored in ShardingSphere, to identify the distribution, and to analyze the back ends currently in use.

Figure 4: The ShardingSphere UI is intended to make it far easier to understand the stored data and cluster functionality, but it is noticeably under-documented. © ShardingSphere

It's a pity that the ShardingSphere documentation for the UI says virtually nothing, and although the Git repository contains the code, it only offers a few lines of advice for the install. The rest is left to you to figure out on your own. This task is nontrivial, especially with respect to connecting the UI to a running ShardingSphere instance or cluster. If you don't have any experience with the solution, in the worst case you won't understand the configuration syntax and will spend hours experimenting.

Fatally, the UI documentation is not the only missing part. Time and time again you come across passages in the documentation that you can only understand if you have already worked with the JDBC or gained previous experience with the Proxy. Despite several quickstart guides, life is anything but easy for ShardingSphere newcomers, because the guides are often only links to GitHub directories with sample code, distributed over a multitude of files. How you get from bare metal to a running ShardingSphere instance is not revealed by these documents.

What About Competition?

As I mentioned earlier, ShardingSphere is a comprehensive solution for dragging databases into the present day and adding cloud-ready capabilities and scalability. How does the tool fare compared with its competitors? The question is not so easy to answer, precisely because there are now a large number of complex solutions in the market segment that ShardingSphere addresses. However, their underpinnings differ from those of ShardingSphere – in some cases considerably.

The closest thing to ShardingSphere is Vitess [3], which implements its own MySQL-specific sharding with a similar component structure. Again, it does not implement its own storage but accesses MySQL instances in the back end and then uses them to distribute its own logical database namespace. Unlike ShardingSphere, however, Vitess specializes in MySQL and completely lacks support for PostgreSQL. Other solutions such as YugabyteDB [4] do offer this support.

YugabyteDB operates as a classic key-value store under the hood but exposes its structures to the outside world with a PostgreSQL compatibility layer created specifically for this purpose. In the worst case, this setup will cause an application to fail if it tries to use a PostgreSQL feature that the YugabyteDB replica cannot handle. ShardingSphere takes a smarter approach because it uses actual PostgreSQL or MySQL databases in the background. Anyone looking for an add-on solution for distributed databases will want to include ShardingSphere in their evaluation.

However, ShardingSphere does not immediately cover the issue of implicit high availability. The competitors are far more advanced in this case: Vitess, for example, can replicate the individual shards of a node to other nodes and seamlessly exchange each of these during operation. The same is true for the key-value database at the heart of YugabyteDB.

Future Outlook

The ShardingSphere developers consider their work far from complete and are already working on a third flavor of their product: ShardingSphere-Sidecar (Figure 5). Anyone who works in the container and Kubernetes environment will already have some idea of where this product is headed. Sidecar is said to offer ShardingSphere capabilities while coming across as a cloud-native service and integrating seamlessly with container fleets.


Sidecar works in close cooperation with the Proxy, which is essential in a Sidecar setup that uses ShardingSphere. Thanks to an integrated mesh functionality, the individual applications no longer communicate directly with the Sharding Proxy, but with a local instance of a Sharding Mesh Sidecar. The active Mesh Sidecars, in turn, forward the data to ShardingSphere's Sharding Sidecars, which ultimately communicate with the database back ends.

At the time of writing, ShardingSphere-Sidecar was not released for production; you will have to make do with alternatives for now, which could mean, for example, combining the ShardingSphere-Proxy server with a mesh (e.g., the Istio service mesh).

Figure 5: The JDBC driver and Proxy will soon be joined by ShardingSphere-Sidecar, which is optimized for operating the solution in clouds. However, a production-ready version was not available when this magazine went to press. © ShardingSphere

Conclusions

ShardingSphere is a comprehensive and powerful tool for upgrading conventional-style databases for today's scalable IT world. Unlike its competitor YugabyteDB, for example, it does not try to reinvent the wheel: ShardingSphere lets applications talk to a real MySQL database instead of a translation layer.

However, the individual ShardingSphere components are noticeably not fully feature-compatible with each other. That the JDBC implementation was the first is noticeable by the fact that it still offers the most features. However, this does not demote ShardingSphere-Proxy to a second-class component. From the developer's or administrator's point of view, it is important to choose carefully between the variants. If JDBC is available within the scope of a project anyway, this option is a good choice.

Either way, if you want to look beyond ready-made boxed solutions for scalable databases, ShardingSphere should be on your list of solutions to evaluate, as should Vitess, which takes a similar approach for MySQL.

Info

[1] Apache ShardingSphere: [https://shardingsphere.apache.org]
[2] Li, R., L. Zhang, J. Pan, J. Liu, P. Wang, N. Sun, S. Wang, C. Chen, F. Gu, and S. Guo. Apache ShardingSphere: A Holistic and Pluggable Platform for Data Sharding: VIII. Evaluations. In: Proceedings of 2022 IEEE 38th International Conference on Data Engineering (ICDE) (IEEE, May 2022), pp. 2468-2480: [http://www.kangry.net/paper/ICDE2022_SS.pdf]
[3] Vitess: [https://vitess.io]
[4] YugabyteDB: [https://www.yugabyte.com]

The Author

Freelance journalist Martin Gerhard Loschwitz focuses primarily on topics such as OpenStack, Kubernetes, and Chef.


Where does job output go?

Data Depot
Where does your job data go? The answer is fairly straightforward, but I add some color by throwing in a little
high-level background about what resource managers are doing and evolve the question to include a discussion
of where data “should” or “could” go. By Jeff Layton

The second question I need to answer from the top three storage questions [1] a friend sent me is "How do you know where data is located after a job is finished?" This is an excellent question that HPC users who use a resource manager (job scheduler) should contemplate. The question is straightforward to answer, but it also opens a broader, perhaps philosophical question: Where "should" or "could" your data be located when running a job (application)?

To answer the question with a little background, I'll start with the idea of a "job." Assume you run a job with a resource manager such as Slurm. You create a script that runs your job – this script is generically referred to as a "job script" – and submit the job script to the resource manager with a simple command, creating a "job." The job is then added to the job queue controlled by the resource manager. Your job script can define the resources you need; set up the environment; execute commands, including defining environment variables; execute the application(s); and so on. When the job finishes or the time allowed is exceeded, the job stops and releases the resources.

As resources change in the system (e.g., nodes become available), the resource manager checks the resource requirements of the job, along with any internal rules that have been defined about job priorities, and determines which job to run next. Many times, the rule is simply to run the jobs in the order they were submitted – first in, first out (FIFO).

When you submit your job script to the "queue," creating the job, the resource manager holds the details of the job script and a few other items, such as details of the submit command that was used. After creating the job, you don't have to stay logged in to the system. The resource manager runs the job (job script) on your behalf when the resources you requested are available and it's your turn to run a job. In general, these are jobs that don't require any user interaction; they are simply run and the resources are released when it's done.

However, you can create interactive jobs in which the job script sets up an interactive environment and returns a prompt to a node from the resource manager. When this happens, you can execute whatever commands or scripts or binaries you want. Today, one common interactive application is a Jupyter Notebook. When you exit the node or your allocated time runs out, the job is done and the resources are returned to the resource manager. If you choose to run interactive jobs, be sure to save your data often; otherwise, when the resource manager takes back the resources, you could lose some work.

Because the job, regardless of its type, is executed for you, it is important that you tell the resource manager exactly what resources you need. For example, how many nodes do you need? How many cores per node do you want? Do you need any GPUs or special resources? How much time do you think it will take to complete the job? Equally important is explaining in your job script any special options pertaining to how the job script should be run and the location of the input and output data that the resource manager creates.


If your job script or your application doesn't specify data locations, then Slurm will expect all input data to be in the directory where you created the job, along with all the output. You will see this referred to as pwd (present working directory), which refers to the directory in which you submitted the job script. (Slurm captures the pwd information.) Of course, the application can specify specific paths to the input data and the paths where it will create the output data. These paths can be hard coded in the application; many times, the application will read environment variables that tell it where to perform the I/O.

In general, if the job script or the application doesn't specify the location of the input and output, then all the data will appear in the pwd. Almost always, the resource manager captures the stdout and stderr output and puts it into a resource-manager-defined file. For example, in Slurm this would be something like slurm-<jobID>.out. But the resource managers let you change the name of this output file in the job script or on the command line when you submit the job script.
I don’t want to dive into the writ- Where to Run Jobs come because you don’t need to buy
ing of job scripts, but because the a huge amount. Instead, you buy a
resource manager creates an output The obvious answer to needing more great deal more storage that has much
file for you, or lets you define the storage capacity and possibly perfor- less performance but also has a much
name of the file, you can include mance is simply to buy more. Your smaller cost per gigabyte, allowing
all kinds of detail about the job in storage solution can add capacity and for much more capacity. This slower
this output file, including dates and performance easily, right? However, storage has the tongue-in-cheek title
times, paths, nodes used, the states notice that this is a single tier of stor- of “cheap and deep” (low-cost and
of nodes (e.g., how much memory age; that is, all the storage has the lots of it).
is available), and so on. This infor- same performance for all the data. Compared with the original single
mation can help document the job If you need more performance, you tier of storage where you had to bal-
details so you can scan through the just buy a faster storage solution or ance capacity and cost so that neither
files to look for specific details. For add more storage hardware (e.g., requirement was really satisfied, this
example, when I first started using more storage servers). Sometimes this new approach of tiered storage – with
a resource manager, one of the engi- can be done without adding much a much smaller amount of storage
neers on the team created a common (any?) additional capacity and almost that has much greater performance
definition that we included in the job always means that the cost per giga- than before, along with huge amounts
script to capture the details on the byte increases, so the overall cost for of “cheap and deep” storage that is
problem examined. It made life so the same amount of space goes up. much less expensive and has mas-
much easier when scanning for de- I’ve never seen storage solution costs sive capacity – can produce a much
tails about the simulations. go down with faster performance. better overall solution satisfying, to
Overall, that’s it. All data will be At some point, you will have to bal- some extent, the two extremes: per-
read and written to your pwd when ance the almost insatiable appetite for formance and capacity.
you submit your job script to the more storage capacity and the need Of course, tiering introduces com-
resource manager. The applica- for faster storage to run applications plications. You have to worry about
tion or job script can specify other effectively. What HPC data centers copying data from one of the slower


tiers to the fastest tier, run the applications, and copy the data back to the slower tier; that is, you save the capacity of the fastest tier for jobs that are actively running or soon will run applications. To perform this data migration, users must learn about the commands and tools needed for the manual intervention, which, by virtue of being manual, is prone to mistakes.

Combining the storage tiers into one seemingly unified storage space or using "automatic" tools to migrate data has not had much success because the processes that users develop to perform their work are so diverse that no single tool is sufficient. Creating a new type of filesystem with tiers is a very nontrivial task. Therefore, most sites stick with manual data migration by users, but they really focus on training and developing any sort of tool to make the task easier.

Another complication that arises with manual data migration is that the admin needs to police the fast-tier storage, so users don't forget about any data sitting there that isn't being actively used.

You can have as many tiers as you like. Many larger HPC systems have perhaps three tiers (Figure 1). The top tier has the fastest performance with the smallest capacity (greatest $/GB). The second tier holds the /home directories for users, with less performance than the top tier but a much larger capacity (smaller $/GB than the top tier). The third tier has very little performance but has massive capacity (smallest $/GB).

Storage Tiering with Local Node Storage

When HPC clusters first became popular, each node used a single disk to hold the operating system (OS) that was just large enough for the OS and anything else needed locally on the node. These drives could be a few gigabytes in capacity, perhaps 4GB, or about double that capacity. Linux distributions, especially those used in HPC, did not require much space. Consequently, the unused capacity in these drives was quite large.

This leftover space could be used for local I/O on the node, or you could use this extra space in each node to create a distributed filesystem (e.g., a parallel virtual filesystem, PVFS [2], which is now known as OrangeFS [3]) that is part of the kernel. Applications could then use this shared space to, perhaps, access better performing storage to improve performance.

Quite a few distributed applications, primarily the message passing interface (MPI) [4], only had one process – the rank 0 process – perform I/O. Consequently, a shared filesystem wasn't needed, and I/O could happen locally on one node.

For many applications, local node storage provided enough I/O performance that the overall application runtime was not really affected, allowing applications that were sensitive to I/O performance to use the presumably faster shared storage. Of course, when using the local node storage, the user had to be cognizant of having to move the files onto and off of the local storage to something more permanent.

Over time, drives kept getting larger and the cost per gigabyte dropped rapidly. Naturally, experiments were tried that put several drives in the local node and used a hardware-based RAID controller to combine them as needed. This approach

Figure 1: Three tiers of storage are approximately placed in the chart according to their performance. The width of the block is relative to the
capacity of that tier. Travel up the y-axis for increased performance and cost per gigabyte. Go right along the x-axis for increased capacity.


allowed applications with somewhat demanding I/O performance requirements to continue to use local storage and worked well, but for a few drawbacks:
- The cost of the hardware RAID card and extra drives could notably add to the node cost.
- The performance of the expansion slot that held the RAID card could limit storage performance, or the controller's capability could limit performance.
- Hard drives failed more often than desired, forcing the need for keeping spares on hand and thus adding to the cost.

As a result, many times only a small subset of nodes had RAID-enabled local storage.

As time progressed, I noticed three trends. The first is that CPUs gained more and more cores and more performance, allowing some of the cores to be used for RAID computations (so-called software RAID) so that hardware-based RAID cards were not needed, saving money and improving performance.

The second trend was systems with much more memory than ever before could be used in a variety of ways (e.g., for caching filesystem data, although at some risk). In the extreme, you could even create a temporary RAM disk – system memory made to look like a storage device – that could be used for storage.

The third trend is that the world now had solid-state drives (SSDs) that were inexpensive enough to put into compute nodes. These drives had much greater performance than hard drives, making them very appealing. Initially the drives were fairly expensive, but over time costs dropped. For example, today you can get consumer-grade 1-2TB SSD drives for well under $100, and I've seen some 1TB SSDs for under $30 (August 2023). Don't forget that SSDs can range from thin 2.5-inch form factors to little NVMe drives that are only a few millimeters thick and very short. The M.2 SSDs are approximately 22mm wide and 60-80mm long. It became obvious that putting a fair number of SSDs that are very fast, very small, and really inexpensive in every node is an easily achievable goal, but what about the economics of this configuration?

With SSD prices dropping and capacities increasing, the cost per gigabyte has dropped rapidly, as well. At the same time, the cost of CPUs and accelerators, such as GPUs, has been increasing rapidly. These computational components of a node now account for the vast majority of the total node cost. Adding a few more SSDs to a node does not appreciably affect the total cost of a node, so why not put in a few more SSDs and use software RAID?

Today you see compute nodes with 4-6 (or more) SSD drives that are combined by software RAID with anything from RAID 0 to RAID 5 or RAID 6. These nodes now have substantial performance in the tens of gigabytes per second range and a capacity pushing 20TB or more. Many (most?) MPI applications still do I/O with the rank 0 process, and this amount of local storage with fantastic performance just begs for users to run their code on local node storage.

Local storage can now be considered an alternative or, at the very least, a partner to the very fast, expensive, top-tier storage solution. Parallel filesystems can be faster than local node storage with SSDs, but the cost, particularly the cost per gigabyte, can be large enough to restrict the total capacity significantly. If applications

Figure 2: Storage tiering with local storage added.


don’t need superfast storage perfor- an option that operationally is the applications, using storage that is lo-
mance, local storage offers really good same as before will only result in a cal to the compute node has become
performance that is not shared with little disturbance. popular. Putting several SSDs into an
other users and is less expensive. As The crux of this section? Buy as accelerated server with software RAID
before, you have freed the shared re- much SSD capacity and perfor- does not appreciably increase the
ally fast storage for applications that mance for each compute node as node cost. Moreover, this local stor-
need performance better than the you can, and run your applications age has some pretty amazing perfor-
local node. on local storage, unless the applica- mance that fits into a niche between
Figure 2 is the same chart as in tion requires distributed I/O or mas- the highest and middle performance
Figure 1, but local storage has been sive performance that local nodes tiers. Many applications can use this
added. Note that local storage isn’t cannot provide. The phrase “as you storage to their advantage, perhaps
quite as fast as the high-perfor- can” generally means to add local reducing the capacity and perfor-
mance tier, but it is faster than the storage until the price per node no- mance requirements for the highest
middle tier. The local storage tier ticeably increases. The point where performance tier. Q
has a better (lower) cost per giga- this occurs depends on your system
byte than the highest performance and application I/O needs.
tier, but it is a bit higher than the Info
middle tier. Overall the local storage Summary [1] Storage Questions: [https://www.
tier has less capacity than the other admin-magazine.com/HPC/Articles/
tiers, but that is really the storage The discussion of where the data Getting-Data-Into-and-Out-of-the-Cluster]
per node. Nonetheless, applications should or could go started with a lit- [2] PVFS: [https://en.wikipedia.org/wiki/
can’t access the local storage in tle history of shared storage for clus- Parallel_Virtual_File_System]
other nodes. ters and then sprang into a discussion [3] OrangeFS: [https://www.orangefs.org/]
Users still have to contend with of storage tiering. Storage tiers are all [4] MPI: [https://en.wikipedia.org/wiki/
moving their data to and from the the rage in the cluster world. They are Message_Passing_Interface]
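What that manual staging can look like is sketched below for a single-node Slurm job. The local scratch path, directories, and application name are only examples, because the location of node-local storage differs from site to site.

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --time=02:00:00

# Site-specific path to the node-local SSD scratch space (example only)
LOCAL=/local/scratch/$SLURM_JOB_ID
mkdir -p "$LOCAL"

# Stage input data from shared storage to the local tier
cp -r "$HOME/project/input" "$LOCAL/"

# Run the application against the local copy
cd "$LOCAL"
srun ./my_app input/case1.dat

# Copy results back to shared storage and clean up the local tier
cp -r "$LOCAL/results" "$HOME/project/"
rm -rf "$LOCAL"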
Users have had to contend with storage tiers, so adding an option that operationally is the same as before will only result in a little disturbance.

The crux of this section? Buy as much SSD capacity and performance for each compute node as you can, and run your applications on local storage, unless the application requires distributed I/O or massive performance that local nodes cannot provide. The phrase "as you can" generally means to add local storage until the price per node noticeably increases. The point where this occurs depends on your system and application I/O needs.

Summary

The discussion of where the data should or could go started with a little history of shared storage for clusters and then sprang into a discussion of storage tiering. Storage tiers are all the rage in the cluster world. They are very effective at reducing costs while still providing great performance for those applications that need it and high capacities for storing data that isn't used very often but can't be erased (or the user doesn't want to erase it).

With the rise of computational accelerators and their corresponding applications, using storage that is local to the compute node has become popular. Putting several SSDs into an accelerated server with software RAID does not appreciably increase the node cost. Moreover, this local storage has some pretty amazing performance that fits into a niche between the highest and middle performance tiers. Many applications can use this storage to their advantage, perhaps reducing the capacity and performance requirements for the highest performance tier.

Info
[1] Storage Questions: [https://www.admin-magazine.com/HPC/Articles/Getting-Data-Into-and-Out-of-the-Cluster]
[2] PVFS: [https://en.wikipedia.org/wiki/Parallel_Virtual_File_System]
[3] OrangeFS: [https://www.orangefs.org/]
[4] MPI: [https://en.wikipedia.org/wiki/Message_Passing_Interface]

The Author
Jeff Layton has been in the HPC business for over 30 years (starting when he was 4 years old). When he's not grappling with a stubborn systemd script, he's looking for deals for his home cluster. His twitter handle is @JeffdotLayton.


Low-code development with Microsoft Power Apps

Special Delivery
If the IT staff is having trouble keeping up with the demand for custom applications, end users can
pitch in with low-code programming tools like Microsoft Power Apps. By Joydip Kanjilal
Low-code and no-code methodologies allow users to build applications with no or minimal coding skills. These approaches leverage declarative programming, abstracting the challenges of conventional programming.

The biggest difference between low-code and no-code platforms is that low-code provides the option to add code manually, if necessary, whereas no-code platforms abstract the code entirely. Low-code tools typically offer visual interfaces, pre-built components, and access to code libraries, allowing developers to create applications by easily dragging and dropping elements and connecting data sources. Some popular low-code platforms include Appian, Mendix, OutSystems, and Microsoft Power Apps [1].

The goal of no-code platforms is to allow the non-technical user to create basic applications or automate processes without support from IT or development staff. No-code tools make extensive use of visual modeling and pre-built templates. Examples of no-code platforms are: Airtable, Bubble, Glide, Webflow, and Zapier.

Skillful use of low-code and no-code programming can improve time to market, increase agility, enhance collaboration, and reduce costs. However, as you might expect, low-code and no-code techniques are not right for every project. In some scenarios, low-code and no-code can limit flexibility, reduce scalability, inhibit integration with legacy systems, and promote vendor lock-in. Typical uses for low-code methods include handling complex processes, system integration, and custom logic. No-code platforms are often used for prototyping, automating repetitive tasks, and building small applications.

Table 1: Low-Code vs. No-Code

Feature | Low-Code | No-Code
Primary user | Developers | Business users
Support for end-to-end development? | Yes | No
Purpose | Provides support for Rapid Application Development (RAD) | Provides the necessary interfaces and tools to design and build apps without the need for coding skills
Support for customization | Pre-built templates to build custom applications | Support for complete customization
Application complexity | Can be used to build complex applications | Can be used to build simple applications


Table 1 shows some of the key differences between low-code and no-code platforms.

Building a Low-Code Application

Microsoft Power Apps (Figure 1) is a low-code development platform for building web and mobile applications. Microsoft calls Power Apps "a suite of apps, services, and connectors, as well as a data platform, that provides a rapid development environment to build custom apps for your business needs." Power Apps is a good example of a low-code platform that lets you roll out a custom application in just a few steps.

You can build an app for free by signing in to the Power Apps website. You'll need a license to use the apps you create. Microsoft offers a 30-day trial, and pricing thereafter starts at $20 for a single user, with packages for larger groups and bundles containing additional services. Business professionals often use Power Apps to deploy applications for employee onboarding, supply chain management, and customer relationship management.

One powerful feature of Power Apps is its support for application templates, which you can use to build an app in only a few steps. For instance, to create a simple expense report application, click Start with an app template in the Power Apps home screen

Figure 1: Microsoft Power Automate in action. Power Apps helps build applications that connect to several data sources, such as Office
365, SharePoint, SQL Server, Microsoft Azure, JIRA, OneDrive, and Power BI.

Figure 2: The PowerApps home screen.


Figure 3: The My Expenses template.

(Figure 2) and select the My Expenses app (Figure 3). Next, choose a name for the app and click Next.

When prompted for permission to use SharePoint, click Allow (Figure 4); then, click on the tree view in the left pane of the Power App window (Figure 5). As you can see in the figure, the tree view easily lets you select options for building the expense report. You can create new screens and add components to the report.

Now click on NewExpenseCreateButton to create a button for the interface (Figure 6). Select the Advanced tab in the Properties window (on the right) to view the code, change the properties for the button control, or add some custom code of your own.

Conclusion

Low-code and no-code technologies make it easier for non-programmers to create their own applications, allowing professional IT personnel to focus on matters that still require their expertise. For many businesses, adopting a low-code or no-code approach can save effort, time, and money. Microsoft Power Apps is a low-code platform that makes extensive use of templates, allowing users to build easy business apps that would take much longer to create by conventional techniques.

Info
[1] Microsoft Power Apps: [https://powerapps.microsoft.com/en-us/]

Author
Joydip Kanjilal ([https://
joydipkanjilal.com/]) is a
Microsoft Most Valuable
Professional in ASP.NET
(2007-2012), speaker, and
author of several books
and articles. He has more than 25 years of
experience in IT with more than 20 years in
Microsoft .NET and its related technologies.
He has been a community credit winner at
[http://www.community-credit.com] several
times. His technical strengths include: C#, Mi-
crosoft .NET, ASP.NET Core, ASP.NET Core MVC,
Azure, AWS, Microservices, Serverless Architec-
ture, Kubernetes, Kafka, RabbitMQ, REST, SOA,
Design Patterns, SQL Server, Oracle, Machine
Learning, and Data Science.

Figure 4: Prompt for permission to use SharePoint.


Figure 5: The completed Expense Report application.

Figure 6: The properties of the Expense Report application.


Native serverless computing in Kubernetes

Go Native
Knative transfers serverless workloads to Kubernetes and provides all the container components you need to
build serverless applications and PaaS and FaaS services. By Martin Loschwitz

If you check out the undisputed star of the container scene and its serverless capabilities, you might be somewhat disappointed to find that Kubernetes (K8s) really does not shine out of the box. Ultimately, Kubernetes sees itself as an orchestrator for container workloads across the boundaries of individual compute nodes. The keyword is "orchestrator." Kubernetes looks to manage existing containers and sees its strengths precisely there. Creating matching workloads is something it tends not to consider its job. At times, this perspective has devastating consequences from the user's point of view.

When you look at the K8s landscape from the Cloud Native Computing Foundation (CNCF), you will find countless solutions from which you are expected somehow to put together your own setup, including many continuous integration/continuous delivery (CI/CD) tools that let admins create serverless workloads for Kubernetes. That said, managing the external resources for serverless operation that are somehow grafted onto Kubernetes often turns out to be a bumpy road. Knative is the name of the project and the tool that aims to change just that.

The "K" in Knative stands – you guessed it – for Kubernetes, so it's about native integration of workloads into Kubernetes. Like many components from the K8s universe, Knative is not particularly open or flexible for newcomers, but people in the function-as-a-service (FaaS) or platform-as-a-service (PaaS) universes will benefit from the advantages of running serverless workloads with Knative. In this article, I introduce you to Knative and explain the most important terms and features.

The Beginning

If you haven't had much to do with serverless computing or Knative, you're likely to find yourself sitting in front of the Knative documentation fairly confused. Some of it reads like it was written by a marketing department – serverless here, cloud native there. The buzz will more likely worry than instill confidence in experienced administrators of classic workloads.

What you need are a few clearly defined terms before setting off into the Knative adventure. Like other solutions, Knative suffers greatly from terms like "serverless" or "cloud native" being thrown around everywhere and all the time with either no definition or everyone simply ignoring it. At the beginning of this article, I already defined what serverless is all about: being able to access services such as databases or load balancers without having to go down the classic route of a virtual instance with the installed service and matching configuration.

What sounds like nitpicking from the professional admin's point of view is hugely important for developers. The advantage to developers is huge if they can click together a database with a standardized API and not have to worry about its maintenance afterward. The main idea behind serverless computing is to hide factors such as hardware and operating systems,


as well as their maintenance, from an environment's users. Instead, these tasks are handled by the platform operators, who build their own tools for this purpose and maintain specific processes for the tasks.

In the Knative context, though, a few more technical terms need clarification. These terms are particularly interesting for administrators who do not currently work with Kubernetes on a daily basis. They are directly related to how Kubernetes manages resources and how Knative is integrated. One of these terms is the custom resource definition (CRD). Kubernetes itself is known to be controlled by an administrator or developer through a REST API. Commands to Kubernetes always go from the client to the server and use the HTTP format.

Kubernetes handles objects internally, and resources are an object type. In Kubernetes, you have containers, pods, virtual IP addresses, and other facilities out of the box. Custom resource definitions are a bridge to external applications that let developers and admins create their own resource types in Kubernetes, which they can then use and manage later just like resources that are included in Kubernetes out of the box. Knative makes almost excessive use of CRDs, extending an existing K8s cluster to include a large number of CRDs that establish Knative-specific resources at the K8s cluster level.

Knative uses CRDs as a point to dock onto the container orchestrator, which is absolutely essential: It is the only way that Knative can deliver on its own promise that running serverless applications is running first-class citizens in Kubernetes. In the cloud native world, "first-class citizens" refers to resources that can be fully controlled by the existing API of the orchestrator or fleet manager.

Two Features

If you search online with the Knative keyword, you will regularly come across older documentation and presentations that describe three of Knative's core components: Serving, Eventing, and Building. Some caution is advised here. Today, the Building component is no longer a part of Knative. Instead, it is being continued as an independent project named Tekton Pipelines [1], partly with contributions from other developers. Knative has returned to its roots, so to speak, and now just comprises Serving and Eventing. Tekton is discussed in more detail later. For the moment, however, the focus is on Knative; for starters, understanding it is complicated enough. To look under the Knative hood, you should visualize the basic tasks that Knative is designed to handle from the developer's point of view.

At the top of all considerations is, as usual, the concept of the app. If you are operating serverless applications in Knative, the ultimate aim is to roll out and operate a specific application automatically, without external intervention. However, an app can comprise multiple services, which is why Knative uses the term "services" internally, although it means a function rather than a specific service. A service is a central object in a K8s cluster extended by Knative. All other resources to be created in Kubernetes depend on this object to ensure that a service can reliably handle the task expected of it. In MariaDB, for example, the service would be the active mysqld instance – not because it is a single process, but because this service provides the central function of the database. Several processes might well be required to perform a function as part of an overall structure.

Several resources are usually attached to a service, typically including at least one route and a service-specific configuration. The route is a reference to the element of a virtual network through which the actual function can be reached. In addition to services, routes, and configurations, Knative has a fourth resource type, revision, which denotes a specific configuration consisting of a service, a route, and a configuration and can occur any number of times per serverless service. A revision is a kind of static mapping to the combination of the three previously mentioned factors.

If you think about the four aspects that form the Serving component (Figure 1), a coherent picture quickly emerges. The term "services" describes logical Knative functions in Kubernetes that take care of managing the entire lifecycle of an application. In other words, the services component creates all the services required for a virtual setup, provides them with a valid route and a valid configuration, and creates a revision for them in its own database. How the parts needed for a service come together is decided by the administrator or developer as part of the configuration. You can use Knative to store configurations and define routes or, instead, tell Knative to find the best possible route to access a network element itself.

Here, Knative also indirectly competes with other solutions such as Istio and the like, which use Kubernetes-native resources but then smuggle some of the tasks past Kubernetes.

Events Count, Too

The other major Knative subcomponent is less about what and how and far more about when, and once again goes by a technical term – event-driven architecture – which in information technology refers to an architectural approach in software development that allows an event A to be followed by a response B. Put simply, an event-based architecture in Kubernetes is based on the principle that changes to resources automatically trigger other actions. These actions are recognized as events to which a higher level instance responds. In the example here, the parent instance is Knative Eventing. A separate standard named CloudEvents, under the auspices of the CNCF, governs the way events are sent in the cloud native context.

A functional Eventing architecture requires a sender as well as


a receiver for the event. In simple terms, the running setup needs to send a notification if its state changes somehow, and another component needs to receive the event and take action on it. In the context of event-based architectures, the entity that outputs events is also referred to as an agent, whereas the entity that receives the events and responds to them is also known as a sink. Again, Knative Eventing exclusively acts inside of Kubernetes and does not require external resources. Like Knative Serving, Eventing extends the K8s API to include several CRDs that can be used from within running programs.

Figure 1: Knative services take a prebuilt container and roll it out along with all the required services. It also has its own infrastructure services, such as an ingress gateway. © Knative

Hands-On

If you are not at home in the cloud native world, you might not fully understand the central concepts of the solution. A few practical examples should help provide a better understanding of the ideas behind Knative as a whole. Conveniently, the Knative developers provide quite a few examples of various functions [2], so you do not have to search for them. One of these examples is a classic from the developer world: a web server that outputs Hello World! [3].

The example comprises two components and is written in Go. The helloworld.go file contains the service, which listens on port 80 and prints out Hello World! in an HTTP-compatible format (Figure 2) as soon as someone contacts the service. Far more important is the service.yaml service definition for Knative, because it is what makes the application palatable for Knative in the first place (Listing 1).

Listing 1: service.yaml for Hello World!

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: docker.io/{username}/helloworld-go
          env:
            - name: TARGET
              value: "Go Sample v1"

Figure 2: The Hello World! example only scratches the surface of what Knative can do. But the pod definition for the service is already far easier than a manual approach in Kubernetes. © Neeharika Kompala/GitConnected

Anyone who has at least looked at code snippets for Kubernetes will quickly understand: Initially, the code sample uses the Knative serving.knative.dev resource to create a service named helloworld-go in the default namespace. The service specification states that the helloworld-go image from docker.io (i.e., from Docker Hub) runs the service. The TARGET environment variable containing Go Sample v1 is also set up.

If you apply this file to Kubernetes with Knative, Knative launches the container in a pod through Kubernetes and makes it accessible from the outside.
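The deployment itself is a standard kubectl operation. The following is a minimal sketch, assuming Knative Serving is installed in the cluster and the service.yaml from Listing 1 is in the current directory; the URL under which the service answers depends on your cluster's domain configuration and is reported by kubectl.

kubectl apply -f service.yaml

# Knative registers the service as a "ksvc"; the URL column shows the route
kubectl get ksvc helloworld-go

# Call the service at the URL reported above (example domain shown here)
curl http://helloworld-go.default.example.com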


This is where it also becomes clear why Knative sees itself as a solution for operating serverless architectures: Many of the parameters you would have to specify in Kubernetes for a normal container without Knative support are autonomously implemented by Knative without user intervention. You can then focus on your applications without having to worry about the details of Kubernetes or running containers in it, and it also applies to operational tasks. Out of the box, for example, Knative scales the pod up or down as a function of the incoming load so that an appropriate number of instances of the respective pod are running – or none at all. The scaling behavior can be completely predefined.
tation is for purely academic reasons set. The project lists quite a few example, exclusively relies on basic

Figure 3: Knative Serving provides a broker service for events that receives incoming events as a sink and forwards them to defined
targets, which means that developers can build a system of events and responses into their services. © Knative


However, to this day Tekton can’t


quite conceal its family ties to
Knative. Under the hood, the com-
ponent’s architecture is similar to
Knative’s Serving and Eventing. Like
Knative, Tekton extends an exist-
ing K8s cluster to include a number
of CRDs – for creating and build-
ing applications in this case. Not
to be outdone by its competitors in
terms of marketing, Tekton develop-
ers now describe this integration as
a CI/CD pipeline for cloud native
environments.
The focus is still on serverless ap-
plications. Meanwhile, Tekton is also
great to use to build other applica-
tions within Kubernetes. To do this, it
relies on pipelines that it creates and
configures in Kubernetes (Figure 5).
Argo CD, for example, is a separate
Figure 4: The Activator plays a crucial role in Knative. It fields most of the incoming commands CI/CD system and does not have
and processes them or forwards them to the appropriate Knative services. © Knative much to do with Tekton. However,
Argo CD and Tekton can be teamed
features that Kubernetes includes components: Serving, Eventing, up to integrate artifacts created in
anyway. If you want to use Knative and Building. The Building com- Argo CD directly into Kubernetes.
with external extensions (e.g., Istio) ponent, however, was something If you then add Knative, you can per-
because you are already familiar with of an outsider from the very begin- form party tricks, such as building an
them, you will find instructions on- ning. The Knative developers have image on demand after a commit to a
line that describe the steps. Knative is always distinguished between op- Git directory, with the image then being
extremely powerful, but also very so- erating applications and the build automatically rolled out to the produc-
ciable with both internal and external process, which is understandable tion environment. The key factor that
solutions. from a logical point of view and sets Tekton apart from quite a few com-
At this point, also remember that Kna- completely correct if you think peting products is the ability to build a
tive launches quite a few services for its things out. Although the Serving container with a command to the K8s
own operations in an active K8s cluster. and Eventing layers of Knative can- API. Unlike Argo CD, Tekton provisions
The Activator plays the central role, re- not be used without one another, all resources and infrastructure for up-
ceiving most of the requests from Kna- the path and approach used to coming build tasks independently and
tive CRDs and acting something like a create the artifacts to be operated autonomously in Kubernetes. It also
central switchboard (Figure 4). are basically irrelevant for Knative cleans up afterward, if desired.
itself. Some time ago, the company Tekton closes a gap in this respect.
The Renegade Son got down to business and out- When reading the paragraphs on
sourced the Building component to Serving and Eventing, many expe-
As mentioned at the beginning, a separate project; it has been oper- rienced K8s admins may have been
Knative originally comprised three ating as Tekton ever since. bothered that a ready-made Docker

Figure 5: Tekton implements pipelines and infrastructure at the K8s API level to build application images to run in Kubernetes. © IBM


container and application must be available, which Knative then launches as an instance in the running cluster. Although it has little to do with CI/CD, it is precisely the CI/CD factor that plays a significant role in Kubernetes. Tekton builds the running container from the sources of a Docker image, which Serving and Eventing then process.

Conclusions

Knative proves to be a powerful tool and really turns up the heat when paired with Tekton. At the moment, this combination is the only way to achieve true CI/CD directly in Kubernetes. Other solutions might ultimately give you the same results, but it means operating the infrastructure outside of Kubernetes without the ability to control it with the Kubernetes API. If you value a solution from a single source, Knative and Tekton are the right choice.

Boundless euphoria is not the order of the day, however. Many external tools offer functions that Knative cannot implement within Kubernetes because of the system. At this point, admins and developers need to test the available alternatives and choose the tool that best fits their own requirements. In many cases, this is likely to be Knative, but there are exceptions.

Info
[1] Tekton Pipelines: [https://tekton.dev/docs/pipelines/pipelines/]
[2] Knative examples: [https://knative.dev/docs/samples/serving/]
[3] "Hello World!" with Knative: [https://github.com/knative/docs/tree/main/code-samples/serving/hello-world/helloworld-go]

The Author
Freelance journalist Martin Gerhard Loschwitz focuses primarily on topics such as OpenStack, Kubernetes, and Chef.


Dockerizing Legacy Applications

Makeover
Sooner or later, you'll want to convert your legacy application to a containerized environment. Docker offers the tools for a smooth and efficient transition. By Artur Skura

In the past, we ran applications on physical machines. We cared about every system on our network, and we even spent time discussing a proper naming scheme for our servers (RFC 1178 [1]). Then virtual machines came along, and the number of servers we needed to manage increased dramatically. We would spin them up and shut them down as necessary. Then containers took this idea even further: It typically took several seconds or longer to start a virtual machine, but you could start and stop a container in almost no time.

In essence, a container is a well-isolated process, sharing the same kernel as all other processes on the same machine. Although several container technologies exist, the most popular is Docker. Docker's genius was to create a product that is so smooth and easy to use that suddenly everybody started using it. Docker managed to hide the underlying complexity of spinning up a container and to make common operations as simple as possible.

Containerizing Legacy Apps

Although most modern apps are created with containerization in mind, many legacy applications based on older architectures are still in use. If your legacy application is running fine in a legacy context, you might be wondering why you would want to go to the trouble to containerize.

The first advantage of containers is the uniformity of environments: Containerization ensures that the application runs consistently across multiple environments by packaging the app and its dependencies together. This means that the development environment on the developer's laptop is fundamentally the same as the testing and production environments. This uniformity can lead to significant savings with testing and troubleshooting future releases. Another benefit is that containers can be horizontally scaled; in other words, you can scale the application by increasing (and decreasing) the number of containers. Adding a container orchestration tool like Kubernetes means you can optimize resource allocation and better use the machines you have – whether physical or virtual. The power of container orchestration makes it easy to scale the app with the load. Because containers start faster than virtual machines, you can scale much more efficiently, which is crucial for applications that have to deal with sudden load spikes. The fact that you can start and terminate containers quickly has several other consequences. You can deploy your applications much faster – and roll them back equally quickly if you experience problems.

Getting Started

To work with Docker, you need to set up a development environment. First, you'll need to install Docker itself. Installation steps vary, depending on your operating system [2]. Once Docker is installed, open a terminal and execute the following command to confirm Docker is correctly installed:

docker --version

Now that you have Docker installed, you'll also need Docker Compose, a tool for defining and running multi-container Docker applications [3]. If you have Docker Desktop installed, you won't need to install Docker Compose separately because the Compose plugin is already included.

For a simple example to illustrate the fundamentals of Docker, consider a Python application running Flask, a web framework that operates on a specific version of Python and relies on a few third-party packages.


Listing 1 shows a snippet of a typical Python application using Flask.

Listing 1: Simple Flask App

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

To dockerize this application, you would write a Dockerfile – a script containing a sequence of instructions to build a Docker image. Each instruction in the Dockerfile generates a new layer in the resulting image, allowing for efficient caching and reusability. By constructing a Dockerfile, you essentially describe the environment your application needs to run optimally, irrespective of the host system.

Start by creating a file named Dockerfile (no file extension) in your project directory. The basic structure involves specifying a base image, setting environment variables, copying files, and defining the default command for the application. Listing 2 shows a simple Dockerfile for the application in Listing 1.

Listing 2: Dockerfile for Flask App (Listing 1)

# Use an official Python runtime as a base image
FROM python:3.11-slim

# Set the working directory in the container
WORKDIR /app

# Copy the requirements.txt file into the container
COPY requirements.txt /app/

# Install the dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the current directory contents into the container
COPY . /app/

# Run app.py when the container launches
CMD ["python", "app.py"]

In this Dockerfile, I specify that I'm using Python 3.11, set the working directory in the container to /app, copy the required files, and install the necessary packages, as defined in a requirements.txt file. Finally, I specify that the application should start by running app.py.

To build this Docker image, you would navigate to the directory containing the Dockerfile and execute the following commands to build and run the app:

docker build -t my-legacy-app .
docker run -p 5000:5000 my-legacy-app

With these steps, you have containerized the Flask application using Docker. The application now runs isolated from the host system, making it more portable and easier to deploy on any environment that supports Docker.

Networking in Docker

Networking is one of Docker's core features, enabling isolated containers to communicate amongst themselves and with external networks. The most straightforward networking scenario involves a single container that needs to be accessible from the host machine or the outside world. To support network connections, you'll need to expose ports. When running a container, the -p flag maps a host port to a container port:

docker run -d -p 8080:80 --name web-server nginx

In this case, NGINX is running inside the container on port 80. The -p 8080:80 maps port 8080 on the host to port 80 on the container. Now, accessing http://localhost:8080 on the host machine directs traffic to the NGINX server running in the container.

For inter-container communication, Docker offers several options. The simplest approach involves using container names as DNS names, made possible by the default bridge network. First, run a database container:

docker run -d --name my-database mongo

Now, if you want to link a web application to this database, you can reference the database container by its name:

docker run -d --link my-database:db my-web-app

In this setup, my-web-app can connect to the MongoDB server by using db as the hostname.

Although useful, the --link flag is considered legacy and is deprecated. A more flexible approach is to create custom bridge networks. A custom network facilitates automatic DNS resolution for container names, and it also allows for network isolation. For example, you can create a custom network as follows:

docker network create my-network

Now, run containers in this custom network with:

docker run -d --network=my-network --network-alias=db --name my-database mongo
docker run -d --network=my-network my-web-app

Here, my-web-app can still reach my-database using its name or a DNS alias, but now both containers are isolated in a custom network, offering more control and security.

For applications requiring more complex networking setups, you can use Docker Compose and define multiple services, networks, and even volumes in a single docker-compose.yml file (Listing 3). When you run docker-compose up, both services will be instantiated, linked, and isolated in a custom network, as defined.

As you can see, effective networking in Docker involves understanding and combining these elements: port mapping for external access, inter-container communication via custom bridge networks, and orchestration (managed here by Docker Compose).


Volumes and Persistent Data

Managing persistent data within Docker involves understanding and leveraging volumes. Unlike a container, a volume exists independently and retains data even when a container is terminated. This characteristic is crucial for stateful applications, like databases, that require data to persist across container life cycles.

For simple use cases, you can create anonymous volumes at container runtime. When you run a container with an anonymous volume, Docker generates a random name for the volume. The following command starts a MongoDB container and attaches an anonymous volume to the /data/db directory, where MongoDB stores its data:

docker run -d --name my-mongodb -v /data/db mongo

Whereas anonymous volumes are suitable for quick tasks, named volumes provide more control and are easier to manage. If you use docker run and specify a named volume, Docker will auto-create it if needed. You can also create a named volume explicitly with:

docker volume create my-mongo-data

Now you can start the MongoDB container and explicitly attach this named volume:

docker run -d --name my-mongodb -v my-mongo-data:/data/db mongo

You can use named volumes to share data between containers. If you need to share data between the container and the host system, host volumes are the choice. This feature mounts a specific directory from the host into the container:

docker run -d --name my-mongodb -v /path/on/host:/data/db mongo

Here, /path/on/host corresponds to the host system directory you want to mount.

With Docker Compose, volume specification becomes streamlined and readable, especially when dealing with multi-container, stateful legacy applications. Listing 4 shows how you could define a service in docker-compose.yml with a named volume. When you run docker-compose up, it will instantiate the service with the specified volume.

Listing 3: Sample docker-compose.yml File

services:
  web:
    image: nginx
    networks:
      - my-network
  database:
    image: mongo
    networks:
      - my-network
networks:
  my-network:
    driver: bridge

Listing 4: Sample Named Volume

services:
  database:
    image: mongo
    volumes:
      - my-mongo-data:/data/db
volumes:
  my-mongo-data:

Data persistence isn't confined to just storing data; backups are equally vital. Use docker cp to copy files or directories between a container and the local filesystem. To back up data from a MongoDB container, enter:

docker cp my-mongodb:/data/db /path/on/host

Here, data from /data/db inside the my-mongodb container is copied to /path/on/host on the host system.

Dockerizing a Legacy Web Server

Containerizing a legacy web server involves several phases: assessment, dependency analysis, containerization, and testing. For this example, I'll focus on how to containerize an Apache HTTP Server. The process generally involves creating a Dockerfile, bundling configuration files, and possibly incorporating existing databases.

The first step is to create a new directory to hold your Dockerfile and configuration files. This directory acts as the build context for the Docker image:

mkdir dockerized-apache
cd dockerized-apache

Start by creating a Dockerfile that specifies the base image and installation steps. Imagine you're using an Ubuntu-based image for compatibility with your legacy application (Listing 5).

Listing 5: A sample Dockerfile for Apache web server

# Use an official Ubuntu as a parent image
FROM ubuntu:latest

# Install Apache HTTP Server
RUN apt-get update && apt-get install -y apache2

# Copy local configuration files into the container
COPY ./my-httpd.conf /etc/apache2/apache2.conf

# Expose port 80 for the web server
EXPOSE 80

# Start Apache when the container runs
CMD ["apachectl", "-D", "FOREGROUND"]

In Listing 5, the RUN instruction installs Apache, and the COPY instruction transfers your existing Apache configuration file (my-httpd.conf) into the image. The CMD instruction specifies that Apache should run in the foreground when the container starts.

Place your existing Apache configuration file in the same directory as the Dockerfile. This configuration should be a working setup for your legacy web server. Build the Docker image from within the dockerized-apache directory:

docker build -t dockerized-apache .

Run a container from this image, mapping port 80 inside the container to port 8080 on the host:

docker run -d -p 8080:80 --name my-apache-container dockerized-apache

The legacy Apache server should now be accessible via http://localhost:8080.
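A quick way to verify the result is a simple HTTP check; and if the site content lives outside the image, a bind mount is one possible variation that avoids rebuilding. The host path, document root, and second port are only example values.

# Check that the containerized server responds
curl -I http://localhost:8080

# Variation: mount the existing document root read-only instead of
# copying it into the image (runs alongside the first container on 8081)
docker run -d -p 8081:80 \
  -v /srv/legacy-site:/var/www/html:ro \
  --name my-apache-legacy dockerized-apache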


If your legacy web server interacts with a database, you'll likely need to dockerize that component as well or ensure the web server can reach the existing database. For instance, if you have a MySQL database, you can run a MySQL container and link it to your Apache container. A tool like Docker Compose can simplify the orchestration of multi-container setups.

For debugging, you can view the logs using the following command:

docker logs my-apache-container

This example containerized a legacy Apache HTTP Server, but you can use this general framework with other web servers and applications as well. The key is to identify all dependencies, configurations, and runtime parameters to ensure a seamless transition from a traditional setup to a containerized environment.

What About a Database?

Containers are by nature stateless, whereas data is inherently stateful. Therefore databases require a more nuanced approach. In the past, running databases in containers was usually not recommended, but nowadays you can do it perfectly well – you just need to make sure the data is treated properly.

Or, you can decide not to containerize your databases at all. In this scenario, your containers connect to a dedicated database, such as an RDS instance managed by Amazon Web Services (AWS), which makes sense if

Dockerizing a database is a nontrivial task that demands meticulous planning and execution. The procedure involves containerizing the database, managing persistent storage, transferring existing data, and ensuring security measures are in place. One area where containerizing a traditional SQL database is extremely useful is in development and testing (see the "Testing" box).

If you choose to dockerize a database, the first step is to choose a base image. For MySQL, you could use the official Docker image available on Docker Hub. You will also find official images for Oracle Database. The following is a basic example of how to launch a MySQL container:

docker run --name my-existing-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:8.0

In this example, the environment variable MYSQL_ROOT_PASSWORD is set to your desired root password. The -d flag runs the container in detached mode, meaning it runs in the background.

This quick setup works for new databases, but keep in mind that existing databases require you to import existing data. You can use a Docker volume to import a MySQL dump file into the container and then import it into the MySQL instance within the Docker container.
nario, your containers connect to a Docker container. network as the main container so that
you can connect to it directly from your
dedicated database, such as an RDS
pipeline.
instance managed by Amazon Web Configurations and
Services (AWS), which makes sense if
your containers are running on AWS.
Environment Variables Listing 6: PostgreSQL in a GitLab CI Pipeline
Amazon then takes care of provision- Legacy applications often rely on image: my-image-with-docker-and-docker-compose
ing, replication, backup, and so on. complex configurations and environ-
This safe and clean solution lets you ment variables. When dockerizing variables:
POSTGRES_DB: my-db
concentrate on other tasks while AWS such applications, it’s crucial to man-
POSTGRES_USER: ${USER_NAME}
is doing the chores. One common age these configurations efficiently,
POSTGRES_PASSWORD: ${USER_PASSWORD}
scenario is to use a containerized without compromising security or
database in local development (so it’s functionality. Docker provides mul- services:
easy to spin up/tear down), but then tiple ways to inject configurations and - name: postgres:16
swap out for a managed database ser- environment variables into contain- alias: postgres
vice in production. At the end of the ers: via Dockerfile instructions, com-
day, your app is using the database’s mand-line options, environment files, test:
communication protocol, regardless and Docker Compose. Each method script:
of where and how the database is serves a particular use case. - apt-get update && apt-get install -y
running. Dockerfile-based configurations postgresql-client
Dockerizing an existing database are suitable for immutable settings - PGPASSWORD=$POSTGRES_PASSWORD psql -h postgres
-U $POSTGRES_USER -d $POSTGRES_DB -c 'SELECT 1;'
like MySQL or Oracle Database is a that don’t change across different
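One way to implement the dump import described above relies on the official MySQL image, which executes any *.sql files mounted under /docker-entrypoint-initdb.d the first time a container starts with an empty data directory. The file, database, and password names below are placeholders only:

docker run -d --name my-migrated-mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  -e MYSQL_DATABASE=legacy_db \
  -v "$(pwd)/data-dump.sql":/docker-entrypoint-initdb.d/data-dump.sql:ro \
  mysql:8.0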


For more dynamic configurations, use Run it with environment variables Implement least privilege principles
the -e option with docker run to set sourced from a .env file or directly for containers. For instance, don’t run
environment variables: from the shell: containers as the root user. Specify a
non-root user in the Dockerfile:
docker run -e "DB_HOST=U DB_HOST=database.local U
database.local" -e "DB_PORT=U DB_PORT=3306 docker-compose up FROM ubuntu:latest
3306" my-application RUN useradd -ms /bin/bash myuser
Configuration files necessary for your USER myuser
While convenient for a few variables, application can be managed using
this approach becomes unwieldy Docker volumes. Place the configu- Containers also should run
with a growing list. As a more scal- ration files on the host system and with least privileges. The following
able alternative, Docker allows you to mount them into the container: example:
specify an environment file:
docker run -v U docker run --cap-drop=all --cap-add=U
# .env file /path/to/config/on/host:U net_bind_service my-application
DB_HOST=database.local /path/to/config/in/container U
DB_PORT=3306 my-application starts a container with all capabilities
dropped and then adds back only the
Then, run the container as follows: In Docker Compose, use: net_bind_service capability required
to bind to ports lower than 1024.
docker run --env-file .env U services: Use read-only mounts for sensi-
my-application my-application: tive files or directories to prevent
image: my-application:latest tampering:
This method keeps configurations or- volumes:
ganized, is easy to manage with ver- - /path/to/config/on/host:U docker run -v /my-secure-data:U
sion control systems, and separates /path/to/config/in/container /data:ro my-application
the configurations from application
code. However, exercise caution; en- This approach provides a live link If the container needs to write to a
sure the .env files, especially those between host and container, enabling filesystem, consider using Docker
containing sensitive information, real-time configuration adjustments volumes and restricting read/write
are adequately secured and not without requiring container restarts. permissions appropriately.
accidentally committed to public It is also important to implement log-
repositories. Docker and Security ging and monitoring to detect abnor-
In multi-container setups orches- mal container behavior, such as un-
trated with Docker Compose, you can
Concerns expected outgoing traffic or resource
define environment variables in the Securing Docker containers requires utilization spikes.
docker-compose.yml file: checking every layer: the host system,
the Docker daemon, images, contain- Dockerizing a Legacy
services: ers, and networking. Mistakes in any
my-application: of these layers can expose your appli-
CRM System
image: my-application:latest cation to a variety of threats, includ- To dockerize a legacy Customer Rela-
environment: ing unauthorized data access, denial tionship Management (CRM) system
DB_HOST: database.local of service, code execution attacks, effectively, you need to first under-
DB_PORT: 3306 and many others. stand its current architecture. The hy-
Start by securing the host system run- pothetical legacy CRM I’ll dockerize
For variable data across different en- ning the Docker daemon. Limit access consists of an Apache web server, a
vironments (development, staging, to the Docker Unix socket, typically / PHP back end, and a MySQL data-
production), Docker Compose sup- var/run/docker.sock. This socket al- base. The application currently runs
ports variable substitution: lows communication with the Docker on a single, aging physical server,
daemon and, if compromised, grants handling functions from customer
services: full control over Docker. Use Unix data storage to sales analytics.
my-application: permissions to restrict access to au- The CRM’s monolithic architecture
image: my-application:U thorized users. means that the web server, PHP back
${TAG-latest} Always fetch Docker images from end, and database are tightly inte-
environment: trusted sources. Scan images for vul- grated, all residing on the same ma-
DB_HOST: ${DB_HOST} nerabilities using a tool like Docker chine. The web server listens on port
DB_PORT: ${DB_PORT} Scout [6] or Clair [7]. 80 and communicates directly with


the PHP back end, which in turn talks Next, move the PHP back end to its Then, run the container:
to the MySQL database on port 3306. own environment. Use PHP-FPM to
Clients interact with the CRM through manage PHP processes separately. docker run --name U
a web interface served by the Apache Update Apache’s httpd.conf to route my-apache-container U
server. PHP requests to the PHP-FPM service: -d my-apache-image
The reasons for migrating the CRM
to a container environment are as # httpd.conf For PHP, start with a base PHP
follows: ProxyPassMatch ^/(.*\.php(/.*)?)$U image and then install needed ex-
Q Scalability: The system’s mono- fcgi://php:9000/path/to/app/$1 tensions. Add your PHP code after-
lithic nature makes it hard to scale wards (Listing 9).
individual components. For the MySQL database, configure a Build and run the PHP image simi-
Q Maintainability: Patching or updat- new MySQL instance on a separate larly to Apache:
ing one part of the applications machine. Update the PHP back end to
often requires taking the entire connect to this new database by alter- docker build -t my-php-image .
system offline. ing the database connection string in docker run --name my-php-container U
Q Deployment: New feature rollouts the configuration: -d my-php-image
are time-consuming and prone to
errors. <?php MySQL Dockerfiles are less com-
Q Resource utilization: The aging $db = new PDO('mysql:host=db;dbname=U mon because the official MySQL
hardware is underutilized but can’t your_db', 'user', 'password'); Docker images are configurable
be decommissioned due to the ?> via environment variables. How-
monolithic architecture. ever, if you have SQL scripts to run
To containerize the CRM, you need to During this isolation, you might find at startup, you can include them
take the following steps. that some components have shared (Listing 10).
libraries or dependencies that are Run the MySQL container with envi-
Step 1: Initial Isolation of Compo- stored locally, such as PHP exten- ronment variables to set up the data-
nents and Dependencies sions or Apache modules. These base name, user, and password:
Before you dive into dockerization, should be identified and installed in
it is important to isolate the individ- the respective isolated environments. docker run --name my-mysql-container U
ual components of the legacy CRM Missing out on these dependencies -e MYSQL_ROOT_PASSWORD=U
system: the Apache web server, PHP can cause runtime errors or func- my-secret -d my-mysql-image
back end, and MySQL database. tional issues.
This step will lay the groundwork While moving the MySQL database, Listing 7: MySQL Data
for creating containerized versions ensuring data consistency can be a # Data export from old MySQL
of these components. However, the challenge. Use tools like mysqldump [8] mysqldump -u username -p database_name > data-dump.sql
tightly integrated monolithic archi- for data migration and validate the
# Data import to new MySQL
tecture presents challenges in isola- consistency (Listing 7).
mysql -u username -p new_database_name < data-dump.sql
tion, specifically in ensuring that If user sessions were previously man-
dependencies are correctly mapped aged by storing session data locally,
and that no features break in the you’ll need to migrate this functional- Listing 8: Dockerfile for Apache
process. ity to a distributed session manage- # Use an official Apache runtime as base image
Start by decoupling the Apache web ment system like Redis. FROM httpd:2.4
server from the rest of the system.
One approach is to create a reverse Step 2: Creating Dockerfiles and # Copy configuration and web files
proxy that routes incoming HTTP re- Basic Containers
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
quests to a separate machine or con- Once components and dependencies
COPY ./html/ /usr/local/apache2/htdocs/
tainer where Apache is installed. You are isolated, the next step is craft-
can achieve this using NGINX: ing Dockerfiles for each element: the
Apache web server, PHP back end, Listing 9: Dockerfile for PHP
# nginx.conf and MySQL database. For Apache, the # Use an official PHP runtime as base image
server { Dockerfile starts from a base Apache FROM php:8.2-fpm
listen 80; image and copies the necessary HTML
location / { and configuration files. A simplified # Install PHP extensions
proxy_pass U Dockerfile appears in Listing 8. RUN docker-php-ext-install pdo pdo_mysql

http://web:80; Build the Apache image with:


# Copy PHP files
}
COPY ./php/ /var/www/html/
} docker build -t my-apache-image .
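The session migration mentioned in Step 1 can often be handled inside the PHP image itself. The snippet below is a sketch, not part of the original setup: it assumes the official PHP base image from Listing 9, the phpredis extension, and a Redis container reachable under the hostname redis:

# Additional lines for the PHP Dockerfile: install phpredis and
# store PHP sessions in Redis instead of the local filesystem
RUN pecl install redis && docker-php-ext-enable redis
RUN { \
      echo 'session.save_handler = redis'; \
      echo 'session.save_path = "tcp://redis:6379"'; \
    } > /usr/local/etc/php/conf.d/redis-sessions.ini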


For production, you’ll need to opti- For these containers to function co- need to worry about Docker network-
mize these Dockerfiles and runtime hesively as your legacy CRM system, ing, because Kubernetes has its own
commands with critical settings, such appropriate networking and data networking plugins.
as specifying non-root users to run management strategies are vital.
services in containers, fine-tuning Containers should communicate over Step 4: Configuration Management
Apache and PHP settings for perfor- a user-defined bridge network rather and Environment Variables
mance, and enabling secure connec- than Docker’s default bridge to enable Configuration management and envi-
tions to MySQL. hostname-based communication. Cre- ronment variables form the backbone
ate a user-defined network: of a flexible, maintainable dockerized
Step 3: Networking and Data application. They allow you to pa-
Management docker network create crm-network rametrize your containers so that the
At this point, the decoupled compo- same image can be used in multiple
nents – Apache, PHP, and MySQL – Then attach each container to this contexts, such as development, test-
each reside in a separate container. network (Listing 11). ing, and production, without altera-
Now, each container can reach an- tion. These parameters might include
Listing 10: Dockerfile for MySQL Startup Scripts other using an alias or the service database credentials, API keys, or
# Use the official MySQL image name as the hostname. For instance, feature flags.
FROM mysql:8.0 in your PHP database connection You can pass environment variables to
string, you can replace the hostname a container at runtime via the -e flag:
# Initialize database schema
with my-mysql-container.
COPY ./sql-scripts/ /docker-entrypoint-initdb.d/
Data in Docker containers is ephem- docker run --name my-php-container U
eral. For a database system, losing -e API_KEY=my-api-key U
Listing 11: Network Setup data upon container termination is -d my-php-image
docker run --network crm-network --name unacceptable. You can use Docker
my-apache-container -d my-apache-image volumes to make certain data persis- In your PHP code, the API_KEY vari-
docker run --network crm-network --name my-php-container tent and manageable: able can be accessed as $_ENV['API_
-d my-php-image KEY'] or getenv('API_KEY'). For a
docker run --network crm-network --name docker volume create mysql-data more comprehensive approach,
my-mysql-container -e MYSQL_ROOT_PASSWORD=my-secret -d
Docker Compose allows you to spec-
my-mysql-image
Bind this volume to the MySQL ify environment variables for each
container: service in the docker-compose.yml file:
Listing 12: docker-compose.yml Network Setup
services: docker run --network crm-network U services:
web: --name my-mysql-container U db:
image: my-apache-image -e MYSQL_ROOT_PASSWORD=U image: my-mysql-image
networks: my-secret -v mysql-data:U environment:
- crm-network /var/lib/mysql -d my-mysql-image MYSQL_ROOT_PASSWORD: my-secret

php:
For the Apache web server and PHP Alternatively, you can use a .env
image: my-php-image
back end, you should map any writ- file in the same directory as your
networks:
able directories (e.g., for logs or up- docker-compose.yml. Place your envi-
- crm-network
loads) to Docker volumes. ronment variables in the .env file:
db:
Docker Compose facilitates running
multi-container applications. Create API_KEY=my-api-key
image: my-mysql-image
environment: a docker-compose.yml file as shown MYSQL_ROOT_PASSWORD=my-secret
MYSQL_ROOT_PASSWORD: my-secret in Listing 12.
volumes: Execute docker-compose up, and all Reference these in docker-compose.yml:
- mysql-data:/var/lib/mysql your services will start on the defined
networks: network with the appropriate vol- services:
- crm-network umes for data persistence. Note that db:
user-defined bridge networks incur a image: my-mysql-image
networks: small overhead. Although this over- environment:
crm-network: head is negligible for most applica- MYSQL_ROOT_PASSWORD: U
driver: bridge tions, high-throughput systems might ${MYSQL_ROOT_PASSWORD}
require host or macvlan networks.
volumes:
If you decide to run your app in Ku- Running docker-compose up will
mysql-data:
bernetes, for example, you will not load these environment variables


automatically. Never commit sensi- The most basic level of validation is will likely opt for a container orches-
tive information like passwords or API functional testing to ensure feature par- tration solution already in use, such
keys in your Dockerfiles or code. ity with the legacy system. Automated as Kubernetes.
Configuration files for Apache, PHP, tools like Selenium [9] for web UI test-
or MySQL should never be hard-coded ing or Postman [10] for API testing of- Conclusion
into the image. Instead, mount them fer this capability. Running a test suite
as volumes at runtime. If you’re using against both the legacy and dockerized Containerization offers many techni-
Docker Compose, you can specify a environments verifies consistent behav- cal benefits, including uniformity,
volume using the volumes directive: ior. For example, to run Selenium tests security, and better scaling. In addi-
in a Docker container, you would type tion, containerizing your apps can
services: a command similar to the following: save you money with more efficient
web: testing and rollout, and a container
image: my-apache-image docker run --net=host selenium/U strategy can minimize the need for
volumes: standalone-chrome python U continual customization to adapt to
- ./my-httpd.conf:/usr/local/U my_test_script.py new hardware and software settings.
apache2/conf/httpd.conf Docker Compose and other tools in
Once functionality is confirmed, per- the Docker toolset provide a safe, ef-
Some configurations might differ formance metrics such as latency, ficient, and versatile approach for mi-
between environments (e.g., develop- throughput, and resource utiliza- grating your existing applications to a
ment and production). Use templates tion must be gauged using tools like container environment. Q
for your configuration files where Apache JMeter, Gatling, or custom
variables can be replaced at runtime scripts. You should also simulate ex- This article was made possible by sup-
by environment variables. Tools like treme conditions to validate the sys- port from Docker through Linux New
envsubst can assist in this substitution tem’s reliability under strain. Media’s Topic Subsidy Program (https://
before the service starts: Static application security testing www.linuxnewmedia.com/Topic_Subsidy).
(SAST) and dynamic application
envsubst < my-httpd-template.conf > U security testing (DAST) should also Info
/usr/local/apache2/conf/httpd.conf be employed. Tools like OWASP ZAP [1] RFC 1178: Choosing a Name for your
can be dockerized and incorporated Computer: [https://datatracker.ietf.org/
Strive for immutable configurations into the testing pipeline for dynamic doc/html/rfc1178]
and idempotent operations to ensure testing. While testing, activate moni- [2] Install Docker Engine:
your system’s consistency. Once a toring solutions like Prometheus and [https://docs.docker.com/engine/install/]
container is running, changing its Grafana or ELK stack for real-time [3] Docker Compose:
configuration should not require metrics and logs. These tools will [https://docs.docker.com/compose/]
manual intervention. If a change is identify potential bottlenecks or secu- [4] GitLab: Using PostgreSQL: [https://docs.
needed, deploy a new container with rity vulnerabilities dynamically. gitlab.com/ee/ci/services/postgres.html]
the updated configuration. Despite rigorous testing, unforeseen [5] GitLab: Using MySQL: [https://docs.
While this approach is flexible, it in- issues might surface post-deployment. gitlab.com/ee/ci/services/mysql.html]
troduces complexity into the system, Therefore, formulate a rollback strat- [6] Docker Scout:
requiring well-documented proce- egy beforehand. Container orchestra- [https://docs.docker.com/scout/]
dures for setting environment vari- tion systems, such as Kubernetes and [7] Clair: [https://github.com/quay/clair]
ables and mounting configurations. Swarm, provide the ability to easily [8] mysqldump: [https://dev.mysql.com/doc/
Remember that incorrect handling of rollout changes and rollback when is- refman/8.0/en/mysqldump.html]
secrets and environment variables sues occur. [9] Selenium: [https://www.selenium.dev/]
can lead to security vulnerabilities. [10] Postman: [https://www.postman.com/
Step 6: Deployment automated-testing/]
Step 5: Testing and Validation Deployment into a production en-
Testing and validation are nonnego- vironment is the final phase of
tiables in the transition from a legacy dockerizing a legacy CRM system. Author
system to a dockerized architecture. The delivery method will depend on Artur Skura is a senior DevOps engineer currently
Ignoring or cutting corners in this the application and your role as a working for a leading pharmaceutical company
phase jeopardizes the integrity of the developer. Many containerized ap- based in Switzerland. Together with a team of
system, often culminating in perfor- plications reside today in application experienced engineers, he builds and maintains
mance bottlenecks, functional incon- repositories, including Docker’s own cloud infrastructure for large data science and
sistencies, or security vulnerabilities. Docker Hub container image library. machine learning operations. In his free time, he
The CRM system, being business-crit- If you are deploying the application composes synth folk music, combining the vibrant
ical, demands meticulous validation. within your own infrastructure, you sound of the ’80s with folk themes.


Link Encryption with MACsec

Under Seal
MACsec encrypts defined links with high performance and secures Layer 2 Securing Switch to
protocols between client and switch or between two switches. By Benjamin Pfister Terminal Device
In scenario 1, MACsec secures the
Networks are exposed to more than apply a later version, IEEE 802.1X- link from a terminal device to the
external attacks. Appropriate defenses 2010, in combination with 802.1AE switch. The objective is encrypted
need to be implemented at the entry (MACsec). transmission between the two devices
point to the internal network or, if The standard offers better perfor- after successful authentication and
third parties have physical access, to mance and is less complex to imple- authorization. In the IEEE 802.1X ter-
access points on the network. Initial ment than classic Internet Protocol minology, the end device in this case
authentication during access to the Security (IPsec)-based encryption. is referred to as the supplicant, the
local area network (LAN) without If required, however, a combination switch is referred to as the authenti-
downstream verification of the trans- with other security protocols such as cator, and the RADIUS server is the
mitted packets, as with classic net- IPsec and Transport Layer Security authentication server.
work access control (NAC) systems, is (TLS) is also possible. At the same The first step after link building is to
no longer sufficient. One approach is time, Layer 2 protocols such as Link authenticate the supplicant with EAP
Media Access Control Security, (MAC- Layer Discovery Protocol (LLDP), over LAN (EAPoL) frames between the
sec), which encrypts in Layer 2, with Cisco Discovery Protocol (CDP), and supplicant and the authenticator. To
virtually no loss of speed. Link Aggregation Control Protocol generate a RADIUS request for authen-
The MACsec [1] Layer 2 security (LACP), as well as Address Resolution tication from the authenticator to the
protocol is used for cryptographic Protocol (ARP), can be transmitted authentication server on the basis of
point-to-point security on wired net- transparently. MACsec also is com- this type of EAP frame, the authentica-
works (e.g., on switches). Network patible with IPv4 and IPv6 because tor packages the requests into RADIUS
access controls compliant with IEEE it resides one layer below in the OSI request messages. The supplicant’s
802.1X-2004 (i.e., port-based net- reference model. corresponding EAP method must be

work access control) only provide Because MACsec is implemented at enabled on the RADIUS server – for
authentication by the Extended Au- a low level close to the hardware, it example, with the Protected Extensible
thentication Protocol (EAP) frame- demonstrates high performance up to Authentication Protocol and Microsoft
work – in the best case combined the full line rate (i.e., the maximum Challenge-Handshake Authentication
with periodic re-authentication. possible data rate of the link). Attacks Protocol version 2 (PEAP-MSCHAPv2)
However, without an integrity such as session spoofing, replay at- for TLS-protected transfer of the user-
check, confidentiality cannot be tacks, or man-in-the-middle attacks name and password after validation
guaranteed at this level of the com- are thwarted. However, MACsec does of the RADIUS server’s authentication
munication relationship, unless you not ensure end-to-end encryption. certificate, or with EAP-TLS if you


Figure 1: Communication between the authentication server, authenticator, and supplicant.

want two-way TLS authentication be- identically on the supplicant and au- protection and if no other adequate
tween the supplicant and the authenti- thenticator. Additionally, the master encryption methods are used. Ex-
cation server. session key is generated. amples of this scenario include Layer
The main difference between authen- This master session key is then used 2 connections by providers, as used
tication in the older variant by IEEE as the basis for a key exchange with in metropolitan area Ethernet envi-
802.1X-2004 and IEEE 802.1X-2010 in a MACsec Key Agreement (MKA; see ronments. However, the same also
conjunction with 802.1AE is that au- Table 1 for MACsec terminology). applies to dark fiber connections op-
thorization with additional attributes After the MKA process, the authen- erated by third parties or if your own
occurs on the basis of successful ticator and supplicant have the key fiber optic connections run through a
authentication. For this to happen, material required to handle encrypted third party.
the authentication server returns a data transmission. Without encryption on these kinds
RADIUS Access-Accept with the MAC- of routes, someone could route data
sec policy to the authenticator. In this Securing Switch to Switch out by couplers or network TAPs for
way, the authenticator recognizes that data analysis downstream. The pay-
it needs to use MACsec (Figure 1). In scenario 2, MACsec encryption load of the MAC frame is encrypted
The matching encryption algorithms, is used for the connection between if MACsec is used and therefore can-
such as the Advanced Encryption two switches. If third parties have not be easily evaluated. For provider
Standard in Galois Counter Mode physical access to the cabling, MAC- connections, however, you need to
at 128 or 256 bits (AES128-GCM or sec transmission is recommended if clarify whether you can transmit
AES256-GCM), must be configured you have a corresponding need for MACsec frames with an Ethertype of

Table 1: MACsec Terminology


Abbreviation Meaning Function
CA Connectivity association Secured control plane connection between MACsec peers.
CAK CA key Control plane key from which the session key is derived.
CKN CAK name Frame for the CAK transmitted by the peers to each other for validation in plain text.
ICK ICV key Integrity check of each MKA protocol data unit (MKPDU) sent between two CAs.
ICV Integrity check value Provides secure connectivity associations with the AES-GCM Cipher Suites at 128/192/256 bits.
KEK Key encryption key Transmits the generated SAKs to the peer through the CA.
MKA MACsec key agreement A protocol to locate MACsec peers and to generate, renew, and exchange keys.
SA Security association Connection between two MACsec peers that guarantees an encrypted connection on the basis of
the SAK.
SAK SA key Session key derived from the CAK and used for encryption between two MACsec peers.
SC Secure channel Logical channel on which encrypted transmission takes place.
SCI SC identifier Unique identifier of the channel for encrypted transmission consisting of the MAC address and
port designation.
Authenticator Provides authentication by EAPoL with the supplicant by the RADIUS protocol to the authentica-
tion server. The RADIUS server denies or grants access by authorization.
Authentication Server RADIUS server for authentication, authorization, and accounting according to IEEE 802.1X.
Supplicant Client component for authentication according to IEEE 802.1X.


0x88e5 on the provider’s network. for MACsec without the need for a association key name (CKN) must
Because MACsec works on Layer 2, it new EtherType. The SL field states the be stored on the MACsec peers. This
must be individually enabled for each length of the encrypted data. The SCI arrangement acts as a framework for
interface. is based on an identifier of the respec- the CAK, with peers exchanging CKNs
tive port on the component and the in plain text. They also need to match
Encryption Method MAC address. on both sides of the connection.
The entire MACsec frame is similar However, the CAK is initially only
MACsec uses AES-GCM as the en- to an Ethernet frame. It also contains used to secure the control data (con-
cryption algorithm, encrypting hop- an ICV to ensure that the frame has trol plane) and not to encrypt the
by-hop, which has both advantages not been manipulated en route. To user data (data plane). For this func-
and disadvantages. Key lengths of 128 this end, the MACsec peers decrypt tion, you need a SAK. The CAK and
and 256 bits are available. With the incoming frames and calculate the ex- CKN must match to generate this key.
802.1AEbw standard, IEEE introduced pected ICV with the session key from If this is the case, the MKA goes into
extended packet numbering (GCM- the MKA. If this does not match the action. It first discovers neighboring
AES-XPN) for the current require- transmitted ICV, the receiving peer peers and then determines the key
ments for high data rates. As a result, discards the frame. server among them. MKAs with lower
MACsec can transmit 2^32 frames numerical priority values take prior-
within a security association key Static and Dynamic Key ity over those with higher numerical
(SAK). This extension enables MAC- values.
sec to encrypt securely at data rates
Distribution The key server then generates the
above 100Gbps. The algorithm used Each MACsec session is built on top symmetrical SAKs and distributes
must match the configuration of the of a Connectivity Association (CA), them between the opposing switches.
switch and the end device or match which describes a logical session be- Downstream, encrypted data trans-
both switches. tween peers. The MACsec key agree- fer can take place on the data plane.
The MACsec header in IEEE 802.1AE ment (MKA) protocol establishes this The MKA then periodically generates
is also referred to as the security tag connection (Figure 2). However, be- new SAKs and distributes them in a
(SecTAG). Its length is 16 bytes. The fore encrypted transmission can take process known as key rollover. This
same applies to the integrity check place, the required key material must method is mostly used when coupling
value (ICV) that MACsec appends to be distributed. MACsec can use both two switches with MACsec.
the frame. The security tag comprises static and dynamic key distribution.
five fields: EtherType, TAG control As the name suggests, the static vari- Dynamic Key Distribution
information/association number ant uses preconfigured connectiv-
(TCI/AN), short length (SL), packet ity association keys (CAKs), which For the dynamic method, MACsec
number (PN), and SCI. The MACsec must match the peers involved. For builds on the EAP framework from
EtherType contains the value 0x88E5. this purpose, symmetric pre-shared IEEE 802.1X. After successful au-
The TCI/AN defines a version number keys (the CAKs) and a connectivity thentication and authorization of the

Figure 2: A MACsec session proceeds in a defined order up to the encrypted data transmission.


supplicant and authenticator, the Practical Configuration associated profile editor for the Cisco
two exchange the MKA data with a Secure Client by selecting MKA as
Examples
special EAPoL type. As in static key the key management approach in
distribution, the MKA first discovers A couple of examples will illustrate the Networks/Security area and the
the MACsec peers. With a successful the various MACsec encryption sce- encryption algorithm for MACsec.
EAP authentication, a RADIUS server narios on a switch. The examples This setting must be compatible
distributes a master key from which are based on Cisco Catalyst 9300 with the switch and be configured
the CAK is derived. As with the static switches [2] [3] with IOS XE version symmetrically. In this example, I
variant, CAKs also have an associ- 17.6 as the authenticator, a Windows choose AES128-GCM. Next, import
ated CKN. endpoint with Cisco Secure Client as the profile on the client. The au-
Further keys are then derived from the supplicant, and a Cisco Identity thenticator (switch) requires a little
the CAK: The ICK (ICV key) and the Services Engine as the authentication more manual work: First, you need
KEK (key encryption key). The key server. These are just examples and to define the RADIUS servers with IP
server determined by the MKA uses do not claim to be complete. The addresses, ports, and a shared secret.
the KEK to transmit the generated configurations are limited to the por- Second, reference them in the set-
SAKs to the peer via the CA. The tion specific to MACsec and 802.1X. tings for authentication, authoriza-
AES Key Wrap algorithm secures Other manufacturers’ hardware, tion, and accounting (AAA) and en-
this transfer. These SAKs then pro- hardware models, and software ver- able IEEE 802.1X globally if you have
tect the user data transmission on sions may differ. not already done so.
the data plane. The SAK has unique Additionally, you need a MACsec-
identifiers: The key identifier (KI, Switch to End Device with specific MKA policy. The key server
128 bits) and the key number (KN, priority is set in the policy. Also,
32 bits), with the peers transmitting
Dynamic Key Distribution you need to specify the encryption
the KI in plain text in all MACsec The supplicant needs a MACsec algorithm to match that of the sup-
frames. profile. You can create this in the plicant (i.e., AES128-GCM for the

example in Listing 1) and set the certain portion of the frame could Conclusions
Confidentiality Offset to 0 bytes in remain unencrypted (e.g., to allow
the MKA policy. This setting tells virtual local area network (VLAN) MACsec cannot replace end-to-end
MACsec to encrypt the whole frame. tags to be evaluated for transit com- encryption; however, when it comes
Where switches are coupled, a ponents) (Figure 3). Replay protec- to encrypting a defined link with high
tion defines the extent to which the performance, MACsec is a very good
Listing 1: MACsec Downlink to Terminal Device frame order can deviate from the choice. The same applies to securing
radius server macsec1 regular order. Layer 2 protocols between client and
address ipv4 192.0.2.1 auth-port 1812 acct-port 1813 Finally, set up the physical port, switch or between two switches. De-
key t0ps3cr3t which is used to define both the spite the latest revision in 2018, sup-
! legacy 802.1X settings and a refer- plicant availability is still fairly low. Q
radius server macsec2 ence to the MKA policy; MACsec is
address ipv4 192.0.2.2 auth-port 1812 acct-port 1813 enabled, and a linksec policy with Info
key t0ps3cr3t must-secure enforces encryption. [1] MACsec IEEE 802-1AE:
!
On the authentication server, you [https://1.ieee802.org/security/802-1ae/]
aaa group server radius macsec
first need to define an authorization [2] Cisco terminology for MACsec:
server name macsec1
profile with the must-secure policy, [https://community.cisco.com/t5/
server name macsec2
! which you then reference in an au- networking-knowledge-base/macsec-
aaa new-model thorization policy for the desired history-amp-terminology/ta-p/4436094]
aaa authentication dot1x default group macsec supplicants on the basis of their [3] MACsec configuration on Cisco Catalyst
aaa authorization network default group macsec properties from the authentication. 9300: [https://www.cisco.com/c/en/us/td/
aaa accounting dot1x default start-stop group macsec docs/switches/lan/catalyst9300/software/
!
dot1x system-auth-control
Switch to Switch in Static release/17-6/configuration_guide/sec/b_176_
sec_9300_cg/macsec_encryption.html]
! CAK Mode
mka policy ITA_MKA Listing 2: MACsec Uplink to Switch
The setup for MACsec between two
key-server priority 100
switches is somewhat simpler than key chain ITA macsec
macsec-cipher-suite gcm-aes-128
dynamic key distribution in the case key 1000
confidentiality-offset 0
replay-protection window-size 10 of a switch to end device. The config- cryptographic-algorithm aes-256-cmac
! uration is the same on both switches key-string 12345678911234567890123456789012
interface GigabitEthernet2/0/1 except for the key server priority. !
description ITA_MACsec_Client First, define a keychain for MACsec mka policy ITA_MKA_Switch
switchport mode access with a key number, which acts as key-server priority 100
switchport access vlan 10 the CKN. Inside the key number, the macsec-cipher-suite gcm-aes-256
macsec CKN creates the algorithm for the key confidentiality-offset 30
authentication host-mode multi-auth !
and the key string, which acts as the
authentication order dot1x interface TenGigabitEthernet1/0/1
CAK. Second, define the MKA policy
authentication port-control auto description ITA_MACsec_Client
(which I will not explain again here).
dot1x pae authenticator switchport mode trunk
Finally, bind the MKA policy and the
authentication linksec policy must-secure macsec network-link
mka policy ITA_MKA keychain as a pre-shared key and en-
mka policy ITA_MKA_Switch
spanning-tree portfast able MACsec on the physical interface mka pre-shared-key key-chain ITA
(Listing 2).
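To verify that the MKA session comes up and traffic is actually encrypted after a configuration like Listing 2, the switch provides several show commands. The exact set varies by platform and IOS XE release; on the Catalyst 9300 used here you would typically start with:

show mka sessions
show mka summary
show macsec summary
show macsec interface TenGigabitEthernet1/0/1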

Figure 3: MACsec frame setup with encrypted VLAN tag and payload. In addition to classic Ethernet headers, the 802.1AE header (security
tag) and an ICV are inserted.
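As an aside, the same static-key idea can be reproduced between two Linux hosts with iproute2, which is handy for lab testing. This is only a sketch: the interface name, peer MAC address, and keys are placeholders, both peers need mirrored settings, and production deployments would normally rely on MKA rather than manually configured SAKs:

ip link add link eth0 macsec0 type macsec encrypt on
ip macsec add macsec0 tx sa 0 pn 1 on key 01 11223344556677889900112233445566
ip macsec add macsec0 rx port 1 address 52:54:00:12:34:56
ip macsec add macsec0 rx port 1 address 52:54:00:12:34:56 sa 0 pn 1 on key 02 66554433221100998877665544332211
ip link set macsec0 up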


Secure remote connectivity with VS Code for the Web

Tunnel Tech
Connect to remote machines with Visual Studio Code for the Web through The extension ms-vscode-remote.
remote-ssh does the heavy lifting of
secure tunnels – no SSH needed. By Kevin Wittmer integrating with the local SSH client
for SSH host selection and connectiv-
The Visual Studio Code ecosystem established Remote-SSH VS Code ity. The .remote-ssh-edit extension
continues to thrive and expand. With extension that provides SSH connec- has a secondary purpose of providing
thousands of extensions available for tivity capabilities to virtual machines the ability to edit SSH config files. Fi-
download from the marketplace, de- (VMs) or containers and seamlessly nally, .remote-explorer shows a list of
velopers have a bewildering selection integrates with the standard function- available remote machines to which
to enrich their daily coding experi- alities of a local SSH client. you can connect.
ence. The Remote-Tunnels extension If you don’t already have the Remote- It’s worth noting that the .remote-ssh
[1] [2] is an example of powerfully SSH extension installed, use the extension uses the standard facili-
expanding a developer’s remote ca- --install-extension argument on the ties of a locally available SSH client.
pabilities. This extension integrates command-line interface (CLI): Therefore, an SSH client must be
with vscode.dev hosting Visual Stu- installed and available in the environ-
dio online services, allowing GitHub code --install-extension U ment runtime path. The conventional
users to access a lightweight Visual ms-vscode-remote.remote-ssh steps of SSH key generation and
Studio (VS) Code experience directly distribution apply here. Please see
through a web browser. Pairing the Of course, you can also access the the “SSH Key Generation Refresher”
remote tunneling capabilities of VS Extensions view in the VS Code user boxout for a quick refresher. By
Code Server with GitHub integrated interface (UI) to search and install. convention, assume that the .ssh
cloud services powers this secure This command-line action will result subdirectory exists in the user’s home
remoting capability. As a result, in the installation of three extensions. directory. The SSH config file residing
developers can establish remote con- To verify that these extensions were in this subdirectory references the pri-
nectivity to Linux servers hosted in installed successfully, enter: vate key data file associated with the
private networks, further expanding target SSH host as such:
the possibilities of their development code --list-extensions
workflows. In this article, I unpack IdentityFile U
the details. The output dumped to the console ~/.ssh/my_id_ed25519-remote-server
will be all the extensions currently

Remote SSH Connectivity installed, including the three newly The file path separators in the Identi-
installed extensions: tyFile directive assume a Linux shell
Revisited context.
Before diving into the developer ms-vscode-remote.remote-ssh The .remote-explorer extension UI
experience of a web-based VS Code ms-vscode-remote.remote-ssh-edit will render the SSH hosts defined in
environment, I’ll briefly revisit the ms-vscode.remote-explorer the config file. Any SSH hosts with


a valid definition and up-to-date offering allows access to a develop- SSH Key Generation Refresher
private key are available for connec- ment environment over a secure, non-
Below is a quick refresher for getting SSH
tion. Select an SSH host entry and hit SSH tunnel.
keys and interacting with SSH configuration:
Enter to initiate a connection. What Figure 1 shows the integrated compo-
1. Generate SSH key-data values (public and
happens behind the scenes when you nents and cloud services, including
private):
successfully authenticate via SSH, the Remote-Tunnels extension, the VS
in a nutshell, is that VS Code will Code CLI, and VS Code Server. From ssh-keygen -t ed25519 -a 100 -f
copy and install the VS Code Server the illustration, it’s evident that this ubuntu-sre-id_ed25519 -q -N
component into the target Linux remote access model necessitates a
2. Copy the SSH public key data on the tar-
environment. This footprint, well GitHub user account, which most get server to the authorized_keys file
beyond 100MB, represents the run- folks already have; if not, creating a in the $HOME/.ssh directory. You can use
time software for Visual Code Server. new GitHub user account is simple. the secure copy step to complete this.
From the SSH remoting scenario, you 3. Copy the SSH private key data to the local
can typically locate the binary dis- Getting Started system/env, where the SSH client is also
tribution in the hidden subdirectory installed.
.vscode-server (on the target SSH To begin with a streamlined server- 4. Update the config file containing the SSH
host for the specified users). Key enti- side approach, download the VS session server details. The file follows the
ties that land in the .vscode-server Code CLI. For the scope of this ar- SSH format and structure:
subdirectory include: ticle, I zero in on Linux-based serv-
Host Ubuntu-SRE_Penguin
ers. This CLI initializes the VS Code
User penguin
server through a download and basic
HostName 127.0.0.1
setup tailored for remote connectiv-
Port 3092
ity (similar to the SSH scenario).
IdentityFile "/Users/penguin/.ssh/
Manual setup and registration steps
ubuntu-sre-id_ed25519"
are required within the server envi-
ronment to associate with an active
GitHub web user session. Behind the binaries. Next, log in and cd into the
scenes, the VS Code Server estab- target destination subdirectory (pos-
lishes a secure tunnel with AES en- sibly creating a new subdirectory as
cryption. I’ll detail the commands to you go). Download the VS Code CLI
implement this remote setup in the from the VS Code download page.
subsequent sections. You can also use a tool like wget to
download the binaries directly. For
The remote SSH extension will popu- Remote Tunneling example, to download the Insiders
late the Remote user interface view edition of VS Code, enter:
on the basis of the config file contents
Step-by-Step
maintained in the .ssh subdirectory. To get started, first identify in which wget -O vscode_cli_alpine_x64_cli.tar.gz U
You must refresh or relaunch VS Code Linux user and user subdirectory 'https://code.visualstudio.com/sha/U
for the target services to appear in the to locate the VS Code Server CLI download?build=insider&os=cli-alpine-x64'
list view.

VS Code in
Your Browser
SSH connectivity
is a dependable
and secure ap-
proach to enabling
remote connectiv-
ity for VS Code
developers. How-
ever, a non-SSH
alternative is now
available through
the VS Code Web
experience (hosted
vscode.dev). This Figure 1: Screen capture of the SSH extension view from VS Code.
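For step 2 of the SSH key refresher, ssh-copy-id (or a manual append over SSH) is the usual way to install the public key in authorized_keys on the target host. The user name matches the refresher box; the address is just a placeholder:

ssh-copy-id -i ubuntu-sre-id_ed25519.pub penguin@192.0.2.10

# Manual alternative if ssh-copy-id is not available
cat ubuntu-sre-id_ed25519.pub | ssh penguin@192.0.2.10 \
  'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'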


Note that if you want the stable edi- As a side note, you can also source the code tunnel --name remote384 U
tion of VS Code, then change the standard version of VS Code from your --accept-server-license-terms
build query string to stable. local package manager. For example,
Next, unpack the compressed TAR you can install VS Code with snap: I suggest a naming convention,
with the command: something like remoteXYZ. Avoid
sudo snap install code --classic using a naming pattern with tunnel
tar xzvf vscode_cli_alpine_x64_cli.tar.gz because the VS Code remote exten-
The next step is to set up the tunnel. sion already inserts the string tun-
If you want to make the binary avail- First, inspect the tunneling command- nel as a fixed part of the URI base
able at the system level, one option is line configuration options (Table 1): path. Monitor the log output from
to copy the code executable to /usr/ the startup phase of the VS Code
local/bin. Once you have copied the code tunnel --help Server process and scan for the de-
binary to a preferred location, verify vice code:
the binary state and confirm the ver- Now register and establish the tunnel
sion of VS Code: while accepting first-time licensing *
services. Be sure to name the tunnel, * Visual Studio Code Server
code --version and also remember the name! It will * ..
become important when you transi- * To grant access to the server, please U
This command prints the code version tion into a browser context. In my ex- log into https://github.com/login/device U
and the final build commit. ample, I named my tunnel remote384: and use code 0123-9A8B

Table 1: Tunnel Commands and Arguments


Command or Argument Function
Most Useful Commands
code tunnel --help Dump tunnel command-line help contents to standard output.
code tunnel --accept-server-license-terms Create a tunnel that is accessible to vscode.dev (integrates with GitHub.com). Accept the
--name remote384 license terms and respond to the browser UI prompting to enter the code from initial CLI
interaction. It is highly recommended to name your tunnel; otherwise, a host name will often
be taken. You can also specify the argument --random-name to randomly name a tunnel for
the port forwarding service.
code tunnel status Show the status of the tunnel, including the connection status.
code tunnel restart Restart all tunnels running locally.
Useful to Clean Up, Uninstall
code tunnel kill Stop all tunnels running locally.
code tunnel unregister Remove this machine’s association with the port forwarding service.
code tunnel prune Delete all VS Code servers that are NOT currently running.
code tunnel rename Rename the specific tunnel.
Useful Arguments (e.g., to inspect verbose comments and debug info)
--cli-data-dir Directory where CLI metadata should be stored [env: VSCODE_CLI_DATA_DIR=]. The
default is normally the .vscode-cli home subdirectory. Use this CLI argument to change
to override and set an alternative path. Default home show/appears: /home/kevin/.
vscode-cli/server-stable/.
--log <level> Log level to use (possible values: trace, debug, info, warn, error, critical, off).
--verbose Print verbose output during code tunnel execution.
Linux Background Service Management
code tunnel service install Install the tunnel service on this local machine.
code tunnel service log Dump service log contents to the console.
code tunnel service uninstall Uninstall the tunnel service on this local machine.
… with systemctl
systemctl --user restart code-tunnel.service systemctl to stop, start, restart, and get status.
systemctl --user status code-tunnel.service Manage at the user level.
systemctl --user stop code-tunnel.service Note differences between user and system.
systemctl --user --state=running | grep code Search for running Visual Studio Code server instances.
sudo loginctl enable-linger $USER Ensure the service stays running after you disconnect.
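Pulling the Table 1 entries together, a typical tunnel lifecycle from the CLI looks roughly like this (the tunnel names are only examples):

code tunnel --accept-server-license-terms --name remote384   # register and start in the foreground
code tunnel status                                           # check that the tunnel is connected
code tunnel kill                                             # stop all tunnels running locally
code tunnel unregister                                       # remove the machine from the port forwarding service
code tunnel prune                                            # delete servers that are not currently running

# Or run it as a user-level background service instead
code tunnel service install --accept-server-license-terms --name remote32
sudo loginctl enable-linger $USER
systemctl --user status code-tunnel.service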


Once the device


code is printed to
the console, copy
it to your text
buffer and switch
back to the active
GitHub.com ses-
sion in your web
browser.
If you haven’t
already, log in to
your GitHub ac-
count from the
web browser
where you plan
to manage your Figure 2: GitHub.com prompts for authorization.
remote session;
then, input the authorization code user state, you could
to the GitHub device login page [3] also see a dialog box
(Figure 2). confirming authoriza-
Once you verify the device code, tion. Opening the remote
the system will ask you to confirm VS Code for Web session
GitHub authorizations for VS Code URL in a web browser
(Figure 3). The default authorizations will initiate the VS Code
grant full control over namespaces Server download to the Figure 3: Authorize GitHub for VS Code access.
and access to read org projects, view remote environment
user profile data, manage private re- (Figure 4). As the messaging illus- the menu navigation hierarchy is the
pos fully, and update GitHub Actions trates (Listing 1), studying the log icon with three vertical bars (ham-
workflows. output can also confirm this behavior. burger menu) anchored to the top of
If you forget to specify the argument Part of session initialization will also the Activity Bar (Figure 6).
--accept-server-license-terms, you icon with three vertical bars (ham-
will be prompted to accept (y/n) the ms-vscode.remote-server, in the local hamburger menu, navigate to the File
conditions. Similarly, if you don’t server-side environment. To view the | Open Folder (Figure 7). The Inte-
specify the name argument, the sys- list of extensions installed as part of grated Terminal is also available in
tem will prompt you to name the tun- the initial setup, including the setup the Terminal submenu item. You can
nel (aka machine name). of the remote tunnel, switch back open multiple terminal windows from
Once acknowledged, VS Code for to the VS Code for the Web user in- the browser, which display as tabs
Web indicates the “device” is now terface, then browse to the running near the bottom-right corner.
connected and that you are all set. For extensions view (Figure 5). VS Code supports comprehensive key-
example, a console message will print board mappings to shortcut virtually all
the VS Code for Web URL (https:// VS Code Through a Browser functions available in VS Code. With
vscode.dev/tunnel/remote384). few exceptions, VS Code for the Web
You will be prompted to allow GitHub The basic layout of the VS Code for supports these keyboard mappings.
account access as part of the sign-in the Web user interface hosted at Typically classified as a code editor, VS
process. Depending on your browser vscode.dev differs from its desktop Code provides enhanced debugging
sibling. Instead of a fixed top-level capabilities through language and tool-
menu with File, Edit, Selection, and specific extensions. Figure 8 illustrates
so forth, the launching point to access a Python debugging session in a VS

Listing 1: VS Code Download Log

[rpc.0] Checking /home/kevin/.vscode/cli/servers/Stable-f1b07bd25dfad64b0167beb15359ae573aecd2cc/log.txt and /home/kevin/.vscode/cli/servers/Stable-f1b07bd25dfad64b0167beb15359ae573aecd2cc/pid.txt for a running server...
[rpc.0] Downloading Visual Studio Code server -> /tmp/.tmp9ANcrP/vscode-server-linux-x64.tar.gz
[rpc.0] Starting server...
[rpc.0] Server started
.. ..

Figure 4: Feedback showing the VS Code Server download.


Figure 5: Viewing running extensions (in particular, extensions installed by default).

Figure 6: Access the menu hierarchy from the hamburger menu.

Figure 7: Cascading Open Folder submenus.

Figure 8: The debugger facilities rendered in VS Code for the Web.

Considering mainstream tasks and capabilities, VS Code for the Web is on par with its desktop sibling. (See also the "Managing Long-Running Tasks" boxout.)

VS Code Server in the Background

The previous command series demonstrated how to interact manually with VS Code for the Web running in a foreground context. To make this a robust configuration, place VS Code Server in the background, configure it to auto-start from a user context, and verify that VS Code is available in the shell search path:

which code

To configure VS Code to run in the background, install the back-end program as a service:

code tunnel service install --accept-server-license-terms --name remote32

Similar to the earlier example, the arguments --accept-server-license-terms and --name <NAME> are available for first-time setup. Omitting either command-line argument will result in interactive prompting before installation.
The output of this service-oriented installation step will generate additional details regarding systemd unit files (Listing 2).
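If you want to inspect the unit file the installer just generated, standard user-level systemctl commands work here; this is not part of the article's walkthrough, just stock systemd tooling:

systemctl --user list-unit-files | grep -i code   # the generated code-tunnel.service should be listed
systemctl --user cat code-tunnel.service          # print the unit file contents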


Listing 2: Back-End Installation Output

*
* Visual Studio Code Server
* ..
*
[2023-10-07 22:40:26] info Successfully registered service...
[2023-10-07 22:40:26] info Successfully enabled unit files...
[2023-10-07 22:40:26] info Tunnel service successfully started
[2023-10-07 22:40:26] info Tip: run `sudo loginctl enable-linger $USER` to ensure the service stays running after you disconnect.

Managing Long-Running Tasks

By nature, remote tunnels are susceptible to disconnection events, especially if you are working in a dynamic environment where you might need to close your notebook. Consider the screen terminal multiplexer to manage such long-running tasks. Figure 9 illustrates the Redis project open in a VS Code for the Web session with the long-running make process executing in an integrated terminal. In this case, the build process is running for the Redis project, which is an extensive open source software system with many C source files and several build steps. In this scenario of using the screen utility, disconnection events will not cause the build process to bail out. Instead, the process can continue in the background, and the user or developer can later recall the screen as needed.

Figure 9: Building the Redis open source project in VS Code for the Web.

Additionally, please pay attention to the loginctl tip it provided. Specifying the enable-linger command will enable this user-oriented background service to pre-initialize before any user login activity. In other words, the tunnel will be available in the background after a complete system startup, after a reboot, or after the user (or developer) that initiated this tunnel service logs out.
To keep track of user sessions commenced on a particular Linux host, use this:

sudo loginctl list-sessions

Also, note that you can monitor and observe the most recent log contents of the VS Code server with the command:

code tunnel service log

Listing 3 is example log output from the VS Code Tunnel service. You can check the status of the tunnel with the status argument, which is available to both the code tunnel and code tunnel service commands. This example uses the tunnel service context:

code tunnel service status

The tunnel status output (Insiders edition of VS Code) is shown in Listing 4. You will see an error status if no tunnel process is running.
You can also check the status with systemctl by specifying the --user argument:

systemctl --user status code-tunnel.service

Note that this status command's output is identical to the previous tunnel log command.

Bouncing VS Code Server

In some instances you might need to restart the VS Code Server process. By bouncing the tunnel and monitoring the logs, you will observe that this action initiates a new connection to vscode.dev.
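When bouncing the tunnel this way, the article uses code tunnel service log for the log trail; as an alternative that is not specific to VS Code (standard systemd journal tooling for user units), you can follow the unit's journal while restarting it:

systemctl --user restart code-tunnel.service
journalctl --user -u code-tunnel.service -f   # Ctrl+C once the new vscode.dev connection appears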


Listing 3: VS Code Tunnel Log Output


code-tunnel.service - VS Code Tunnel
Loaded: loaded (/home/kevin/.config/systemd/user/code-tunnel.service; enabled; vendor preset: enabled)
Active: active (running) since .. ..
CGroup: /user.slice/user-1000.slice/[email protected]/app.slice/code-tunnel.service
|-- 33011 /usr/local/bin/code --verbose --cli-data-dir /home/kevin/.vscode/cli tunnel service internal-run
|-- 35321 sh /home/kevin/.vscode/cli/servers/Stable-f1b07bd25dfad64b0167beb15359ae573aecd2cc/server/bin/code-server --connection-tok>
...
Oct 14 15:20:43 ubuntu22-vbox code[33011]: [2023-10-14 15:20:43] debug [tunnels::connections::ws] sent liveness ping
Oct 14 15:20:43 ubuntu22-vbox code[33011]: [2023-10-14 15:20:43] debug [tunnels::connections::ws] received liveness pong
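For scripting, a compact health probe avoids parsing the verbose status output shown in Listing 3. These are ordinary systemctl queries, not commands from the article:

systemctl --user is-active code-tunnel.service    # prints "active" and exits 0 while the tunnel runs
systemctl --user is-enabled code-tunnel.service   # confirms the unit starts automatically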

This process will also produce log messages that display the vscode.dev URL.
As previously mentioned, you can use systemctl commands to manage the service. To restart the service and check the service status, extracting the specific URL of the web-based tunnel, use:

systemctl --user restart code-tunnel.service
systemctl --user status code-tunnel.service | grep vscode.dev

On restarting the service, the console will display the link, typically formatted as https://vscode.dev/tunnel.

VS Code Server Hidden Parts

Like SSH remote tunneling, web-based tunneling methods also use hidden subdirectories for VS Code Server installation and runtime hosting. Initializing VS Code Server for the first time creates hidden subdirectories in the filesystem named .vscode-server and .vscode. In this context, the main subdirectory path that houses the server components is:

.vscode/cli/servers/Stable-<40-char-hash>

If you have opted for an Insiders edition of Visual Studio Code (i.e., the nightly build), the child server subdirectory will begin with the Insiders prefix rather than Stable, but the breakdown of the directory structure is otherwise the same. Additionally, a hidden subdirectory called .vscode-server contains folders for data and extensions. Extensions added by the user will land in the extensions subdirectory.

Listing 4: Tunnel Status Output

{"tunnel":
{"name":"remote32",
"started_at":"2023-10-14T03:49:25.040529254Z",
"tunnel":"Connected",
"last_connected_at":"2023-10-14T15:11:02.730231871Z",
"last_disconnected_at":null,
"last_fail_reason":null},
"service_installed":true}

Installing Extensions from the CLI

Although many users opt to install extensions in the Extensions pane of the VS Code for the Web UI, performing this task from the command line is also possible. To do so, start by determining the subdirectory location of the VS Code Server instance relevant to the user context:

which code-insiders
/home/kevin/.vscode-insiders/cli/servers/Insiders-c72447e8d8aaa7497c9a4bd68bc4301584b92beb/server/bin/remote-cli/code-insiders

The output provides the path, which includes a unique 40-character hash, indicating the location of the VS Code Server. Navigate to the directory as shown in the output:

cd /<path-to-directory>/code-insiders

Use the code-insiders command to install your desired extension. For instance, to install the Python extension for this specific user instance, use:

code-insiders --install-extension ms-python.python

Note that the installation of the ms-python.python extension will trigger the installation of Pylance. These extensions will land in the .vscode-server/extensions subdirectory.
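To poke around these hidden locations from the shell, the sketch below uses only the paths named above plus the VS Code CLI's standard --list-extensions switch; treat the exact directory names as illustrative, because they carry per-installation hashes:

ls -d ~/.vscode/cli/servers/*/      # one Stable-<hash> (or Insiders-<hash>) directory per downloaded server
ls ~/.vscode-server/extensions/     # extensions installed into the remote server environment
code-insiders --list-extensions     # ask the CLI itself which extensions are registered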


Clean Up VS Code Server

The tunnel provides command-line capabilities to uninstall and delete VS Code Server. From the command line, uninstall the service and remove the server-side bits:

code tunnel service uninstall
code tunnel unregister
code tunnel prune

The last two commands unregister the machine and environment (note that this requires a connection to the network and Internet) and remove the (local) VS Code Server instances. The hidden subdirectory will remain. Deleting the vscode-related hidden directories will purge the last remaining bits:

sudo rm -fr .vscode/
sudo rm -fr .vscode-server/

VS Code Server and its remote extensions provide significant advantages for remote development, but as the saying goes, with greater power comes greater responsibility. Integration with external vscode.dev cloud services is particularly beneficial for isolated, transient cloud server environments, making it ideal for individual developers or small teams in fast-paced development or testing settings. Such scenarios might be its most suitable application, keeping it distant from the critical systems of enterprise IT.

Info
[1] VS Code Marketplace: Remote – Tunnels: [https://marketplace.visualstudio.com/items?itemName=ms-vscode.remote-server]
[2] Developing with Remote Tunnels: [https://code.visualstudio.com/docs/remote/tunnels]
[3] GitHub device login page: [https://github.com/login/device]

Author
Kevin Wittmer, a chief IT specialist at Bosch Group, has a strong affinity for all aspects of Visual Studio Code.


Incident response with Velociraptor

The Hunter
The software incarnation of the feared predator in the Jurassic Park movies has been on the hunt for
clues to cyberattacks and indicators of compromise. We show you how to tame the beast and use it for
your own purposes. By Matthias Wübbeling

From the IT department's point of view, it always makes sense to have an overview of your company's IT infrastructure – or at least be able to create one in a timely manner. In the immediate aftermath of an IT security incident, you need information quickly about which systems an attacker may have accessed and which systems are still operational. Department staff can then look specifically for indicators of compromise (IoCs) with the help of Velociraptor [1].
The developers cite two well-known tools as the basic idea for their own software: the GRR Rapid Response (GRR) [2] incident response tool and the OSQuery [3] monitoring tool. GRR lets you hunt for IoCs and run them over a period of time on all clients connected to your network. The reports are sent to a centralized server where they are available to admins. OSQuery, on the other hand, lets you query information from your clients in a language similar to SQL. The tool provides information in more than 275 tables – from CPU data to network settings (e.g., DNS or static routes) to installed Chrome extensions – you can find out pretty much everything about your systems.
Velociraptor now aims to combine the capabilities of GRR and OSQuery into one tool, while being faster, smaller, more scalable, and easier to install. Like GRR and OSQuery, the software works independent of the selected operating system and comes with virtually no dependencies. Beyond the functionality of GRR and OSQuery, it is possible for defined events to trigger queries and to use the Velociraptor Query Language (VQL), both to execute queries in the sense of OSQuery and to transfer files, modify systems and settings, and control the entire client-server infrastructure.

Quick Install

The architecture of a Velociraptor installation is simple: A centralized server maintains a permanent command and control connection to all the devices (clients) in your IT infrastructure. The entire installation is controlled over a web interface, which is where you configure settings, define and start hunts, and document and edit incidents.
To test Velociraptor, first install the server in a Docker container with the Compose tool. Prepared files are available online [4]. Clone the repository and adapt the ENV file to your environment. The web interface in the Docker container is accessible over port 8889. After opening it in the browser, you can accept the certificate self-signed by Velociraptor and enter the credentials stored in the ENV file for authentication. The default combination is admin/admin. Of course, you will want to change these credentials for production systems.
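The server bring-up itself boils down to a couple of Compose commands once the prepared files are in place. The directory name below is a placeholder for wherever you cloned the repository, so treat this as a sketch rather than the project's exact procedure:

cd velociraptor-docker    # hypothetical: the directory holding the prepared Compose and ENV files
vi .env                   # adapt credentials, ports, and paths to your environment
docker compose up -d      # start the Velociraptor server container in the background
docker compose ps         # confirm the container is up before browsing to port 8889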


To install the clients, simply use the appropriate binary for your operating system from the container. The ./velociraptor/clients folder is included in the container to help, and binaries with the corresponding certificates and configuration are created and stored at startup. For Microsoft systems, an MSI file lets you distribute to your clients by Active Directory.
For Linux systems, copy the ./velociraptor/clients/linuxvelociraptor_client and ./velociraptor/client.config.yaml files into a standard directory (e.g., /tmp). For macOS or Windows, look for the appropriate binary. Next, look into the configuration file and check the specified server URL and the settings that start with writeback_. Your local user must have write permissions to the directory configured there. If necessary, adjust the path (e.g., to /tmp/etc/velociraptor.writeback.yaml on Linux) and then create the /tmp/etc/ folder with appropriate permissions.
Now start the client with the command

./velociraptor_client --config client.config.yaml client -v

which passes the configuration file with --config and activates the detailed output in the terminal with -v. Watch the client start up and then check the web browser to see whether the client is connected. If you cannot see the client directly in the display, use the search function at the top of the page. Just click on the magnifying glass without entering a search string, and you should get a list of connected clients. After clicking on the ID of the client, you are taken to the detailed view with further information, such as when and with which IP address the client logged in. You will also see the operating system, hostname, and architecture of the system. If you click >_Shell in the upper right corner, you can execute commands on the system if it is connected at the time. Try the commands

uname -a
id

and look at the return values by clicking on the eye icon. Clicking on the Logs link highlighted in green will take you to detailed information of the runtime.

Velociraptor Query Language

VQL is based on SQL and is used to control the entire environment. You can use it to query information from the clients, control monitoring and automated response technologies on the clients, and control the Velociraptor server itself. Because of space limitations, I only allow myself to request simple information. As before, you could send queries to a client in >_Shell; instead, select VQL from the drop-down list next to the input field and execute the query

SELECT * FROM info()

In the resulting table, you will see information for your client system. Instead of an asterisk (*), you can specify single fields or multiple fields separated by commas, as in SQL. Unlike SQL, however, you will not be using tables; rather, plugins present the information to you as a table. In this example, the query uses the info plugin. If you first enter a ? in the input line instead of a plugin name, you will see a list of available plugins from which to choose. You can specify arguments in the parentheses of info() if the plugin requires additional information.
As with SQL, you can use a filter expression with WHERE to further narrow the results. VQL also supports aliases or subqueries, as well as constructs such as if-then-else or foreach. In this way, even complex queries can be displayed in a simple, structured manner.

Gone Hunting

The Hunt Manager in the sidebar is where you create hunts in a guided dialog; you might already be familiar with this procedure from GRR. After entering a short description and setting the selection criteria for the clients, select one or more artifacts, configure parameters for the hunt, and specify any runtime constraints, such as a share of processor time or a maximum runtime on individual clients. After you have checked your JSON-formatted search once again, launch it by clicking Launch Hunt and Start.
To search for the preselected artifacts and leverage the power of VQL at the same time, select Generic.Client.VQL as the artifact. You can enter arbitrary queries in the Configure Parameters item by clicking on the wrench icon. In the hunt overview you can then monitor the progress of your hunts and view the results.
If you're working on a recent incident, you will want to do more than run individual queries; in fact, you will probably want to document them systematically. Velociraptor offers notebooks for this purpose. Select the appropriate item in the left menu and create a new notebook by clicking the plus symbol (+). Notebooks consist of various cells, such as Markdown cells for documentation and VQL cells for query definition. The queries are executed directly. When combined with the information from the Markdown cells, you can create complete reports for your investigation in next to no time – and always with up-to-date results. Notebooks can also be shared with multiple users.

Conclusions

Velociraptor provides many useful features and a powerful query language to monitor and query almost all aspects of your IT infrastructure. The tool combines the functionality of GRR Rapid Response and OSQuery and extends both in a very useful way.

Info
[1] Velociraptor: [https://github.com/Velocidex/velociraptor]
[2] GRR: [https://github.com/google/grr]
[3] OSQuery: [https://github.com/osquery/osquery]
[4] Velociraptor files: [https://github.com/Velocidex/velociraptor/releases]


Agentless automation with


Event-Driven Ansible

On Call
Event-Driven Ansible is a reactive extension that uses events to launch automations. We explain the ruleset and
present examples that show how to monitor logs and call other tools. By Andreas Stolzenberger

The Ansible automation platform does not require an agent on the systems to be managed, which is both a curse and a blessing. Without a client, Ansible can control any system directly by SSH and Python, or even Windows Remote Management (WinRM) and PowerShell, whereas other platforms like Puppet or Salt need their agents. On the other hand, tools with agents can react quickly to incidents on the managed system because the client is in constant communication with the automation tool. These applications also take a "reactive" approach, which Ansible has been unable to do thus far because of its push-only architecture. The community's new Event-Driven Ansible (EDA) [1] project now aims to change this situation, while still doing without an agent.

Modular Architecture

Red Hat first introduced the concept of EDA as a Developer Preview at the AnsibleFest user and developer conference in Fall 2022. EDA adapts the "state machine" concept as used by ManageIQ, among others, wherein the state machine is triggered by an event (e.g., a log message with error). It then checks one or more conditions (e.g., if alerting.service=dns) and then initiates appropriate actions (e.g., restart named.service).
EDA listens for events from one or more sources. Like regular Ansible, EDA is modular and can handle different sources. As of today, very few of these source plugins are around; then again, EDA is still in its infancy. The EDA website documents how you can program your own source plugins [2]. Therefore, the number of available sources should increase steadily now that EDA has been officially released [3]. Spoiler: This article is based on a developer pre-release version of EDA. Some of the procedures described here may have changed in the official release.
The plugins already available include, for example, ansible.eda.webhook, which lets Ansible tap into web applications such as GitHub. A Git commit in a web repository can therefore trigger a playbook in the data center. The powerful ansible.eda.kafka source plugin lets EDA tap into the Apache Kafka message bus. I use this plugin in one of the examples here. Additionally, some initial plugins from commercial manufacturers have already been made available, such as ansible.eda.dynatrace, ansible.eda.aws_cloudrail, and ansible.eda.azure_service_bus.
Others under development as of May 2023 were not in the beta build I used when I wrote this article, such as ansible.eda.journald for listening to systemd log messages. However, by the publication date of this article, this plugin should be usable.

Ruleset

EDA introduces a new variant of Ansible's YAML-based programming language for playbooks in the form of rulebooks, in which you describe the sources to which you want EDA to listen and how it will react to events.


They are executed by a program that was also newly developed: ansible-rulebook [4]. Rulebooks are not launched like playbooks. A rule runs permanently, like a daemon, because it permanently monitors the specified source.
In practice, users are likely to use rulebooks in containers. In addition to the rulebook runtime, the containers also have ansible-runner to execute the defined action playbooks in the active container. Alternatively, the rulebook addresses the API of an Ansible controller (AWX instance) to launch a job template there. As of writing, EDA still worked separately from these web user interface (UI) instances. Integration was planned to take place after the official release.

Numerous Dependencies

The regular Ansible playbook gets by with Python plus some Python libraries, but the current pre-build of EDA took me into entirely different realms of code and runtime dependencies. Because Python (from version 3.9) alone is not sufficient for EDA, it uses the free Drools [5] business rules management engine. Besides Python, the EDA container also required version 17 of the Java runtime for Drools.
For the Python part of EDA to be able to communicate with its Java part, the Java-to-Python bridge jpy [6] is also required. To compile jpy in the container, you then need the Maven software project manager, whose RPM package in Enterprise Linux 8 (EL8) requires the old Java 1.8 runtime instead of 17. In other words, the EDA Developer container came with Python 3.9 including the GCC development environment, make, and two Java runtimes. This was also to change before the official release.
Fortunately, at the time of writing, a ready-made container image for EDA testing was already on quay.io/ansible/ansible-rulebook:main. The main tag is important here; the image with the latest tag used an even older version.

Log Monitoring by Message Bus

For this practical example, EDA monitors the system logs of multiple servers and responds to a wide variety of log entries. To access the logs, EDA could use the ansible.eda.journald source plugin; however, it would then have to keep active connections to all monitored servers open. In many places, administrators collect all logfiles in a centralized database anyway.
To ensure that the log data is not only available to a single service such as Logstash, you can deploy a message bus in the middle – in the best scale-out style: Apache Kafka. All log information passes through the message broker and target systems such as the log collection database, and EDA also connects to the message bus as a "subscriber."
If you are not yet familiar with this message bus technology, just think of Kafka as WhatsApp for applications. The logging systems join a chat group (topic) and post all their log entries in this group. Now "subscribers" such as EDA can also join the group and view and evaluate all the messages. In parallel, a log aggregator such as Logstash can retrieve all the messages in the group and forward them to a database such as Elasticsearch for archiving.
In this example, the Filebeat [7] tool first runs as a log shipper on the servers to be monitored. It takes care of turning unstructured log entries of system services into semi-structured messages such as Service=sshd or Message=Failed Login before Filebeat forwards them. The tool also caches all messages if the connection to the message bus goes down so that nothing is lost.

Figure 1: Logstash can retrieve logs from the Kafka message bus and back them up to Elasticsearch.


In many installations, Filebeat sends the data directly to Logstash, so it is not available to other services. However, switching to the message bus as a middleman does not involve too much overhead (Figure 1). It makes sense to do this in distributed, hybrid environments with multiple clouds and data center setups because it reduces the connections needed between the separate networks. Filebeat-Logstash turns into Filebeat-Kafka-Logstash, with the option for other services to listen in on Kafka.

From Filebeat to Kafka

Although large installations run Kafka as a cluster, a single container is fine for the test setup. You can optionally log the raw messages of the bus to a persistent volume. Messages usually end up with a log aggregator such as Logstash anyway, which is why the Kafka container works well in a stateless setup (i.e., without permanent storage).
Follow the instructions on the page of the image [8] and use the setup from the "Using the command line" section, which will set up Kafka with "KRaft" as quorum instead of Zookeeper. Unlike the example, enter the external IP address of your server where you run the Kafka container with Docker or Podman. Add an additional configuration line to advertise the external IP of your Kafka server:

KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.2.12:9092

(if your Docker/Podman server is running on 192.168.2.12). In a production environment, of course, the Kafka setup would not run in plaintext and would only allow authorized and encrypted connections. Once the Kafka container is running, configure the Filebeat clients on the servers to send their log information to Kafka. The /etc/filebeat/filebeat.yml file then looks something like:

filebeat.inputs:
- type: journald
  id: everything

output.kafka:
  hosts: ["192.168.2.12:9092"]
  topic: 'journals'
  partition.round_robin:
    reachable_only: false

Filebeat sends all the log messages to the message broker journals topic. Alternatively, Filebeat could send messages to different topics depending on the content, such as with the optional entry:

topics:
  - topic: "critical"
    when.contains:
      message: "CRITICAL"

As soon as Filebeat sends log data you can check – on the host with the Kafka container – whether the information is reaching the message bus correctly. To do this, simply run the console consumer in the active Kafka container, kafka:

podman exec -it kafka kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic journals --from-beginning

Now you should see the JSON-formatted log entries of your systems with the Filebeat setup. If you optionally want to collect all the log information with Logstash, the lines:

input {
  kafka{
    codec => json
    bootstrap_servers => "192.168.2.12:9092"
    topics => ["journals"]
  }
}

in the Logstash configuration are all it takes.

Creating an Inventory

In a terminal, you can start the EDA container interactively with:

podman run -it --name eda --volume /home/user/rules:rules:Z quay.io/ansible/ansible-rulebook:main /bin/bash

Now you can edit the rulebooks and playbooks on the host machine and test them in the interactive container.
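Returning to the Filebeat side for a moment: before pointing production servers at the bus, it's worth validating the shipper configuration. The following is not part of the article's walkthrough, just Filebeat's built-in configuration check plus a plain service restart:

filebeat test config -c /etc/filebeat/filebeat.yml   # syntax-check the YAML shown above
systemctl restart filebeat && systemctl --no-pager status filebeat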

Figure 2: The Kafka messaging bus's Stopped DNS event triggers the Ansible rule. The triggered playbook restarts the failed service.


By the way, this even works on Windows if you use a WSL distribution with Podman.
To begin, create an inventory in /home/user/rules. EDA currently only supports the YAML format for inventories and not the old INI format. In the test, two systems provide log information to Kafka, so the inventory.yml file looks like:

all:
  children:
    servers:
      hosts:
        srv1.local.ip:
          ansible_host: 192.168.2.1
        srv2.local.ip:
          ansible_host: 192.168.2.2
  vars:
    ansible_user: root

For the tests, Ansible logs in to the remote systems as the root user. In a production environment, there would be a user directory with sudo privileges and the playbook would use become. I list the hosts with their fully qualified domain names (FQDNs) in the inventory because that is how they are listed in the log entries on the Kafka bus and can therefore be assigned to the inventory. However, you need to specify the IP address for the Ansible connection. The example here shows how EDA can react in case of a DNS failure; in that case, it has to contact the host by the IP address, of course.

Setting Rules

The test setup can basically react to any log entry and run a matching Ansible playbook. This example restarts the dnsmasq DNS/DHCP server if it fails for any reason. To do this, EDA monitors the log messages from systemd. The rule_dns.yml rulebook starts with the Kafka source:

- name: Kafka Monitor
  hosts: all
  sources:
    - name: Kafka
      ansible.eda.kafka:
        host: 192.168.2.12
        port: 9092
        topic: journals

With this rulebook, ansible-rulebook later taps into the message bus and forwards incoming messages in the hierarchical variable events. Sources also have filters that can change the content or structure of the variables:

filters:
  - json_filter:
      exclude_keys: ['user']

For example, it would remove all event.user values from the source event. An important filter is insert_hosts_to_meta, which takes one or more host names from the source message and then uses them as limits when executing a playbook.
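Once the rulebook is complete, it is started from inside the container with ansible-rulebook against this inventory. The option names below follow the project's documentation [4]; the pre-release build used for this article may accept slightly different spellings:

ansible-rulebook --rulebook rule_dns.yml -i inventory.yml --verbose   # runs until interrupted, like a daemon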

An Ansible playbook started by EDA will therefore only contact those hosts previously transferred to the event.meta.host variable by insert_hosts_to_meta. This example does not use the filter, though, simply because it was not yet included in the test build of ansible-rulebook (0.11) used.
The test rule responds to the message that the DNS server has been stopped:

rules:
  - name: Monitor DNS Service
    condition: event.message is search("Stopped DNS", ignorecase=true)

If a message on the message bus now contains Stopped DNS (Figure 2), EDA steps in and runs an action:

action:
  run_playbook:
    name: start_dns.yml
    extra_vars:
      event_host: "{{ event.host.name }}"

In the example, EDA only starts one action. The rulebook could alternatively use the Actions keyword and then perform several actions in sequence. Because the complete event variable of the rulebook is not automatically available for the playbook, you have to use extra_vars to pass information from the rulebook variable into the playbook.
The referenced playbook starts the DNS server but needs to know on which host to start the service. As mentioned before, because the limit filter was not working when this example was created, I used a simple hack in the playbook instead, which explains why the start_dns.yml playbook looks like it does:

- hosts: "{{ event_host }}"
  gather_facts: no

  tasks:
    - name: Start DNS Service
      ansible.builtin.service:
        name: dnsmasq
        state: started

It simply takes the value from the event as the host variable and only performs the actions on the host that triggered the event.

Simple with Huge Potential

The example given is quite simple, but it shows the great potential of EDA. The rulebook used is, of course, not limited to one rule. You can react with further rules to combinations of source events and link the conditions to several sources (AND/OR) with something like:

condition:
  all:
    - event.host.name == "srv1.local.ip"
    - event.journald.process.name == "systemd"
    - event.systemd.unit == "dnsmasq.service"
    - event.message is search("Stopped DNS", ignorecase=true)

The event only triggers if all of the specified values match (AND). The any keyword starts the action if one of the specified conditions is true (OR). Actions also have more options. While developing your own rulebooks, you will often use print_event to view the complete event variable or parts of it and adjust your rules and playbooks accordingly.
The run_playbook action uses Ansible Runner to run a playbook in the EDA container. In the future, however, it will be far more interesting to launch an existing job template on an Ansible controller or an AWX setup, for which you can turn to the run_job_template action; it connects to an existing controller or AWX system with a URL and a token. Sooner or later, EDA is likely to find its way into the Controller and AWX web UIs and become a part of those tools.

Conclusions

Event-Driven Ansible architecture continues the idea of automating target systems without an agent. However, any kind of practical implementation is still complicated at the current early stage. Drools is a powerful rules engine and was definitely undertasked with EDA's previous simple capabilities. The question inevitably arises as to whether EDA really needs the complex setup with Python, Java, and the JPY bridge.
On the other hand, EDA is likely to add massive functionality in future versions, precisely because it is based on such a powerful rules engine. Thanks to this modular, open concept, it will be possible to use EDA in many scenarios with many different sources in the future – as long as users and developers continue to develop the tool and provide additional source plugins.

Info
[1] Event-Driven Ansible: [https://github.com/ansible/event-driven-ansible]
[2] Event source plugins: [https://ansible.readthedocs.io/projects/rulebook/en/stable/sources.html]
[3] EDA release: [https://www.ansible.com/blog/event-driven-ansible-is-here]
[4] Ansible rulebook docs: [https://ansible.readthedocs.io/projects/rulebook/en/stable/index.html]
[5] Drools: [https://www.drools.org]
[6] jpy: [https://pypi.org/project/jpy/]
[7] Filebeat: [https://www.elastic.co/beats/filebeat]
[8] Apache Kafka: [https://github.com/bitnami/containers/blob/main/bitnami/kafka/README.md]

The Author
Andreas Stolzenberger worked as an IT magazine editor for 17 years. He was the deputy editor in chief of the German Network Computing Magazine from 2000 to 2010. After that, he worked as a solution engineer at Dell and VMware. In 2012 Andreas moved to Red Hat. There, he currently works as principal solution architect in the Technical Partner Development department.

What’s your status (page)?

Custodian
Businesses with modern IT infrastructures can keep track of internal and external servers and services with black box monitoring by Monitoror, Vigil, and Statping-ng. By Ankur Kumar

Keeping the lights on round the clock in a modern IT infrastructure is pretty complicated. The usual procedure consists of running both internal and external servers and services to deliver end products and services. Keeping a close watch on each and every element of the running infrastructure is a necessity for any technology-driven business.
Modern monitoring solutions are devised to address the critical need to be proactive, rather than reactive, and spot problems before failures can impair a business. Having a status page for internal and external servers and services that provides a quick overview of what's failing where can keep IT teams on top of their infrastructure.
Most enterprise monitoring solutions are overkill and way too expensive for many companies, especially small to medium-sized businesses. In this article, I look at some amazing, free and open source solutions that set up various kinds of status pages performing black box monitoring. The only requirement to test these solutions on your IT infrastructure is a running Docker engine, which is pretty common nowadays.

Monitoror Wallboard

The first free open source status page solution I will talk about is Monitoror [1]. It is known as a monitoring wallboard because it is a single-page app comprising different colored rectangular tiles. Monitoror is mainly concerned with three kinds of general-purpose monitoring checks: ping, port, and HTTP.
The ping check verifies connectivity to a configured host, the port check verifies port listening on a configured endpoint, and the HTTP check makes GET requests to a URL. It also has special built-in checks for Azure DevOps, GitHub, GitLab, Jenkins, Pingdom, and Travis CI (continuous integration).
The wallboard highlights the configured tiles either in green or red according to a respective check passing or failing. To see Monitoror in action yourself, use a terminal command to create a Docker network for test container(s):

docker network create statuspage-demo

Next, create the monitoror_stack.yml and config.json files (Listings 1 and 2) [2] to launch a Monitoror stack and supply its configuration, respectively.

Listing 1: monitoror_stack.yml

01 version: '3.5'
02 services:
03   monitoror:
04     image: monitoror/monitoror:${MTRRTAG:-latest}
05     ports:
06       - "38080:8080"
07     environment:
08       - "MO_CONFIG=/etc/config.json"
09     restart: unless-stopped
10
11 networks:
12   default:
13     name: statuspage-demo
14     external: true

The Monitoror configuration file defines an arrangement of desired monitoring tiles in a given number of columns. If the columns are fewer than the number of tiles, then the screen is filled vertically, too. An array of tiles is defined to contain various monitoring checks on the wallboard. The PING, PORT, and HTTP-STATUS tiles are self-explanatory. A tile of type GROUP is defined that shows a single rectangular area to represent multiple checks. This

tile will be red when single or multiple checks in the group fail. static binary from its GitHub releases page and run
This kind of tile makes use of the limited page area, so you can the binary after making it executable. To launch the
pack more checks into the wallboard. Monitoror container and supply the required Monito-
The Monitoror documentation has complete information about ror configuration to demonstrate the tiles monitoring
tiles that can be monitored, along with their respective param- localhost in its container and itself running on port
eters, the information displayed, and so on. Even if you need 8080, execute the two commands in Listing 3.
to run Monitoror natively, just grab the appropriate Golang Now when you access the Monitoror wallboard page
in your browser at localhost:38080, you should see a
Listing 2: ÚűťĘĥƒŋƯűť page with monitoring tiles (Figure 1).
01 { I intentionally provided false IP addresses in the
02 "version": "2.0", demo config file to show how failing tiles look on the
03 "columns": 2, wallboard. Monitoror picks up config changes during
04 "tiles": [
its periodic checks. To see how the tiles change, with
05 { "type": "PING", "params": {"hostname": "127.0.0.1"}},
06 { "type": "PORT", "params": {"hostname": "129.0.0.1", "port": 8080}}, changing input, correct the invalid IP addresses and
07 { "type": "HTTP-STATUS", "params": {"url": "https://google.com"}}, introduce a typo in the HTTP-STATUS tile by modifying
08 { the config.json as in Listing 4.
09 "type": "GROUP", When you provide Monitoror the new configuration
10 "label": "localhost PING/PORT/HTTP Tests",
(second line of Listing 3), the wallboard should re-
11 "tiles": [
12 { flect the new configuration (Figure 2). Correcting the
13 "type": "PING", typo in the HTTP tile URL and copying the new con-
14 "params": { figuration should turn all tiles on the wallboard green.
15 "hostname": "128.0.0.1" The second demo arrays the Monitoror wallboard with
16 }
17 },
tiles monitoring some popular modern cloud servers.
18 { To launch single monitoring instances for OpenSearch,
19 "type": "PORT", Kafka, and Redis, modify the monitoror_stack.yml file
20 "params": { as in Listing 5 and the config.json file as in Listing 6.
21 "hostname": "127.0.0.1",
Use the commands in Listing 3 to launch and copy
22 "port": 8080
23 } the Monitoror config. You should see the wallboard
24 },{ with the new server tiles (Figure 3). You should now
25 "version": "2.0", be feeling at home with the elegant capabilities of
26 "columns": 2, Monitoror that let you get a monitoring wallboard up
27 "tiles": [
and running. The command
28 { "type": "PING", "params": {"hostname": "127.0.0.1"}},
29 { "type": "PORT", "params": {"hostname": "129.0.0.1", "port": 8080}},
30 { "type": "HTTP-STATUS", "params": {"url": "https://google.com"}}, docker run -it --rm -v /var/run/docker.sock:/var/run/U
31 { docker.sock:ro -v ./monitoror_stack.yml:/etc/compose/U
32 "type": "GROUP", monitoror_stack.yml:ro docker docker compose U
33 "label": "localhost PING/PORT/HTTP Tests",
-f /etc/compose/monitoror_stack.yml down
34 "tiles": [
35 {
36 "type": "PING", Listing 3: Monitoror Container and Config
37 "params": {
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro -v
38 "hostname": "128.0.0.1"
./monitoror_stack.yml:/etc/compose/monitoror_stack.yml:ro docker docker
39 }
compose -f /etc/compose/monitoror_stack.yml up -d
40 },
41 { docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro -v
42 "type": "PORT", ./config.json:/etc/monitoror/config.json:ro -v ./monitoror_stack.yml:/
43 "params": { etc/compose/monitoror_stack.yml:ro docker docker compose -f /etc/compose/
44 "hostname": "127.0.0.1", monitoror_stack.yml cp /etc/monitoror/config.json monitoror:/etc/config.json
45 "port": 8080
46 }
47 }, Listing 4: Diff of ÚűťĘĥƒŋƯűť
48 { 5,6c6,7
49 "type": "HTTP-STATUS", < { "type": "PORT", "params": {"hostname": "129.0.0.1", "port": 8080}},
50 "params": { < { "type": "HTTP-STATUS", "params": {"url": "https://google.com"}},
51 "url": "http://localhost:8080" ---
52 } > { "type": "PORT", "params": {"hostname": "127.0.0.1", "port": 8080}},
53 } > { "type": "HTTP-STATUS", "params": {"url": "https://gogle.com"}},
54 ] 15c15
55 } < "hostname": "128.0.0.1"
56 ] ---
57 } > "hostname": "127.0.0.1"


cleans up the running stack once you’re done play- and other options. To bring up Vigil quickly to see it in action, create
ing with Monitoror. the YML and CFG files shown in Listings 7 and 8.
This Vigil configuration with minimal necessary settings is self-
Vigil Status Page explanatory. The [server] section controls on which IP and port

The Monitoror single-page wallboard is a quick and


nice solution but has limited capabilities and is suited
to a relatively smaller number of servers and services.
The next free open source status page solution, Vigil
[3], is more mature and capable of handling large
numbers of servers and services with additional capa-
bilities, including branding, alerting, announcements,

Listing 5: Diff for monitoror_stack.yml


8a10,27
> - "MO_MONITORABLE_HTTP_SSLVERIFY=false"
> restart: unless-stopped
>
> opensearch:
Figure 1: Monitoror wallboard page.
> image: opensearchproject/opensearch:${OSRHTAG:-latest}
> environment:
> - "discovery.type=single-node"
> restart: unless-stopped
>
> kafka:
> image: bitnami/kafka:${KFKATAG:-3.2.3}
> environment:
> - "ALLOW_PLAINTEXT_LISTENER=yes"
> - "KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092,CONTROLLER://:9093"
> - "KAFKA_ENABLE_KRAFT=yes"
> restart: unless-stopped
>
> redis:
> image: redis:${RDSSTAG:-latest}
> command: "redis-server --save 60 1 --loglevel warning"
Figure 2: Monitoror wallboard page with corrected IPs.

Listing 6: Diff for config.json


< "columns": 2, < "params": {
--- < "url": "http://localhost:8080"
> "columns": 3, < }
5,7d4 < }
< { "type": "PING", "params": {"hostname": "127.0.0.1"}}, ---
< { "type": "PORT", "params": {"hostname": "127.0.0.1", "port": 8080}}, > {"type": "PING", "params": {"hostname": "opensearch"}},
< { "type": "HTTP-STATUS", "params": {"url": "https://google.com"}}, > {"type": "PORT", "params": {"hostname": "opensearch", "port": 9200}},
10c7 > {"type": "PORT", "params": {"hostname": "opensearch", "port": 9600}},
< "label": "localhost PING/PORT/HTTP Tests", > {"type": "HTTP-STATUS", "params": {"url": "https://admin:admin@
--- opensearch:9200"}}
> "label": "opensearch PING/PORT/HTTP Tests", > ]
12,30c9,28 > },
< { > {
< "type": "PING", > "type": "GROUP",
< "params": { > "label": "kafka PING/PORT Tests",
< "hostname": "129.0.0.1" > "tiles": [
< } > {"type": "PING", "params": {"hostname": "kafka"}},
< }, > {"type": "PORT", "params": {"hostname": "kafka", "port": 9092}}
< { > ]
< "type": "PORT", > },
< "params": { > {
< "hostname": "127.0.0.1", > "type": "GROUP",
< "port": 8080 > "label": "redis PING/PORT Tests",
< } > "tiles": [
< }, > {"type": "PING", "params": {"hostname": "redis"}},
< { > {"type": "PORT", "params": {"hostname": "redis", "port": 6379}}
< "type": "HTTP-STATUS",


Vigil is running with a defined num- Telegram, XMPP, Webex). The test provides. The probe section has vari-
ber of parallel workers. The [brand- configuration uses a random web- ous subsections to group and define
ing] section contains various settings hook (it will be different for you) gen- your various ICMP, TCP, and HTTP
for the status page header (e.g., com- erated through the random URL and probes against various hosts and
pany name, logo, website). The [met- email address generator Webhook.site, endpoints provided in the replica ar-
rics] section defines various polling so you can see some events generated ray. Vigil provides a script probe as
parameters for the Vigil probes. by Vigil during testing. well to cover monitoring checks not
Vigil notifies you of the different The Vigil GitHub project provides served by other probes. The Vigil
monitoring events emitted in differ- a complete configuration file [4], GitHub project page provides a de-
ent ways (e.g., email, Twilio, Slack, so you to move quickly through all tailed description of all the configu-
the settings it ration settings.
Listing 7: vigil_stack.yml
01 version: '3.5'
02 services:
03
04 vigil:
05 image: valeriansaliou/vigil:${VGILTAG:-v1.26.0}
06 ports:
07 - "48080:8080"
08 restart: unless-stopped
09
10 networks:
11 default:
12 name: statuspage-demo
13 external: true
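Listing 7 attaches Vigil to the same external statuspage-demo network created at the beginning of the article. If the container refuses to start, a quick check with plain Docker commands (not from the article) confirms the network still exists:

docker network ls | grep statuspage-demo
docker network inspect statuspage-demo --format '{{.Name}} {{.Driver}}'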
Figure 3: Monitoror wallboard page with server tiles.

Listing 8: config.cfg
01 [server] 36 [notify]
02 log_level = "debug" 37 startup_notification = true
03 inet = "0.0.0.0:8080" 38 reminder_interval = 300
04 workers = 4 39
05 manager_token = "REPLACE_THIS_WITH_A_VERY_SECRET_KEY" 40 [notify.webhook]
06 reporter_token = "REPLACE_THIS_WITH_A_SECRET_KEY" 41 hook_url = "https://webhook.site/4406e2a4-13cd-4c99-975c-d3456
07 a148b26"
08 [assets] 42
09 path = "./res/assets/" 43 [probe]
10 44 [[probe.service]]
11 [branding] 45 id = "ping"
12 page_title = "Vigil Localhost Test Status Page" 46 label = "PING"
13 page_url = "https://teststatus.page/status" 47 [[probe.service.node]]
14 company_name = "RNG" 48 id = "invalidiping"
15 icon_color = "#1972F5" 49 label = "Invalid IP Ping"
16 icon_url = "https://avatars.githubusercontent.com/u/226598?v=4" 50 mode = "poll"
17 logo_color = "#1972F5" 51 replicas = ["icmp://129.0.0.1"]
18 logo_url = "https://avatars.githubusercontent.com/u/226598?v=4" 52
19 website_url = "https://teststatus.page/" 53 [[probe.service]]
20 support_url = "mailto:[email protected]" 54 id = "port"
21 custom_html = "" 55 label = "PORT"
22 56 [[probe.service.node]]
23 [metrics] 57 id = "localhostport"
24 poll_interval = 60 58 label = "Localhost Port 8080 Probe"
25 poll_retry = 2 59 mode = "poll"
26 poll_http_status_healthy_above = 200 60 replicas = ["tcp://localhost:8080"]
27 poll_http_status_healthy_below = 400 61
28 poll_delay_dead = 30 62 [[probe.service]]
29 poll_delay_sick = 10 63 id = "http"
30 push_delay_dead = 20 64 label = "HTTP"
31 push_system_cpu_sick_above = 0.90 65 [[probe.service.node]]
32 push_system_ram_sick_above = 0.90 66 id = "googlehttp"
33 script_interval = 300 67 label = "Google Http Probe"
34 local_delay_dead = 40 68 mode = "poll"
35 69 replicas = ["https://google.com"]
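With the [server] section above binding Vigil to port 8080 inside the container and Listing 7 publishing it on 48080, a one-line probe (not from the article) tells you whether the status page is answering before you open a browser:

curl -s -o /dev/null -w '%{http_code}\n' http://localhost:48080   # expect 200 once Vigil is up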


The commands in Listing 9 bring up and you should see an updated page mouseover. The Vigil status page
the container, provide the required (Figure 5). enables you to add a large number of
configuration, and restart Vigil. When You can see for yourself that the status probe targets because of its vertical
you open localhost:48080 in your page is user friendly and interactive, layout. Please note that the Open-
browser, you will see the Vigil status instantly helping you figure out where Search HTTP probe is failing here
page (Figure 4). the probes are passing or failing so because Vigil has no way to turn off
To add more external servers in a you can dig in further. If you add the SSL certificate through the config
second test setup, as for Monitoror, file. However, you could solve this is-
change the YML file as in Listing 10. reveal_replica_name = true sue with the script probe provided by
Also, change the previous configura-
tion file to include probes for the in every Listing 10: Diff for vigil_stack.yml
OpenSearch, Kafka, and Redis con- [[probe.ser- 11a12,30
tainers (Listing 11). vice.node]] > opensearch:
> image: opensearchproject/opensearch:${OSRHTAG:-latest}
Execute the commands in Listing 9 subsection,
> environment:
to launch the containers, copy the tool tips will > - "discovery.type=single-node"
updated config, and restart Vigil, re- show replica > restart: unless-stopped
spectively. Now refresh your browser details on >
> kafka:
> image: bitnami/kafka:${KFKATAG:-3.2.3}
Listing 9: Launch and Configure Vigil Container
> environment:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro > - "ALLOW_PLAINTEXT_LISTENER=yes"
-v ./vigil_stack.yml:/etc/compose/vigil_stack.yml:ro docker docker > - "KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092,CONTROLLER://:9093"
compose -f /etc/compose/vigil_stack.yml up -d > - "KAFKA_ENABLE_KRAFT=yes"
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro -v > restart: unless-stopped
./vigil_stack.yml:/etc/compose/vigil_stack.yml:ro -v ./config.cfg:/ >
etc/vigil.cfg:ro docker docker compose -f /etc/compose/vigil_stack.yml > redis:
cp /etc/vigil.cfg vigil:/etc/vigil.cfg > image: redis:${RDSSTAG:-latest}
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro > command: "redis-server --save 60 1 --loglevel warning"
-v ./vigil_stack.yml:/etc/compose/vigil_stack.yml:ro docker docker > restart: unless-stopped
compose -f /etc/compose/vigil_stack.yml restart vigil >

Listing 11: Diff for config.cfg


30,34d29 < id = "port" < replicas = ["tcp://localhost:8080"]
< push_delay_dead = 20 < label = "PORT" ---
< push_system_cpu_sick_above = 0.90 --- > reveal_replica_name = true
< push_system_ram_sick_above = 0.90 > id = "opensearch" > replicas = ["https://admin:admin@
< script_interval = 300 > label = "OPENSEARCH" opensearch:9200"]
< local_delay_dead = 40 > [[probe.service.node]] 63,64c84,91
45,46c40,41 > id = "opensearchping" < id = "http"
< id = "ping" > label = "Opensearch Ping" < label = "HTTP"
< label = "PING" > mode = "poll" ---
--- > reveal_replica_name = true > id = "redis"
> id = "kafka" > replicas = ["icmp://opensearch"] > label = "REDIS"
> label = "KAFKA" > [[probe.service.node]] > [[probe.service.node]]
48,49c43,44 > id = "opensearchport9200" > id = "redisping"
< id = "invalidiping" > label = "Opensearch Port 9200" > label = "Redis Ping"
< label = "Invalid IP Ping" > mode = "poll" > mode = "poll"
--- > reveal_replica_name = true > reveal_replica_name = true
> id = "kafkaping" > replicas = ["tcp://opensearch:9200"] > replicas = ["icmp://redis"]
> label = "Kafka Ping" > [[probe.service.node]] 66,67c93,94
51c46,53 > id = "opensearchport9600" < id = "googlehttp"
< replicas = ["icmp://129.0.0.1"] > label = "Opensearch Port 9600" < label = "Google Http Probe"
--- > mode = "poll" ---
> replicas = ["icmp://kafka"] > reveal_replica_name = true > id = "redisport6379"
> reveal_replica_name = true > replicas = ["tcp://opensearch:9600"] > label = "Redis Port 6379"
> [[probe.service.node]] 57,58c77,78 69c96,97
> id = "kafkaport9092" < id = "localhostport" < replicas = ["https://google.com"]
> label = "Kafka Port 9092" < label = "Localhost Port 8080 Probe" ---
> mode = "poll" --- > reveal_replica_name = true
> reveal_replica_name = true > id = "opensearchttp9200" > replicas = ["tcp://redis:6379"]
> replicas = ["tcp://kafka:9092"] > label = "Opensearch Http 9200"
54,55c56,75 60c80,81


Vigil by creating an inline script that makes use of curl with a flag to skip SSL certificate checks. To obtain a new image for Vigil, modify the YML and CFG files, as shown in Listings 12 and 13, create the Dockerfile_VigilSSLCertIgnore file in the current working directory with the lines

FROM valeriansaliou/vigil:v1.26.0
RUN apk --no-cache add curl

and run the command

docker build -f Dockerfile_VigilSSLCertIgnore . -t vigilsci:v1.26.0

Now execute the Docker commands used previously to launch the Vigil service, copy the new Vigil config, and restart the Vigil service. Voilà, the script probe fixes the limitation of the HTTP probe and all the tiles are now green.
You also can administer Vigil through its APIs to publish public announcements, manually report node metrics, and so on. The Vigil GitHub project page has relevant information for you to make use of the Manager HTTP and Reporter HTTP APIs. A related optional component known as Vigil Local can be used to add the health of local services on the status page. Last but not least, Vigil Reporter libraries are provided for various programming languages to submit health information to Vigil from your apps. All of this information should be enough for you to make full use of the Vigil capabilities to craft a powerful black box monitoring status page.

Listing 12: Diff for vigil_stack.yml

< image: valeriansaliou/vigil:${VGILTAG:-v1.26.0}
---
> image: vigilsci:${VGILTAG:-v1.26.0}

Listing 13: Diff for config.cfg

79c79
< mode = "poll"
---
> mode = "script"
81c81,86
< replicas = ["https://admin:admin@opensearch:9200"]
---
> scripts = [
> '''
> /usr/bin/curl -k https://admin:admin@opensearch:9200
> return $?
> '''
> ]

Figure 4: Vigil status page.
Figure 5: Updated Vigil status page with servers' probes.

Statping-ng Status Page and Monitoring Server

Finally, I look at a black box monitoring status page solution full of features known as Statping-ng [5]. To explore its vast array of functionalities, create a statpingng_stack.yml file in your current directory (Listing 14), and execute the command

docker run -it --rm \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v ./statpingng_stack.yml:/etc/compose/statpingng_stack.yml:ro \
  docker docker compose -f /etc/compose/statpingng_stack.yml up -d

in the current directory to launch the Statping-ng Docker container. Accessing localhost:58080 in your web browser should present you with the Statping-ng setup (Figure 6). Just fill in the necessary details and hit Save Settings. It should now proceed to another page with your entered Name and Description and be populated with multiple kinds of demo probes supported by Statping-ng (Figure 7).
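If the setup page does not appear, it is worth making sure the container is really running and publishing port 58080 before digging any deeper. Assuming the container name statpingng from Listing 14, two quick checks are:

docker ps --filter name=statpingng
docker logs statpingng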


It's pretty cool that the crisp-looking status page not only provides demo probes to familiarize yourself with the solution right away, but on scrolling down, you'll find monitoring graphs for these demo services (Figure 8). It's just the tip of the iceberg; you can dig into the fine details about the monitored endpoint by clicking on the View buttons located on the respective graphs.
A Dashboard link at the top of the status page takes you to another page (after entering the admin credentials you already set in the Statping-ng setup page), presenting each and every possible setting provided for the Statping-ng configuration.
In the Services tab you can see and modify the demo services. You don't need to learn anything else to make use of Statping-ng because every operation is driven by its user-friendly tab pages. In the Services tab try adding and removing some of the probes by selecting the appropriate drop-down items for HTTP, TCP, UDP, gRPC, and Static services; ICMP Ping; and various other applicable settings. You could also set up various Notifiers to receive online and offline alerts, post Announcements for respective services, browse through the Statping-ng logs, add different kinds of users, and so on.
An important feature of Statping-ng is the ability to back up and restore current Statping services, groups, notifiers, and other settings to and from a JSON file, respectively. The Statping-ng wiki [6] provides more detailed info about its various aspects.
Finally, shift gears to start and configure Statping-ng programmatically through its configuration settings – but without involving any manual steps: Create the Dockerfile_MyStatpingNG file with the lines

FROM adamboutcher/statping-ng

CMD statping --port $PORT -c /app/config.yml

and create a new Docker image with the command:

docker build -f Dockerfile_MyStatpingNG . -t mystatpingng

Now modify statpingng_stack.yml as shown in Listing 15 to include the servers to be monitored, then create the required bind mount directory for Statping-ng with

mkdir config

and create a services.yml file (Listing 16) in the config directory.

Listing 14: statpingng_stack.yml

01 version: '3.5'
02 services:
03
04   statping:
05     container_name: statpingng
06     image: adamboutcher/statping-ng:${SPNGTAG:-latest}
07     ports:
08       - 58080:8080
09     restart: unless-stopped
10
11 networks:
12   default:
13     name: statuspage-demo
14     external: true

Figure 6: Statping-NG setup page.
Figure 7: Statping-NG status page with demo services.
Figure 8: Statping-NG demo services graphs.
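Because the stack file grows quite a bit in the next step (Listing 15), it can save a relaunch cycle to let Compose validate and print the resolved file first. The following sketch simply reuses the article's pattern of running Compose from the docker CLI image:

docker run -it --rm \
  -v ./statpingng_stack.yml:/etc/compose/statpingng_stack.yml:ro \
  docker docker compose -f /etc/compose/statpingng_stack.yml config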


Finally, set up the correct owner for the bind mount directory with the command

chown -R root:root config

and bring up the new Statping-ng containers with the command used earlier. When you refresh the Statping-ng status page, you should see the new servers being monitored.
You should feel confident now to start using Statping-ng for your production-level status pages. The server provides many additional features, including a choice to use Postgres or MySQL for the production back end, a server configuration with more environmental variables, a full-fledged API to access data on your Statping server programmatically, a Let's Encrypt-enabled automatic SSL certificate, exporting your status page to a static HTML file, and so on. The Statping-ng wiki provides relevant documentation for most of these features.

Listing 15: Diff for statpingng_stack.yml

6c6
< image: adamboutcher/statping-ng:${SPNGTAG:-latest}
---
> image: mystatpingng:${SPNGTAG:-latest}
8a9,36
> volumes:
>   - ./config:/app
> environment:
>   - "DB_CONN=sqlite"
>   - "STATPING_DIR=/app"
>   - "SAMPLE_DATA=false"
>   - "GO_ENV=test"
>   - "NAME=StatpingNG Probes Demo"
>   - "DESCRIPTION=StatpingNG Probes Configuration Demo"
> restart: unless-stopped
>
> opensearch:
>   image: opensearchproject/opensearch:${OSRHTAG:-latest}
>   environment:
>     - "discovery.type=single-node"
>   restart: unless-stopped
>
> kafka:
>   image: bitnami/kafka:${KFKATAG:-3.2.3}
>   environment:
>     - "ALLOW_PLAINTEXT_LISTENER=yes"
>     - "KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092,CONTROLLER://:9093"
>     - "KAFKA_ENABLE_KRAFT=yes"
>   restart: unless-stopped
>
> redis:
>   image: redis:${RDSSTAG:-latest}
>   command: "redis-server --save 60 1 --loglevel warning"
>   restart: unless-stopped

Listing 16: services.yml

01 x-tcpservice: &tcpservice
02   type: tcp
03   check_interval: 60
04   timeout: 15
05   allow_notifications: true
06   notify_after: 0
07   notify_all_changes: true
08   public: true
09   redirect: true
10
11 x-httpservice: &httpservice
12   type: http
13   method: GET
14   check_interval: 45
15   timeout: 10
16   expected_status: 200
17   allow_notifications: true
18   notify_after: 2
19   notify_all_changes: true
20   public: true
21   redirect: true
22
23 x-icmping: &icmping
24   type: icmp
25   check_interval: 60
26   timeout: 15
27   allow_notifications: true
28   notify_after: 0
29   notify_all_changes: true
30   public: true
31
32 services:
33   - name: ICMP Kafka
34     domain: kafka
35     <<: *icmping
36
37   - name: TCP Kafka 9092
38     domain: kafka
39     port: 9092
40     <<: *tcpservice
41
42   - name: ICMP opensearch
43     domain: opensearch
44     <<: *icmping
45
46   - name: TCP opensearch 9200
47     domain: opensearch
48     port: 9200
49     <<: *tcpservice
50
51   - name: TCP opensearch 9600
52     domain: opensearch
53     port: 9600
54     <<: *tcpservice
55
56   - name: HTTP opensearch
57     domain: https://admin:admin@opensearch:9200
58     <<: *httpservice
59
60   - name: ICMP redis
61     domain: redis
62     <<: *icmping
63
64   - name: TCP redis 6379
65     domain: redis
66     port: 6379
67     <<: *tcpservice

Conclusion

The problem of segregating decisions on a status page with black box monitoring is a logical first step when running modern complicated IT setups. Monitoror is a quick solution to set up for a few tens of servers and services to make binary decisions about what and where things are failing. Vigil adds more functionality over Monitoror to probe a large number of infrastructure endpoints, and Statping-ng goes even further and provides more user friendliness and professional black box monitoring, presenting tough competition even to expensive enterprise solutions.

Info
[1] Monitoror: [https://github.com/monitoror/monitoror]
[2] Code for this article: [https://linuxnewmedia.thegood.cloud/s/9nFQcFb2p8oRMEJ]
[3] Vigil: [https://github.com/valeriansaliou/vigil]
[4] Vigil sample config file: [https://github.com/valeriansaliou/vigil/blob/master/config.cfg]
[5] Statping-ng: [https://github.com/statping-ng/statping-ng]
[6] Statping-ng wiki: [https://github.com/statping-ng/statping-ng/wiki]

The Author
Ankur Kumar is a passionate free and open source software (FOSS) hacker and researcher and seeker of mystical life knowledge. He explores cutting-edge technologies, ancient sciences, quantum spirituality, various genres of music, mystical literature, and art. You can connect with Ankur on [https://www.linkedin.com/in/richnusgeeks] and explore his GitHub site at [https://github.com/richnusgeeks] for other useful FOSS pieces.


Keeping Azure VMs up to date

New Models
The operating system of an Azure virtual machine can be kept up to date by a number of methods; we
provide an overview and look in detail at Azure Automation Update Management, the Azure Update
Management Center, automation options, and other related topics. By Thomas Joos
Lead Image © Brian Welker, 123RF.com

Microsoft Azure can virtualize both Windows and Linux in various configurations with the use of prebuilt images. Azure virtual machines (VMs) can run not only in the cloud, but also directly in an on-premises data center with Azure Stack hyperconverged infrastructure (HCI), while retaining all the benefits familiar from the Azure cloud. The benefits Microsoft cites include simpler licensing, effective high availability, and a simpler update management process, all of which are discussed here.

Free Extended Security Updates

Extended Security Updates (ESUs) after the end of support (e.g., for Windows Server 2008/2008 R2 and Windows Server 2012/2012 R2) for up-to-date Azure VM operating systems have been free thus far. This point is interesting because even if companies use Windows Server 2016, they will slowly but surely have to start worrying about expiring support. Organizations migrating to Azure will therefore want to look into updating VMs right away in this context.
Even if you are not moving to the cloud, Microsoft offers customers with Software Assurance and various subscriptions the option of purchasing extended support that is valid for three years and continues to provide security updates. However, ESUs are not cheap. In the first year, 75 percent of the license fees of the current version are due, which rises to 100 percent in the second year. In the third year, costs rise to 125 percent. In comparison, if you migrate to Azure VMs, you will receive ESU security updates free of charge for the next three years, and extended support is included in the cost of ownership. This policy applies to all operating systems that are no longer supported – licenses for SQL Server 2012 and Windows Server 2012, for example, can now be used in the cloud.
Strictly speaking, organizations will benefit from the free updates if they rely on Azure VMs, Azure Dedicated Host, Azure VMware Solution, Azure Nutanix Solution, or Azure Stack HCI. Servers with Azure Stack HCI can remain in the on-premises data center, although the use of a certified solution to run Azure Stack HCI on your premises is required. You can build an on-premises cluster that is connected to Azure but running in-house. The connection to Azure does not need to be persistent if the system exclusively relies on on-premises services.

Update Management from Azure

Updating servers is not just about extended support, of course, but also about patches for current servers that you run as VMs in Azure. Microsoft offers Azure Automation Update Management, which can automate patching of servers in on-premises data centers and virtual servers in Azure and other cloud services (Figure 1). This service is ideal for Azure VMs because all services run directly in Azure and no other services are needed. Azure Update Management is also capable of updating Linux servers running as Azure VMs. The service is available free of charge, but charges are incurred if you store logs. Azure stores the monitoring agent logs for update management in Log Analytics, but you need to create your own workspace there, where the Azure Monitor data for telemetry and for logging-connected servers is written by default.
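If you prefer to prepare that workspace from the command line rather than in the portal, the Azure CLI can create it in a single call. The resource group, workspace name, and region in this sketch are placeholders:

az monitor log-analytics workspace create \
  --resource-group rg-updates \
  --workspace-name law-update-logs \
  --location westeurope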


The monitoring tool can perform automated queries, with the logs of the connected servers stored in Log Analytics, which will give you information about your servers, including missing updates. In turn, this information can be used by other services in Azure – Azure Update Management in this case.
Azure Monitor and Azure Update Management can be connected to create server logs and install updates at the same time. Simply put, Azure Update Management extends the capabilities of Azure Monitor to include update management. Besides Windows Server, the supported operating systems include CentOS, RHEL, and SUSE version 12 or newer, as well as Ubuntu. You cannot update Windows 7, 8.1, 10, and 11 with the tool. Microsoft recommends the use of Endpoint Manager for this.

Integrating Azure VMs

For Azure Update Management, it does not matter whether the connected computer is a physical or virtual server and whether it resides in the local data center or in the cloud. Integrating Azure VMs with Azure Update Manager is particularly easy because both resources reside in the cloud. The Updates item is available for this purpose on the Azure portal's Azure VM dashboard. You can create the link there by choosing to update with automation, and you can remove servers from update management in the same way. Azure VMs and on-premises VMs are visible in the web interface, allowing update rules to be applied by location.
One of the strengths of Azure Update Management is that it can also integrate Linux servers and verify that they are correctly configured and have all updates. The configuration is similar to managing updates for Windows servers. To integrate Linux servers, open your Azure Update Management account and click Update Management. Use Add Azure VMs to add VMs to Azure, whether these be Windows or Linux machines.
If you want to add computers outside Azure, use Add non-Azure machine. These can be physical computers or VMs in Amazon AWS or Google Cloud Platform (GCP). When adding VMs, in the new window, first select which Azure subscription and locations you want to use; then, choose the resource groups in which the servers you are currently integrating reside. At the bottom of the window, the portal shows the individual VMs in Azure; you can see which VMs are already integrated with Azure Update Management. Azure does not differentiate between the various operating systems. Selecting Enable simply adds the computers.

Scheduling and Automating Updates

On the Azure Update Management web portal, you can see which servers are not up to date by clicking the Automation Account you created in the resource group where you integrated Azure Update Management, and then click Update management. You can view the noncompliant servers (i.e., the servers that are missing updates), the compliant servers, and other information here.
Integrating computers with Azure Update Management is the first step in providing updates to those computers. A deployment schedule then lets you automate update controls on the integrated computers. You can define schedules, release specific updates, and configure which updates you want the servers to install automatically – completely independent of the data center in which the computers are running.
You can create schedules from the Schedule update deployment item, which can be found under Update Management in the Azure Update Management account area.

Figure 1: Azure VMs, VMs in other cloud services, and servers in the on-premises data center can be added to Azure Update Management.
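Deployment schedules cover the routine case; for an ad hoc check or a one-off patch run outside the portal, recent Azure CLI versions offer comparable operations. The resource group and VM names below are placeholders, and the classification list is only an example:

az vm assess-patches --resource-group rg-updates --name vm-web01

az vm install-patches --resource-group rg-updates --name vm-web01 \
  --maximum-duration PT2H --reboot-setting IfRequired \
  --classifications-to-include-win Critical Security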


To begin, assign a name to the schedule (e.g., Monthly Patch Day), and then select whether the schedule is for Windows or Linux computers. When done, decide on the computer groups you want to connect. On the Groups configuration page, you can configure whether you want to integrate VMs from Azure or from outside. Groups can be filtered by Azure subscription, location, storage locations, and tags. After defining the groups, add the machines you want to update with the schedule.
The section where you select individual update classifications is important. Conventional updates, roll-ups, security updates, critical updates, and feature packs are available, and you can exclude or include individual updates from installation by their knowledgebase IDs.
In the update management Overview, below the update management account, several menu items are listed for each computer; they play an essential role in managing the computers. The Machines tab lists the computers integrated with Azure Update Management along with some basic information. The information includes the number of updates missing on the machine and whether the management agent can currently connect to Azure, if the machine is not an Azure VM.
The Missing updates tab shows which updates are currently not installed on the computers and how many computers are missing updates. A distinction is made between updates for Windows and updates for Linux. If you have created a provisioning schedule, then it can be seen in the menu item Deployment schedules. Of course, multiple schedules are possible; you can click on a schedule to customize its settings.
The History tab tells you whether the deployment schedules are working on the computers. In deployment schedules you can add specific updates on the basis of knowledgebase IDs or exclude specific IDs. You can see the exact IDs again in Missing updates. Clicking on an update opens the Microsoft support page with detailed instructions on the update in question. If you double-click on a row with an update, the window changes to the Log Analytics area for update management.

VM-Specific Update Methods

As part of the VM creation process in Azure, you can make further adjustments that update the VMs – in Azure Update Management, in part, but also with functions that have nothing to do with Azure Update Management. For example, if you use Windows Server 2022 Datacenter: Azure Edition, you can set the hot patch function for Azure VMs in the cloud or on Azure Stack HCI. Hot patching lets you install updates without having to restart the entire server each time. If individual services or areas of a server require a restart after installing updates, then only those restart. This process takes a fraction of a second, and users do not notice any interruptions in most cases. In other words, the workloads remain permanently active.
From the Azure portal when creating a VM, four settings in Patch orchestration options (Figure 2) also have a permanent effect on how updates are installed for the various sources:
• Automatic by OS (Windows Automatic Updates)
• Azure-orchestrated
• Manual updates
• Image default
Not all options are applicable to all images, however. If you select Automatic by OS, Windows servers can be updated automatically by the Windows Automatic Updates feature. One example of provisioning this is the update built into the VM operating system, or you can use Azure Update Management. (See also the "Automatic VM Guest Patches" box.)
The Azure-orchestrated option lets you specify that Windows and Linux servers are no longer updated by the operating systems' built-in update functions, but only by Azure itself. However, this only works for selected images in Azure. If the feature is not available for a particular image, the setup wizard grays out this option.
Manual updates means that Azure does not install any updates automatically, so manual work is required involving the use of policies to manage updates. You can do this in Azure with Azure Update Management, for example, but also with Windows Server Update Services (WSUS). In this case, you need a WSUS server in Azure or in the local data center if you are using Azure Stack HCI. The server then supplies the Azure VMs with updates. The option for Image default is used for Linux servers if Azure-orchestrated is not available.

Automatic VM Guest Patches

Choosing the option to enable automatic VM guest patches for Azure VMs in the settings enables the automatic installation of patches for Azure VMs. This option installs all critical updates and security patches, but not the definition files for Microsoft Defender. To do this, Azure automatically finds a time when a VM has a low load level and installs the updates in the background. All actions for this update run automatically in Azure. The system regularly checks for updates for VMs and installs them. Restarts occur outside of peak periods. However, the feature does not support every image. Microsoft explains the option's capabilities and tells what to watch out for on the automatic VM guest patching website [1].

Figure 2: The Guest OS updates section presents four patch orchestration options.
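The patch orchestration mode can also be switched later from the command line. This sketch follows the patchMode property described on the automatic VM guest patching page [1]; the resource group and VM names are placeholders, and AutomaticByPlatform corresponds to the Azure-orchestrated option:

az vm update --resource-group rg-updates --name vm-web01 \
  --set osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByPlatform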


After creating the VM, you can change the settings retroactively in many cases. To do this, look for the update settings button in the Updates menu. When you get there, select the update approach for your various Azure VMs. You need to use the Try new Update Management Center link for this menu item to appear. (I used a Preview edition of the Update Management Center, so slight changes in options and arrangements might occur as the product matures.) After Microsoft has activated this new view, the menu item immediately becomes available.

Azure Update Management Center

Once you have created an Azure VM, the Updates item is available on the dashboard; you can use it to connect the VM to Azure Update Management. More menu items appear here after enabling the new user interface, and if you don't need the new interface anymore, you can easily return by choosing the link to exit. In parallel, you can open the new Azure Update Management Center at this point. This is where you control the installation of VM updates in the Azure portal.
First, let Check for updates scan the VMs to check for missing updates. If some updates are missing, the portal displays them. You can then specify whether you want to handle the update process manually once only. In this case, select One-time update. If you want to install the updates at a later time, choose Scheduled updates.
After creating an Azure VM, it makes perfect sense to refresh it first to make sure that the options work. After selecting One-time update, select the VMs you want to update in this step. For each VM, Azure shows the status and how many updates are missing. Next, define which updates you want to apply. You can select the Include update classification item and check Select all.
When done, you still need to decide whether the VMs will always restart or whether you leave the restart decision to the VM. Finally, you will see a summary and Azure will proceed to install the updates on the VM. You can view the status on the Azure portal. You do not need to switch to the operating system interface to do this.
If you use the Azure Update Management Center, all connected Azure VMs can be managed with a single action in the portal. You can view various charts on the Update Management Center dashboard showing which VMs are missing updates and how many VMs are connected to the environment (Figure 3). For selected machines, select Machines | Update settings and enable periodic assessment.
Azure Policy lets you run an automated scan across all connected VMs. To do this, in the Azure Update Management Center, enable periodic assessment in Machines. After doing so, Azure automatically scans all your Azure VMs for missing updates and installs them according to the settings you stored in the Update Management Center and in the VM settings.

Figure 3: Microsoft visualizes update management in the Azure Update Management Center, which is still in preview.

Conclusions

You have numerous ways at your disposal to update Azure VMs, some of which can be defined when a VM is created. In general, all Azure VMs and all supported operating systems let you deploy updates with on-board operating system resources, but running Azure Update Management will make more sense in many cases.

Info
[1] Automatic VM guest patching for Azure VMs: [https://learn.microsoft.com/en-us/azure/virtual-machines/automatic-vm-guest-patching]

The Author
Thomas Joos is a freelance IT consultant and has been working in IT for more than 20 years. In addition, he writes hands-on books and papers on Windows and other Microsoft topics. Online you can meet him on [http://thomasjoos.spaces.live.com].


Accelerated and targeted search and find with Ripgrep

Rusty Finds
Ripgrep combines the best features of tools like Grep, Ack, and Silver Searcher when it comes to using
search patterns in a terminal window. By Ferdinand Thommes
Photo by Andrew Tom on Unsplash

If you want to search for specific strings in code or files, you can turn to a number of powerful Unix tools for the command line, such as Ack [1] and Grep [2]. Both use regular expressions for the search patterns. Grep is often used for this type of search, although Ack has a slight edge in terms of functionality. Silver Searcher [3], with a similar orientation, is an Ack fork that aims to boost the search speed. But these old-timers are not the subject of this article. In fact, I'm only mentioning them because they form the basis of Ripgrep.
Ripgrep [4] is a speedy implementation of Grep in the Rust language. The tool searches directories recursively with a pattern of regular expressions (regexes) and outputs all the matches it finds sorted by file. It is part of a collection of modernized Unix tools [5]. Ripgrep additionally adopts some of the features of Ack and Silver Searcher, such as searching a complete directory tree. However, it does not try to be a complete replacement for Grep, because it does not cover 100 percent of Grep's use cases.
One advantage of the Rust programming language is its speed, which is why many Linux tools have been rewritten in Rust in recent years. They include Ripgrep, which is specifically designed to make searching for strings in large files or directories as efficient as possible. The application's syntax is intuitive and easy to learn. The tool offers clear and consistent output that highlights the lines found and presents the results clearly, without requiring actions you would need to achieve comparable results in Grep, including searching for regular expressions, ignoring certain file types, or recursively searching directories. Ripgrep ignores symbolic links and hidden and binary files by default and helps Git users search for code by supporting .gitignore [6].
Another advantage is that Ripgrep is available on macOS and Windows, not just on Linux, so if you work on multiple platforms, you just need to learn the syntax for one search utility. Moreover, the current version of Ripgrep [7] is available in the repositories of many distributions. On macOS you can use Homebrew to install the package, whereas Chocolatey, Winget, or Scoop do the job on Windows.
As already mentioned, Ripgrep uses fairly easy to learn syntax. I will be looking at a few simple examples to get you started. The name of the Ripgrep executable is rg. In the simplest case, you would use the command

rg -i <search key>

to find a string in the current directory and its subdirectories.
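The package is usually named ripgrep, whereas the binary it installs is rg. A few typical install commands follow; the exact package source and version depend on your platform and release:

sudo apt install ripgrep      # Debian, Ubuntu
sudo dnf install ripgrep      # Fedora, RHEL
brew install ripgrep          # macOS with Homebrew
choco install ripgrep         # Windows with Chocolatey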


As with Grep, the -i option stands for ignoring case. If you explicitly want to see only matches with the upper- or lower-case spelling of a term, use the -s parameter followed by the search term with the desired spelling.
Ripgrep first displays the path and file name of the match before listing all occurrences of the search term with their line numbers. The application highlights the term in red (Figure 1). Depending on the search term entered, the output can be very long, even without path specification. In this case, it makes sense to limit the search command to the extent possible by entering a path (Figure 2):

$ rg -i bicycle Nextcloud3/Linux-User/

If you don't know the exact path or file name but know what type of file it is, the -g option is useful:

$ rg <Search key> -g '*.<Type>'

Conversely, you can exclude file types with the --type-not parameter:

$ rg -i bicycle --type-not txt

Figure 1: You should only use the search without a path specification if you know that the output will be manageable.
Figure 2: Narrowing down by stating the path gives you more targeted output. In addition to the path to the file, the tool displays the number of the line where the paragraph with the search term starts.
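Globs and type filters can also be combined with a search path, which keeps the output manageable even in large directory trees. The paths and patterns in this short sketch are only examples:

$ rg -i listen -g '*.conf' /etc
$ rg -i bicycle --type-not txt ~/Documents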


If you don't need the individual matches, but only the files in which they occur, the -l option is useful:

$ rg -l <Search key>

You also can achieve alphabetical sorting in the context of -l by appending --sort path to the command (Figure 3).
Assume you want to change the SSH port for security reasons. To do this, you want to find the line in the configuration file and jump to it directly in an editor. In this case, you would not just specify the path, but also the file containing the term. First, use

$ rg -i port /etc/ssh/sshd_config

to determine the line number. The port specification in the file shown in Figure 4 occurs in line 40. In editors like Nano or Vim, you can stipulate +40 in the call to jump directly to this line and change the port.
Additionally, statistical values often play a key role. For example: How often does a term occur in how many lines? How long did the search take? How many files were browsed? This information appears at the end of the output if you append the --stats option to the search command (Figure 5).
If you want to have a little more context than Ripgrep gives you by default, set the -C <n> option, where <n> denotes the number of lines before and after the location you want to view. To check a certain number of lines exclusively before the find location, use -B <n>; -A <n> does the same thing for lines after the find. You can search compressed text archives with -z -a and a combination of options (Figure 6).

Figure 3: If you want to know which files contain a search term, the -l --sort path option sorts the results alphabetically.
Figure 4: If you know the line number of the search term in a configuration file, you can jump to it directly with Nano or Vim.

Conclusions

Ripgrep does not claim to replace Grep. On the one hand, it behaves differently in some cases; on the other hand, it does not cover all the functions of its role model.
The tool is aimed more at pattern searching, whereas Grep developer Ken Thompson designed his tool mainly for stream processing [8] on AT&T Unix v6. For example, you need to run GNU Grep in combination with find for recursive searching in directories; Ripgrep handles this task without external support.
Ripgrep offers many more functions than I can hope to describe in this article. The detailed documentation [9] will help you explore the full feature set. If you are interested in the differences between various search tools, it is also worth visiting Beyondgrep.com [10]: The site compares the features of Ack, Silver Searcher, Git-Grep, GNU Grep, and Ripgrep in detail.
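As a parting cheat sheet, the options covered above combine readily in day-to-day use; the paths and the line number 40 are only examples:

$ rg -l --sort path -i bicycle ~/Documents       # matching files, sorted
$ rg -i port /etc/ssh/sshd_config                # find the line number
$ vim +40 /etc/ssh/sshd_config                   # jump straight to it
$ rg -C 2 --stats -z -a -i bicycle ~/archive/    # context, statistics, compressed files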


Info
[1] Ack: [https://beyondgrep.com/documentation/]
[2] Grep: [https://www.man7.org/linux/man-pages/man1/grep.1.html]
[3] Silver Searcher: [https://github.com/ggreer/the_silver_searcher]
[4] Ripgrep: [https://github.com/BurntSushi/Ripgrep#installation]
[5] Modern Unix: [https://github.com/ibraheemdev/modern-unix]
[6] gitignore: [https://www.atlassian.com/git/tutorials/saving-changes/gitignore]
[7] Versions: [https://repology.org/project/ripgrep/versions]
[8] Data stream: [https://en.wikipedia.org/wiki/Data_stream]
[9] Documentation: [https://github.com/BurntSushi/Ripgrep/blob/master/GUIDE.md]
[10] Comparison: [https://beyondGrep.com/feature-comparison/]

Figure 5: Statistical values for the number of finds, the number of files searched, the time required, and more are provided by the --stats option.
Figure 6: If so desired, Ripgrep will also dig into archives to find your quarry.

The Author
Ferdinand Thommes lives and works as a Linux developer, freelance writer, and tour guide in Berlin.


ADMIN Network & Security

NEWSSTAND
Order online: https://bit.ly/ADMIN-library

ADMIN is your source for technical solutions to real-world problems. Every issue is packed with practical
articles on the topics you need, such as: security, cloud computing, DevOps, HPC, storage, and more!
Explore our full catalog of back issues for specific topics or to complete your collection.

#77 – September/October 2023


Secure CI/CD Pipelines
DevSecOps blends security into every step of the software development cycle.
On the DVD: IPFire 2.27

#76 – July/August 2023


Energy Efficiency
The storage share of the total data center energy budget is expected to double by
2030, calling for more effective resource utilization.
On the DVD: Finnix 125 (Live boot)

#75 – May/June 2023


Teamwork
Groupware, collaboration frameworks, chat servers, and a web app package
manager allow your teams to exchange knowledge and collaborate on projects in a
secure environment.
On the DVD: Ubuntu 23.04 “Lunar Lobster” Server Edition

#74 – March/April 2023


The Future of Software-Defined Networking
New projects out of the Open Networking Foundation provide a glimpse into the
5G network future, most likely software based and independent of proprietary
hardware.
On the DVD: Kali Linux 2022.4

#73 – January/February 2023


Databases
Cloud databases can be useful in virtually any conceivable deployment scenario,
come in SQL and NoSQL flavors, and harmonize well with virtualized and
containerized environments.
On the DVD: Manjaro 22.0 Gnome

#72 – November/December 2022


OpenStack
Find out whether the much evolved OpenStack is right for your
private cloud.
On the DVD: Fedora 36 Server Edition


WRITE FOR US
Admin: Network and Security is looking for good, practical articles on system administration topics. We love to hear from IT professionals who have discovered innovative tools or techniques for solving real-world problems.
Tell us about your favorite:
• interoperability solutions
• practical tools for cloud environments
• security problems and how you solved them
• ingenious custom scripts
• unheralded open source utilities
• Windows networking techniques that aren't explained (or aren't explained well) in the standard documentation
We need concrete, fully developed solutions: installation steps, configuration files, examples – we are looking for a complete discussion, not just a "hot tip" that leaves the details to the reader.
If you have an idea for an article, send a 1-2 paragraph proposal describing your topic to: [email protected].

Contact Info

Editor in Chief: Joe Casad, [email protected]
Managing Editors: Rita L Sooby, [email protected]; Lori White, [email protected]
Senior Editor: Ken Hess
Localization & Translation: Ian Travis
News Editor: Amber Ankerholz
Copy Editors: Amy Pettle, Aubrey Vaughn
Layout: Dena Friesen, Lori White
Cover Design: Lori White, Illustration based on graphics by armmypicca, 123RF Free Images
Advertising: Brian Osborn, [email protected], phone +49 8093 7779420
Publisher: Brian Osborn
Marketing Communications: Gwen Clark, [email protected]
Linux New Media USA, LLC, 4840 Bob Billings Parkway, Ste 104, Lawrence, KS 66049 USA
Customer Service / Subscription:
For USA and Canada: Email: [email protected], Phone: 1-866-247-2802 (Toll Free from the US and Canada)
For all other countries: Email: [email protected]
www.admin-magazine.com

Authors

Amber Ankerholz 6
Tam Hanna 22
Ken Hess 3
Stefan Hofer 10
Thomas Joos 86
Joydip Kanjilal 38
Ankur Kumar 78
Jeff Layton 32
Dr. Carola Lilienthal 18
Martin Gerhard Loschwitz 26, 42
Benjamin Pfister 56
Henning Schwentner 18
Artur Skura 48
Andreas Stolzenberger 72
Ferdinand Thommes 92
Kevin Wittmer 62
Eberhard Wolff 14
Matthias Wübbeling 70

While every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the DVD provided with the magazine or any material provided on it is at your own risk.
Copyright and Trademarks © 2023 Linux New Media USA, LLC.
No material may be reproduced in any form whatsoever in whole or in part without the written permission of the publishers. It is assumed that all correspondence sent, for example, letters, email, faxes, photographs, articles, drawings, are supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing.
All brand or product names are trademarks of their respective owners. Contact us if we haven't credited your copyright; we will always correct any oversight.
Printed in Nuremberg, Germany by Kolibri Druck.
Distributed by Seymour Distribution Ltd, United Kingdom
ADMIN (Print ISSN: 2045-0702, Online ISSN: 2831-9583, USPS No: 347-931) is published bimonthly by Linux New Media USA, LLC, and distributed in the USA by Asendia USA, 701 Ashland Ave, Folcroft PA. November/December 2023. Application to Mail at Periodicals Postage Prices is pending at Philadelphia, PA and additional mailing offices. POSTMASTER: send address changes to Linux Magazine, 4840 Bob Billings Parkway, Ste 104, Lawrence, KS 66049, USA.
Represented in Europe and other territories by: Sparkhaus Media GmbH, Bialasstr. 1a, 85625 Glonn, Germany.
