
Reference Architecture: Red Hat OpenShift Container Platform on Lenovo ThinkSystem, ThinkEdge and ThinkAgile HX Servers

Last update: 15 Dec 2023
Version 3.8

● Provides overview of application containers using Lenovo ThinkSystem, ThinkEdge and ThinkAgile HX
● Describes container orchestration, virtualization, and security technologies on OpenShift
● Describes DevOps and Continuous Integration & Continuous Delivery
● Introduces OpenShift site configuration samples and Deployment Ready Solutions

Xiaotong Jiang, Lenovo
Ajay Dholakia, Lenovo
Kai Chi, Lenovo
Gareth Jenkins, Red Hat
Alex Kretzschmar, Red Hat



Table of Contents
1 Introduction
2 Business problem and business value
2.1 Business problem
2.2 Business value
2.2.1 DevOps Overview
2.2.2 Monolithic vs Microservices Architecture
2.2.3 Continuous Integration/Continuous Delivery (CI/CD)
2.2.4 Container Orchestration for Edge Infrastructure
3 Requirements
3.1 Functional requirements
3.2 Non-functional requirements
4 Architectural overview
5 Component model
5.1 OpenShift infrastructure components
5.1.1 Bastion node
5.1.2 Infrastructure node
5.1.3 Bootstrap node
5.1.4 Control plane node
5.1.5 Compute/Worker node
5.2 OpenShift components architecture
6 Operational model
6.1 Hardware components
6.1.1 Lenovo ThinkSystem SR630 V3 1U Server
6.1.2 Lenovo ThinkSystem SR650 V3 2U Server
6.1.3 Lenovo ThinkSystem SR630 V2 1U Server
6.1.4 Lenovo ThinkSystem SR650 V2 2U Server
6.1.5 Lenovo ThinkSystem SE350 Edge Server
6.1.6 Lenovo ThinkEdge SE360 V2 Server
6.1.7 Lenovo ThinkEdge SE450 Server
6.1.8 Lenovo ThinkEdge SE455 V3 Server
6.1.9 Lenovo ThinkSystem SR645 V3 1U Server
6.1.10 Lenovo ThinkSystem SR665 V3 2U Server
6.1.11 Lenovo ThinkSystem SR635 V3 1U Server
6.1.12 Lenovo ThinkSystem SR655 V3 2U Server
6.1.13 Lenovo ThinkAgile HX series
6.2 Hypervisor supported by ThinkAgile HX
6.3 Deployment models
6.4 Deployment Ready Solution
6.5 Auto-Deployment by Lenovo Open Cloud Automation
6.6 Compute/worker node
6.7 Persistent storage for containerized workloads
6.7.1 Container Storage Interface (CSI)
6.7.2 ThinkSystem DM series
6.7.3 Red Hat OpenShift Data Foundation (ODF)
6.8 Networking for OpenShift deployment on bare metal
6.9 Networking for OpenShift deployment on ThinkAgile HX
6.10 Hardware management network
6.11 Network redundancy
6.12 Networking switch configurations
6.13 Edge computing
6.14 Systems management
6.15 Deployment examples with ThinkSystem server
6.15.1 Typical and enhanced OpenShift configurations
6.15.2 OpenShift configurations for edge computing
6.16 Deployment examples with ThinkAgile HX server
6.17 Software and subscription
6.18 Deployment validation
6.19 Multi-cluster management
6.20 Virtualization on OpenShift cluster
6.21 OpenShift security
6.22 OpenShift data science
Resources

1 Introduction

The target audience for this Reference Architecture (RA) is system administrators or system architects. Some
experience with Docker and OpenShift technologies may be helpful, but it is not required.
Emerging software applications make use of containerization to enable rapid prototyping, testing, and deployment to the cloud. The microservice revolution introduced container-based service architectures, which offer many benefits compared to traditional virtualization technologies. Containers provide a more portable and faster way to deploy services on cloud infrastructures than virtual machines.
While containers themselves provide many benefits, they are not easily manageable in large environments. Hence, many container orchestration tools have gained momentum and popularity. Each orchestration tool is different and should be chosen for a specific purpose. The Red Hat OpenShift® Container Platform uses Kubernetes, an orchestration framework built around container-deployment practices. Kubernetes has gained popularity in the cloud community due to its maturity, scalability, performance, and the many built-in tools that enable production-level container workload orchestration.
Enterprises are also seeing increasing expansion of their IT infrastructure into edge locations fuelled by the
need to enable customer and partner interactions. This growth is being driven by Edge / IoT (Internet of
Things) technologies that enable new business opportunities but also pose new challenges. The use of
containers has emerged as the de facto mechanism for deploying edge IT infrastructure.
Red Hat OpenShift Container Platform 4.13 is built around a core of application containers powered by CRI-O, with orchestration and management provided by Kubernetes, on a foundation of Red Hat® Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS). It provides many enterprise-ready features such as enhanced security, multitenancy, simplified application deployment, and continuous integration/continuous deployment tools. With Lenovo™ servers and Lenovo Open Cloud – Automation (LOC-A) technologies, deploying, provisioning, and managing the Red Hat OpenShift Container Platform infrastructure becomes effortless and produces a resilient solution.
This RA describes the system architecture for the Red Hat OpenShift Container Platform 4.13 based on Lenovo ThinkSystem/ThinkEdge/ThinkAgile HX servers. It provides details of the hardware requirements to support the various OpenShift node roles and the corresponding configuration of the systems. It also describes the network architecture and details of the switch configurations. An example deployment is described for a typical configuration, and the hardware bill of materials is provided for all components required to build the OpenShift cluster. A deployment guide shows how to prepare, provision, deploy, and manage the Red Hat OpenShift Container Platform on Lenovo ThinkSystem/ThinkEdge/ThinkAgile HX servers.

2 Business problem and business value

2.1 Business problem

Businesses today want to deliver new features and updates to their products for their internal users as well as
external stakeholders quickly and with high quality. Every industry today is seeing a transformation, which is
predominantly driven by advances in technology. In order to stay competitive and relevant in their respective industries and marketplaces, businesses need to take advantage of new technologies quickly and apply them to their products and solutions. Today, much of the technology advancement and innovation is driven
through a combination of software and hardware. More importantly, emerging technologies such as artificial
intelligence (AI) and machine learning (ML) are fuelled by rapid advancements in software. In addition, many
of the IT infrastructure and data center advancements are driven through software defined technologies such
as software defined storage (SDS) and software defined networking (SDN). Hence software is a key driver in
pushing forward various technologies in all industries.
Container technology has picked up momentum in the software development area and enabled developers to
take advantage of several benefits from packaging their applications as containers:
● Containers are light-weight application run-time environments compared to virtual machines and are
therefore less resource intensive and highly efficient.
● Containers enable developers to package their applications as well as all the library dependencies to
properly run them so that a container image provides a completely self-sufficient environment to
execute the application code. This also means that multiple application instances requiring different
versions of the same libraries can be packaged into different containers and run side-by-side on the
same operating system instance without any interference.
● Containers are portable across different platforms (as long as the underlying operating systems are
compatible). CRI-O is a modern implementation of the Kubernetes Container Runtime Interface (CRI)
which enables using Open Container Initiative (OCI) compatible runtimes. It allows Kubernetes to use
any OCI compliant runtime for running pods. It is a lightweight alternative to the legacy docker runtime
previously used for containers.
● Containers are now the de facto standard of operation for some of the well-known public cloud
environments including Google, Amazon AWS, and Microsoft Azure. Hence, applications packaged as
containers can be executed on-prem or on a public cloud without any modifications (other than
additional steps to secure the software for use on the public cloud).
● There are many open source tools available now to help developers easily create, test, and deploy
containerized applications. In addition, all the well-known protocols for security, authentication,
authorization, storage, etc., can be applied to containerized workloads without any modifications to
applications. In other words, you can take a legacy application written in a language such as Java and
package it, as is, into a container image and run it.
● Containers are now the way to implement a continuous integration/continuous delivery (CI/CD)
development pipeline and the DevOps paradigm of combining software development and
infrastructure operations.
According to IDC predictions, over 50% of new enterprise IT infrastructure will be deployed at edge sites by 2023, and by 2024, 75% of new operational applications deployed at the edge will leverage containerization. With edge computing, organizations deploy workloads in physical locations near the places where data is produced or consumed. The rapid growth of edge sites and edge applications poses big challenges to IT infrastructures, platforms, and applications.

2.2 Business value

Software development life cycle (SDLC) practices have evolved to achieve high velocity and efficiency of
development. Organizations today implement Agile/Scrum as the primary methodology to create cohesive
development teams that work close to their customers, gather incremental product requirements, and deliver
new features in short development cycles.

2.2.1 DevOps Overview

DevOps is evolving as a standard practice in many organizations to bring together software development and IT operations teams, with the goal of eliminating process bottlenecks in development, quality assurance (QA), and delivery cycles so that services reach end customers efficiently. Implementing a proper DevOps process requires careful planning and an assessment of the end-to-end pipeline from development to QA to delivery. Automation is a key aspect of DevOps. Traditional software development processes were handled mostly manually: when developers committed code to the source code repository, a test or build engineer would check out the code and build the project, resolving any conflicts. After the QA iterations, the release engineer would be responsible for taking the release-branch code and building the final shippable product. Along this pipeline, many steps were handled manually by people, which introduced delays in the release cycles. Agile development now takes advantage of automation tools that remove these manual steps.
Another core aspect of DevOps is providing the necessary freedom and resources to develop and test code
without having to rely on IT operations teams to re-provision or reconfigure hardware every time. With the
advances in technologies including virtualization, containers, Cloud multi-tenancy, self-service, and so on, it is
now possible to detach applications and end-users from physical hardware and provide the necessary tools
for them to create the right virtual environment to run their applications without directly modifying the physical
hardware or interfering with other users’ applications. With cloud self-service, users can request and provision
the hardware to meet their application specific requirements. Cloud administrators create the proper policy
and authorization workflows such that the provisioning process does not require manual steps. DevOps
essentially combines the role of the software engineer with that of the IT operator so that the end-to-end
software pipeline can be implemented with automation.

2.2.2 Monolithic vs Microservices Architecture

Software architecture over the last two to three decades has evolved from a monolithic application that
essentially delivered all feature functions in a single package to service-oriented architectures where the
application is divided up into multiple tiers with each tier providing programming interfaces (APIs) for its clients
to access the features via service calls. Software principles such as modularity, coupling, code reuse, etc., have remained the core principles that people still use; however, new programming languages, runtime facilities, mobile versus cloud-native techniques, etc., have evolved in recent years to shift software from the traditional architectures toward a microservice architecture.
Microservices are software applications organized around smaller subsets of the overall application's functionality, which makes them much more manageable than a bulky piece of software developed by tens of developers and coordinated in a complex dev/test process. Microservices are software modules that run as services with open APIs. They use open protocols such as HTTP and expose REST-based APIs, so the services can run anywhere, on-premises or on the public cloud, and still be locatable via well-defined service endpoints. Due to this design, microservices provide a loosely coupled architecture that can be maintained by smaller development teams and updated independently.
Containers provide a natural mechanism to implement microservices because they allow you to package the
application code and all its runtime library dependencies into a single image, which is portable across various
platforms. In addition, container orchestration platforms such as Kubernetes provide the mechanisms for

service location, routing, service replication, etc., which helps microservices development and runtime.
Developers do not need to explicitly write additional code for these types of services because the platform
provides these facilities.

Figure 1. Moving from a Monolithic to Microservices architecture
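To make the microservice shape concrete, here is a minimal sketch using only the Python standard library: a small, self-contained process exposing a REST-style API over HTTP. The route names, port, and in-memory data are hypothetical stand-ins; a real service would add logging, TLS, and packaging into a container image.

```python
# Minimal microservice sketch (illustrative only): one small service with an
# open HTTP/REST API, self-contained enough to package as a container image.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CATALOG = {"sku-1001": {"name": "widget", "stock": 42}}  # hypothetical data store


class CatalogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # /health lets an orchestrator (e.g. a Kubernetes liveness probe) check the service.
        if self.path == "/health":
            self._reply(200, {"status": "ok"})
        elif self.path.startswith("/items/"):
            item = CATALOG.get(self.path.rsplit("/", 1)[-1])
            if item:
                self._reply(200, item)
            else:
                self._reply(404, {"error": "not found"})
        else:
            self._reply(404, {"error": "unknown route"})

    def _reply(self, code, payload):
        body = json.dumps(payload).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CatalogHandler).serve_forever()
```

Because the service is stateless and speaks plain HTTP, it can be replicated behind a Kubernetes Service and located through a well-defined endpoint without any code changes.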

2.2.3 Continuous Integration/Continuous Delivery (CI/CD)

As discussed in the previous section, a successful DevOps practice requires a good amount of automation of the development, test, QA, and delivery pipeline. This is where CI/CD comes into play.
Continuous integration is the process by which new code development through build, unit testing, QA, and
delivery is automated end-to-end using build tools and process workflows. CI enables rapid integration of
code being developed by multiple engineers concurrently and committed into a source code repository. CI
enables rapid build and test of code so that software bugs and quality issues are identified quickly. Once the
code passes the test plan and QA it can then be pushed to the release branches for release integration.
Continuous Delivery (CD) enables automation around delivering code to production systems after performing
the necessary functional, quality, security, and performance tests. Continuous delivery enables bringing new
features in the software to the end users faster without going through the manual release test and promotion
steps.
More information on CI/CD with OpenShift is available in the following online book:
assets.openshift.com/hubfs/pdfs/DevOps_with_OpenShift.pdf
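As a schematic illustration of what such a pipeline automates on every commit, the sketch below scripts the test-build-push sequence in Python. The tool names (pytest, podman) and the registry/tag are assumptions for the sketch; real pipelines express these steps as Jenkins or Tekton pipeline definitions rather than ad hoc scripts.

```python
# Schematic CI/CD stage sketch: run tests, build a container image, push it.
# The commands and image tag are hypothetical; a real pipeline would encode
# these steps as Jenkins stages or Tekton tasks.
import subprocess
import sys

IMAGE = "registry.example.com/myteam/myapp:latest"  # hypothetical registry/tag


def run(step, cmd):
    print(f"=== {step}: {' '.join(cmd)}")
    subprocess.run(cmd, check=True)  # fail the pipeline fast on any error


if __name__ == "__main__":
    try:
        run("unit tests", ["python", "-m", "pytest", "-q"])        # continuous integration
        run("build image", ["podman", "build", "-t", IMAGE, "."])  # package the application
        run("push image", ["podman", "push", IMAGE])               # hand off for delivery
    except subprocess.CalledProcessError as err:
        sys.exit(f"pipeline failed at: {err.cmd}")
```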
Figure 2 shows a high-level view of the DevOps pipeline.

Figure 2. High-level view of the DevOps pipeline

As described previously, source code from multiple concurrent developers is integrated, tested, and deployed
to production through automation tools. The OpenShift Container Platform provides the mechanisms to
implement the CI/CD pipelines with tools such as Jenkins, Tekton, etc. See the following post on how to do
this: cloud.redhat.com/blog/cicd-with-openshift.
Note: The latest version of OpenShift Pipelines is introduced in OpenShift 4.13. A more detailed description is available at: docs.openshift.com/container-platform/latest/cicd/pipelines/understanding-openshift-pipelines.

2.2.4 Container Orchestration for Edge Infrastructure

As described in the previous section, the expected growth of edge sites to deploy business applications at the
edge poses a big challenge to IT infrastructures, platforms, and applications.
Red Hat offers a small-footprint edge OS (CoreOS) and OpenShift edge clusters in its edge solution. Red Hat OpenShift extends the capabilities of native Kubernetes to edge sites. It supports flexible edge site configurations, from single node to three nodes and from regional locations to the far edge, letting organizations mix and match topologies at their edge sites. Red Hat OpenShift can help organizations consistently manage infrastructure at scale, even at the most remote edge locations, without sacrificing security or stability. With Red Hat OpenShift, applications can be developed, deployed, and lifecycle-managed at scale in a secure, consistent, and reliable way across a wide variety of systems.

Figure 3. Red Hat Edge solution on Lenovo Servers

Red Hat OpenShift supports hardware acceleration for inference use cases, a broad ecosystem of AI/ML and application development tools, and integrated security and operations management capabilities. AI/ML applications can be deployed at edge sites to gather and analyse data faster, and AI models can be updated frequently, using the DevOps capabilities integrated in OpenShift, to match each use case more accurately. Organizations use OpenShift-supported edge sites to deliver the best delay-sensitive and data-driven application experiences to their customers.

3 Requirements

The functional and non-functional requirements for this reference architecture are described below.

3.1 Functional requirements

Table 1 lists the functional requirements.


Table 1. Functional requirements
● Container orchestration services: Red Hat OpenShift Container Platform is designed to run workload container images at scale using the Kubernetes container orchestrator and the CRI-O container runtime interface implementation.
● User self-service: OpenShift provides a web-based UI console that allows users to log in and manage their containerized workloads.
● Policy management: OpenShift allows administrators to configure role-based authorization to manage system resources such as compute, networking, storage, and application workloads.
● Cloud integration: OpenShift supports an integrated container registry, the Quay container registry, or public registries, which allow users to pull container images from other places. In addition, building container images on the OpenShift platform allows portability to other clouds such as Google Kubernetes Engine.
● Network and storage virtualization: Through built-in OpenShift networking and storage services for Kubernetes, users can access these abstracted resources from their container applications. In addition, OpenShift provides network infrastructure services through open protocols such as VXLAN. A variety of storage facilities can be exposed to container applications via Kubernetes persistent volume plug-ins and stateful sets.
● Command line tools: The OpenShift Container Platform provides CLI tools for almost all cluster operations and for container image operations. In addition, administrators can use Kubernetes CLI tools to access its services directly.
● CI/CD tools: A variety of open source and commercial tools are available, such as the Jenkins build server and integration with the GitHub source code repository, to implement CI/CD pipelines.
● Open source tools: The Red Hat container registry and other open container repositories such as quay.io and Docker Hub are available to OpenShift users for accessing open source tools such as nginx, Apache httpd, MySQL, PostgreSQL, Cassandra, etc.
● Automation tools: Many tools are available for automation, including Ansible, Chef, Puppet, etc.
● Service mesh: OpenShift Service Mesh is based on Istio along with the Kiali and Jaeger projects and is delivered via an Operator. OpenShift Service Mesh provides traffic monitoring, access control, discovery, security, resiliency, tracing, and reporting to a group of services by running as container sidecars.
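As the command line tools row notes, the same cluster APIs that oc and kubectl call are directly scriptable. As a minimal sketch, assuming the open-source kubernetes Python client (pip install kubernetes) and a kubeconfig with access to the cluster, the following lists the namespaces/projects visible to the current user:

```python
# Minimal sketch: talk to the cluster API directly, as oc/kubectl do.
# Assumes the `kubernetes` Python client and a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config, like oc/kubectl
v1 = client.CoreV1Api()

for ns in v1.list_namespace().items:
    print(ns.metadata.name, ns.status.phase)
```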

3.2 Non-functional requirements

Table 2 lists the non-functional requirements that are needed for typical OpenShift deployments.

Table 2. Non-functional requirements
● Scalability: The OpenShift Container Platform is designed for scale. The platform allows hundreds of containerized workloads to be scheduled and run without performance bottlenecks. Physical resources such as compute/worker nodes and storage can be scaled as the workload and user base grow.
● Load balancing: OpenShift control plane nodes provide the core API and management services for the Kubernetes cluster. For bare-metal/VMware installations, OpenShift 4.13 provides MetalLB as an option, and users can deploy other commercial load balancers such as F5 or A10 for their production environments. In addition, Kubernetes handles load balancing of the workload containers through built-in scheduler features, network routing, replication services, etc.
● Fault tolerance: Fault tolerance can be provided to critical container workloads such as databases via Kubernetes built-in mechanisms. In addition, data and configuration settings for container images can be persisted across instances via persistent volume claims and stateful sets.
● Physical footprint: The OpenShift Container Platform can be implemented with 3 control plane nodes and 2 compute/worker nodes, where all services are consolidated, and then scaled out across multiple physical nodes to distribute services and containers. Hence, the architecture is quite flexible and allows deployments to start small and then scale out.
● Ease of installation: There are three installation methods for OCP 4: IPI, UPI, and the Assisted Installer.
  ● IPI (Installer Provisioned Infrastructure) is the automated installer method, where the OpenShift installer provisions the entire infrastructure, including control plane and compute/worker nodes, automatically using Red Hat Enterprise Linux CoreOS (RHCOS) and Ignition files. IPI is the preferred long-term method for OCP deployment.
  ● UPI (User Provisioned Infrastructure) uses pre-existing infrastructure nodes provisioned in advance of the OCP installation.
  ● The Assisted Installer provides a more user-friendly, end-to-end deployment experience for bare-metal/VMware environments. It provides an interactive experience in which you can see, modify, and configure the cluster options, then boot the nodes from an ISO and let the installation run.
  OCP 4 uses RHCOS for the control plane nodes in all cases. RHCOS is recommended for worker nodes as well, although RHEL can also be used for worker nodes. Bare-metal (and other) installations start by installing RHCOS; usually only the provisioning node runs RHEL.
● Ease of management/operations: Administrator tools and the OpenShift web console allow day-2 management operations to be performed. In addition, the Lenovo XClarity Administrator tool enables hardware monitoring and management.
● Flexibility: The OpenShift Container Platform can be deployed in both development/test and production settings. Various options are available for third-party network and storage implementations for OpenShift.
● Security: The Red Hat OpenShift Container Platform has built-in enterprise-grade security, all the way from the operating system layer up to the container registries. Both built-in authentication/authorization facilities and integration with external authentication/authorization tools such as OpenLDAP are supported.
● High performance: OpenShift and Kubernetes have achieved wide industry adoption due to the robustness and high performance of the platform. Enterprises can implement very large-scale OpenShift environments supporting hundreds of users and thousands of container workloads without performance bottlenecks.
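The fault tolerance row above relies on persistent volume claims (PVCs). As a minimal sketch, again assuming the kubernetes Python client and a kubeconfig, the following requests persistent storage for a workload; the namespace and storage class name are hypothetical and would be replaced by ones your cluster actually offers:

```python
# Minimal sketch: request persistent storage via a PersistentVolumeClaim,
# the mechanism used to persist data for stateful container workloads.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "db-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        "storageClassName": "thin-csi",  # hypothetical; use a class your cluster provides
    },
}

core.create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)
print("PVC created; a pod or StatefulSet can now mount the 'db-data' claim")
```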

4 Architectural overview

The OpenShift Container Platform is a complete container application platform that provides all aspects of the
application development process in one consistent solution across multiple infrastructure footprints. OpenShift
integrates all of the architecture, processes, platforms, and services needed to help development and
operations teams traverse traditional siloed structures and produce applications that help businesses
succeed. Figure 4 below shows the high-level architecture of the Red Hat OpenShift Container Platform and
the core building blocks with Lenovo ThinkSystem/ThinkEdge/ThinkAgile HX servers. OpenShift is a platform
designed to orchestrate containerized workloads across a cluster of nodes. The system uses Kubernetes as
the core container orchestration engine, which manages the container images and their lifecycle.

Figure 4. Red Hat OpenShift Container Platform Architecture

The physical configuration of the OpenShift platform with ThinkSystem series is based on the Kubernetes
cluster architecture. The control plane node is the primary node on which the Kubernetes scheduler, along
with the distributed cluster data store (etcd), the REST API services, and other associated management
services run. OpenShift uses three control plane nodes in all clusters except for the single node cluster
available for edge deployments. Compute/worker nodes run the users’ containerized applications on top of the
CRI-O container runtime environment.
ThinkAgile HX Series provides a hyper-converged infrastructure with Nutanix’s industry-leading software
preloaded on Lenovo platforms. A hyper-converged infrastructure seamlessly pools compute and storage to
deliver high performance for the virtual workloads and provides flexibility to combine the local storage using a
distributed file system to eliminate shared storage such as SAN or NAS. These factors make the solution cost
effective without compromising the performance. They offer the scalability and resilience that enterprise
Kubernetes demands, and seamlessly integrate infrastructure layers and cloud-native platform engine.
ThinkAgile HX runs virtualized instances of OpenShift, whereas ThinkEdge and ThinkSystem can run OpenShift either virtualized or on bare metal (bare metal is preferred).
Cloud-native platform services, application services, and developer services run on top of the OpenShift Kubernetes Engine. They provide services to manage workloads, build cloud-native apps, and enhance developer productivity.

5 Component model

This chapter describes the components and logical architecture of the Red Hat OpenShift solution, as shown in Figure 5.

Figure 5. Red Hat OpenShift Container Platform logical architecture

All the OpenShift nodes are connected via the internal network, where they can communicate with each other. Furthermore, the OpenShift SDN (based on Open vSwitch) creates its own network for OpenShift pod-to-pod communication. With the multi-tenant plugin, pods can communicate with each other only if they share the same project namespace. A virtual IP address managed by Keepalived on two load balancer (LB) hosts provides external access to the OpenShift web console and applications. Applications can use the Local Storage Operator, storage that supports the CSI interface, OpenShift Data Foundation (ODF), etc. as backend storage in the OpenShift cluster. OVN-Kubernetes is available in OCP 4.13 and is the default network plugin in OpenShift.
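As a quick way to confirm which network plugin a given cluster uses, the sketch below reads the cluster-wide Network configuration resource through the Kubernetes custom objects API. It assumes the kubernetes Python client, a kubeconfig with sufficient read access, and an OpenShift cluster exposing the config.openshift.io API group:

```python
# Minimal sketch: read the cluster network configuration to see the active
# network plugin (e.g. OVNKubernetes) and the pod/service CIDRs.
from kubernetes import client, config

config.load_kube_config()
cfg = client.CustomObjectsApi().get_cluster_custom_object(
    group="config.openshift.io", version="v1", plural="networks", name="cluster"
)

print("network plugin:", cfg["spec"]["networkType"])      # e.g. OVNKubernetes
print("cluster network:", cfg["spec"]["clusterNetwork"])  # pod address ranges
print("service network:", cfg["spec"]["serviceNetwork"])  # service address ranges
```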

5.1 OpenShift infrastructure components

Figure 6 shows the four types of nodes in an OpenShift cluster deployed on bare-metal servers: bastion, infrastructure, control plane, and compute/worker (or enhanced HCI worker nodes with OpenShift Data Foundation). The temporary bootstrap node is not shown in Figure 6; it can be removed from the cluster after deployment and is introduced in section 5.1.3. Backend storage is not covered in this section; it is introduced in section 6.7.

Figure 6. OpenShift Nodes with bare metal deployment

For edge sites, it is recommended to use a lightweight, cost-saving approach, adapted to the environment, when mapping the logical function nodes onto physical servers. More detailed information is in sections 6.3, 6.13, and 6.15.2.

Figure 7 shows the three types of nodes in an OpenShift cluster deployed as VMs in a Nutanix cluster on Lenovo ThinkAgile HX servers: infrastructure, control plane, and compute/application nodes.

Figure 7. OpenShift Nodes on ThinkAgile HX

5.1.1 Bastion node


This is a dedicated node that serves as the main deployment and management server for the OpenShift cluster. It is used as the logon node for cluster administrators to perform system deployment and management operations. OpenShift installation files or Lenovo Open Cloud-Automation (LOC-A) tools run on the bastion node for manual deployment or auto-deployment of the OpenShift Container Platform. In addition, this node is also used for hardware management via tools such as xCAT and Lenovo XClarity Administrator. The bastion node runs RHEL 8.6 Server with the Linux KVM packages installed.

5.1.2 Infrastructure node


The OpenShift infrastructure node runs load balancing services such as Keepalived and the HAProxy router. The HAProxy router provides routing functions for OpenShift applications. It currently supports HTTP(S) traffic and TLS-enabled traffic via Server Name Indication (SNI). Additional applications and services, such as the container image registry and cluster metrics and monitoring, can be deployed on OpenShift infrastructure nodes. The OpenShift infrastructure node runs RHEL Server 8.6. Other commercial load balancers, such as F5, NGINX, Avi Networks, or Fortinet, can be deployed here in production environments.

5.1.3 Bootstrap node
OpenShift Container Platform uses a temporary bootstrap node during initial configuration to provide the
required information to the control plane node. It boots by using an Ignition config file that describes how to
create the cluster. The bootstrap node creates the control plane nodes, and control plane nodes create the
compute/worker nodes. The control plane nodes install additional services in the form of a set of Operators.
The OpenShift bootstrap node runs RHCOS 4.13.

5.1.4 Control plane node


The OpenShift Container Platform control plane node is a server that performs control functions for the whole
cluster environment. The control plane is responsible for the creation, scheduling, and management of all
objects specific to OpenShift. It includes API, controller manager, and scheduler capabilities in one OpenShift
binary. Both OpenShift control plane and etcd are running in highly available environments. OpenShift uses
the lightweight RHCOS for all control plane nodes, a container-optimized OS based on RHEL 8.

5.1.5 Compute/Worker node


The OpenShift compute/worker nodes run containerized applications created and deployed by developers. An
OpenShift compute/worker node contains the OpenShift node components, including the container engine
CRI-O, the node agent Kubelet, and a service proxy, kube-proxy. An OpenShift application node runs RHCOS
4.13 or RHEL 8.6. OpenShift Data Foundation (ODF) can run along with OpenShift components and
applications on compute/worker nodes to achieve a hyperconverged-like experience. On ThinkAgile HX
platform, Nutanix storage can be used by Applications running on OpenShift via Nutanix CSI.

Lenovo provides a solution to install a three-node cluster by leveraging Lenovo Open Cloud Automation (LOC-A). A three-node cluster allows the control plane and compute roles to run on the same servers. This small footprint helps customers extend their IT capabilities to edge locations. Detailed information about Lenovo Open Cloud Automation (LOC-A) can be found in section 6.5.

5.2 OpenShift components architecture

Kubernetes (also known as k8s or simply “kube”) is an open source container orchestration engine that
automates many of the manual processes involved in deploying, managing, and scaling containerized
applications at massive scale. Kubernetes is designed to take your input on where you want your software to
run, and the platform takes care of almost everything else.
Kubernetes was originally developed and designed by engineers at Google. Red Hat was an early adopter, one of the first companies to work alongside Google on Kubernetes even prior to launch, and is the second-leading contributor to the Kubernetes upstream project. Google donated the Kubernetes project to the newly formed Cloud Native Computing Foundation (CNCF) in 2015.
K8s allows you to cluster together groups of hosts running Linux containers, and helps you easily and efficiently manage those clusters. Kubernetes clusters can span hosts across on-premises, public, private, or hybrid clouds. For this reason, Kubernetes is an ideal platform for hosting cloud-native applications that require rapid scaling.
A Kubernetes cluster is a set of node machines for running containerized applications. At a minimum, a cluster
contains a control plane and one or more compute/worker nodes. The control plane is responsible for
maintaining the desired state of the cluster, such as which applications are running and which container
images they use. Nodes actually run the applications and workloads.
The cluster is the heart of Kubernetes’ key advantage: the ability to schedule and run containers across a
group of machines, be they physical or virtual, on premises or in the cloud.
A Kubernetes cluster has a desired state, which defines which applications or other workloads should be
running, along with which images they use, which resources should be made available for them, and other
such configuration details. Kubernetes provides the orchestration capabilities for containers, including

scheduling the container images to nodes in a cluster, managing the container life cycle, availability,
replication, persistent and non-persistent storage for containers, policy, multi-tenancy, network virtualization,
routing, hierarchical clusters via federation APIs, and so forth.
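The notion of desired state can be seen directly through the API. As a minimal sketch, assuming the kubernetes Python client, a kubeconfig, and a hypothetical Deployment named myapp in namespace demo, the following compares what was requested (spec) with what the cluster has converged to (status):

```python
# Minimal sketch of "desired state": compare a Deployment's spec (what you
# asked for) with its status (what the controllers have reconciled so far).
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment(name="myapp", namespace="demo")
desired = dep.spec.replicas
ready = dep.status.ready_replicas or 0  # None until pods become ready

print(f"desired replicas: {desired}, ready replicas: {ready}")
if ready < desired:
    print("controllers are still reconciling toward the desired state")
```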
Note: More introductory material on Kubernetes can be found on these web pages: What Is Kubernetes?, An Introduction to Enterprise Kubernetes, and What is a Kubernetes Cluster?

The core of Red Hat OpenShift is Kubernetes. The OpenShift components are described on this website: docs.openshift.com/container-platform/latest/architecture/architecture.html.
Figure 8 shows the OpenShift high-level architecture and components.

Figure 8. OpenShift component architecture

The control plane nodes, as described previously, form the control plane and are responsible for core services such as the API interface, authentication/authorization, container scheduling, controller management, and the configuration database. The control plane manages the state of the cluster and the lifecycle of the user container images. For redundancy and high availability, three control plane nodes form a cluster with frontend load balancers such as HAProxy. (A note on load balancers: load balancers deployed for control plane nodes are only for the API; the load balancers used by applications are deployed to the compute/worker nodes hosting the routers.) The command line interface to OpenShift is implemented via the "oc" command.
The compute/worker nodes are where application container images are executed. In OpenShift terminology, the compute/worker nodes run "pods", each of which manages one or more running containers. Each node implements a "kubelet", which is the node-level controller that manages the pods and interacts with the OpenShift control plane.
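As a minimal sketch of the node/pod relationship described above, assuming the kubernetes Python client, a kubeconfig, and a hypothetical node named worker-0, the following lists the pods that node's kubelet is currently managing:

```python
# Minimal sketch: list the pods scheduled onto one node, i.e. the workloads
# that the node's kubelet manages on behalf of the control plane.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_pod_for_all_namespaces(field_selector="spec.nodeName=worker-0")
for pod in pods.items:
    containers = [c.name for c in pod.spec.containers]
    print(pod.metadata.namespace, pod.metadata.name, containers)
```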

In addition to the core OpenShift services, the Red Hat OpenShift platform also includes other features such
as the web-based user self-service console, monitoring, logging and metrics, an integrated container registry,
storage management, authentication/authorization, automation via Operators, and other administrative tools
for managing the container platform.

6 Operational model

This chapter describes the options for mapping the logical components of Red Hat OpenShift onto Lenovo
ThinkSystem/ThinkEdge/ThinkAgile HX servers, and storage.

6.1 Hardware components

The following section describes the hardware components that can be used in an OpenShift implementation.

6.1.1 Lenovo ThinkSystem SR630 V3 1U Server

The Lenovo ThinkSystem SR630 V3 is an ideal 2-socket 1U rack server for small businesses up to large
enterprises that need industry-leading reliability, management, and security, as well as maximizing
performance and flexibility for future growth. The SR630 V3 is designed to handle a wide range of workloads,
such as databases, virtualization and cloud computing, virtual desktop infrastructure (VDI), infrastructure
security, systems management, enterprise applications, collaboration/email, streaming media, web, and HPC.
● ThinkSystem SR630 V3 supports up to two fourth-generation Intel® Xeon® Scalable Processors with
up to 60-core processors, up to 112.5 MB of last level cache (LLC), up to 4800 MHz memory speeds,
and up to 4 Ultra Path Interconnect (UPI) links at 16 GT/s.
● Offers flexible and scalable internal storage in a 1U rack form factor with up to 12x 2.5-inch drives or
up to 4x 3.5-inch drives or up to 16x EDSFF drives, providing a wide selection of SAS/SATA
HDD/SSD and NVMe SSD types and capacities.
● Provides I/O scalability with the OCP slot, PCIe 5.0 slot for an internal storage controller, and up to
three PCI Express (PCIe) 5.0 I/O expansion slots in a 1U rack form factor.

Figure 9. Lenovo ThinkSystem SR630 V3 Server Front Views

Figure 10. Lenovo ThinkSystem SR630 V3 Server Rear Views

For more information, see the Lenovo ThinkSystem SR630 V3 Product Guide:
ThinkSystem SR630 V3 Server Product Guide.

6.1.2 Lenovo ThinkSystem SR650 V3 2U Server

The Lenovo ThinkSystem SR650 V3 is an ideal 2-socket 2U rack server for small businesses up to large
enterprises that need industry-leading reliability, management, and security, as well as maximizing
performance and flexibility for future growth. The SR650 V3 is designed to handle a wide range of workloads,
such as databases, virtualization and cloud computing, virtual desktop infrastructure (VDI), infrastructure
security, systems management, enterprise applications, collaboration/email, streaming media, web, and HPC.
The SR650 V3 server supports:
• Up to two fourth-generation Intel® Xeon® Scalable Processors with up to 60-core processors, up to
112.5 MB of last level cache (LLC), up to 4800 MHz memory speeds, and up to 4 Ultra Path
Interconnect (UPI) links at 16GT/s.
• With RDIMMs: Up to 8TB by using 32x 256GB 3DS RDIMMs.
• Up to 40x 2.5-inch or 20x 3.5-inch drive bays with an extensive choice of NVMe PCIe SSDs,
SAS/SATA SSDs, and SAS/SATA HDDs.
• Flexible I/O network expansion options with the OCP slot, the dedicated storage controller slot, and up to 10x PCIe slots, of which up to 9x can be PCIe 5.0.

Figure 11. Lenovo ThinkSystem SR650 V3 Server Front Views

Figure 12. Lenovo ThinkSystem SR650 V3 Server Rear Views

For more information, see the Lenovo ThinkSystem SR650 V3 Product Guide:
ThinkSystem SR650 V3 Server Product Guide

6.1.3 Lenovo ThinkSystem SR630 V2 1U Server

Lenovo ThinkSystem SR630 V2 is an ideal 2-socket 1U rack server for small businesses up to large
enterprises that need industry-leading reliability, management, and security, as well as maximizing
performance and flexibility for future growth. The SR630 V2 is designed to handle a wide range of workloads,
such as databases, virtualization and cloud computing, virtual desktop infrastructure (VDI), infrastructure
security, systems management, enterprise applications, collaboration/email, streaming media, web, and HPC.

● ThinkSystem SR630 V2 supports two third-generation Intel® Xeon® Scalable Processors with up to
40-core processors, up to 60 MB of last level cache (LLC), up to 3200 MHz memory speeds, and up
to 11.2 GT/s Ultra Path Interconnect (UPI) links.
● Offers flexible and scalable internal storage in a 1U rack form factor with up to 12x 2.5-inch drives or
up to 4x 3.5-inch drives or up to 16x EDSFF drives, providing a wide selection of SAS/SATA
HDD/SSD and NVMe SSD types and capacities.
● Provides I/O scalability with the OCP slot, PCIe 4.0 slot for an internal storage controller, and up to
three PCI Express (PCIe) 4.0 I/O expansion slots in a 1U rack form factor.

Figure 13. Lenovo ThinkSystem SR630 V2 Server Front Views

Figure 14. Lenovo ThinkSystem SR630 V2 Server Rear Views

For more information, see the Lenovo ThinkSystem SR630 V2 Product Guide:

ThinkSystem SR630 V2 Server Product Guide.

6.1.4 Lenovo ThinkSystem SR650 V2 2U Server

The Lenovo ThinkSystem SR650 V2 is an ideal 2-socket 2U rack server for small businesses up to large
enterprises that need industry-leading reliability, management, and security, as well as maximizing
performance and flexibility for future growth. The SR650 V2 is designed to handle a wide range of workloads,
such as databases, virtualization and cloud computing, virtual desktop infrastructure (VDI), infrastructure
security, systems management, enterprise applications, collaboration/email, streaming media, web, and HPC.
The SR650 V2 server supports:
• Up to two third-generation Intel® Xeon® Scalable Processors with up to 40-core processors, up to 60
MB of last level cache (LLC), up to 3200 MHz memory speeds, and up to 11.2 GT/s Ultra Path
Interconnect (UPI) links.
• With RDIMMs: Up to 8TB by using 32x 256GB 3DS RDIMMs
With Persistent Memory: Up to 6TB by using 16x 128GB 3DS RDIMMs and 16x 256GB Pmem
modules
• Up to 40x 2.5-inch or 20x 3.5-inch drive bays with an extensive choice of NVMe PCIe SSDs,
SAS/SATA SSDs, and SAS/SATA HDDs
• Flexible I/O Network expansion options with the OCP slot, the dedicated storage controller slot, and
up to 8x PCIe slots

Figure 15. Lenovo ThinkSystem SR650 V2 Server Front Views

Figure 16. Lenovo ThinkSystem SR650 V2 Server Rear Views

For more information, see the Lenovo ThinkSystem SR650 V2 Product Guide:
ThinkSystem SR650 V2 Server Product Guide

6.1.5 Lenovo ThinkSystem SE350 Edge Server

The ThinkSystem SE350 is a purpose-built server that is half the width and significantly shorter than a
traditional server, making it ideal for deployment in tight spaces. It can be mounted on a wall, stacked on a
shelf or mounted in a rack.
The ThinkSystem SE350 puts increased processing power, storage and network closer to where data is
generated, allowing actions resulting from the analysis of that data to take place more quickly. The server has
wired connections up to 10GbE and optionally supports both Wi-Fi and LTE wireless connectivity.
Since these edge servers are typically deployed outside of secure data centers, they include technology that
encrypts the data stored on the device if it is tampered with, only enabling authorized users to access it.
The SE350 edge server supports:
• One Intel Xeon D Processor with up to 16-core processors, core speeds of up to 2.2 GHz, and
TDP ratings of up to 100W.
• Up to 4 TruDDR4 memory DIMMs and up to 256 GB of memory using 64 GB DIMMs.
• Up to 8 M.2 data drives (SATA or NVMe) providing efficient and rugged storage for edge workloads.
• Supports 1 or 2 additional M.2 SATA drives for OS boot and applications, allowing the
convenience of separating application code from data.
• Two 10 GbE SFP+ or 10GBASE-T ports standard for high-speed networking to back-end servers.
• Support for the NVIDIA T4 GPU for enhanced workloads at the edge of your network.

Figure 17. Front view of the Lenovo ThinkSystem SE350 with 10G SFP+ network module

Figure 18. Front view of the Lenovo ThinkSystem SE350 with 10GBASE-T network module

Figure 19. Front view of the Lenovo ThinkSystem SE350 with Wireless network module

Figure 20. Rear view of the Lenovo ThinkSystem SE350

Figure 21. View of the Lenovo ThinkSystem SE350

Figure 22. Bookshelf mount (with optional locking bezels)

For more information, see the Lenovo ThinkSystem SE350 Product Guide:
ThinkSystem SE350 Server Product Guide

6.1.6 Lenovo ThinkEdge SE360 V2 Server

The SE360 V2 is a purpose-built server that is 2U high and half width, making it significantly smaller than a traditional server and ideal for deployment in tight spaces. It can be mounted on a wall or ceiling, placed on a desk, or mounted in a rack.
The ThinkEdge SE360 V2 server puts increased processing power, storage and network closer to where data is generated, allowing actions resulting from the analysis of that data to take place more quickly. The server has wired connections for 1GbE and 10GbE/25GbE and optionally supports wireless LAN (WLAN) to enable connectivity to Wi-Fi clients.
Since these edge servers are typically deployed outside of secure data centers, they include technology that
encrypts the data stored on the device if it is tampered with, only enabling authorized users to access it.
The ThinkEdge SE360 V2 server supports:
• One Intel Xeon D Processor with up to 16-core processors, core speeds of up to 2.1 GHz, and TDP ratings of up to 100W.
• Up to 4 TruDDR4 memory DIMMs and up to 256 GB of memory using 64 GB DIMMs.
• Up to 8 M.2 NVMe data drives, providing efficient and rugged storage for edge workloads.
• 1 or 2 additional M.2 NVMe drives for OS boot and applications, allowing the convenience of separating application code from data.
• A 1GbE I/O board or a 10GbE/25GbE SFP28 I/O board to support low- and high-speed networking to back-end servers.
• Support for the NVIDIA A2, NVIDIA L4, or Qualcomm Cloud AI 100 for enhanced workloads at the edge of your network.

Figure 23. Front view of the Lenovo ThinkEdge SE360 V2 server with 1GbE network module

Figure 24. Front view of the Lenovo ThinkEdge SE360 V2 server with 10/25GbE network module

Figure 25. Rear view of the Lenovo ThinkEdge SE360 V2 with DC input connector

Figure 26. Rear view of the Lenovo ThinkEdge SE360 V2 with AC input connector

Figure 27. View of the Lenovo ThinkEdge SE360 V2

For more information, see the Lenovo ThinkEdge SE360 V2 Product Guide:
ThinkEdge SE360 V2 Server Product Guide

6.1.7 Lenovo ThinkEdge SE450 Server

The ThinkEdge SE450 is a single-socket server with a 2U height and short depth case, making it suitable for
deployment in shallow cabinets. It can be mounted on a wall, stacked on a shelf, or mounted in a rack.
The SE450 puts increased processing power, storage, and network closer to where data is generated, allowing actions resulting from the analysis of that data to take place more quickly. The server is also designed for wireless LAN (WLAN) connectivity for even greater flexibility in deployment options.
Since these edge servers are typically deployed outside of secure data centers, they include technology that
encrypts the data stored on the device if it is tampered with, only enabling authorized users to access it.
The ThinkEdge SE450 server supports:
• One Intel Xeon Scalable "Ice Lake" processor with up to 36 cores, core speeds of up to 3.0 GHz, and TDP ratings of up to 205W.
• Up to 8 TruDDR4 memory DIMMs and up to 1 TB of memory using 128 GB DIMMs.
• Up to 4x internal SSD drive bays supporting non-hot-swap trayless NVMe or SATA SSD drives.
• Up to 2x 2.5-inch hot-swap drive bays, front accessible, supporting SAS or SATA SSD drives (mutually exclusive with slots 3 and 4 in Riser 2).
• Up to 2x M.2 drives for boot functions, supporting SATA or NVMe drives.
• Support for four network adapters, up to 100 Gb Ethernet or HDR100 InfiniBand, for high-speed networking to back-end servers.
• Up to 4x single-wide GPUs or up to 2x double-wide GPUs.

Figure 28. Front view of the Lenovo ThinkEdge SE450

Figure 29. Rear view of the Lenovo ThinkEdge SE450

Figure 30. View of the ThinkEdge SE450 with security bezel attached

Figure 31. SE450 installed on a wall in a manufacturing environment

For more information, see the Lenovo ThinkEdge SE450 Product Guide:
ThinkEdge SE450 Server Product Guide

6.1.8 Lenovo ThinkEdge SE455 V3 Server

The ThinkEdge SE455 V3 is a purpose-built, single-socket server with a 2U height and short-depth case, significantly shorter than a traditional server and therefore suitable for deployment in shallow cabinets and other tight spaces. It can be mounted in a 2-post or 4-post rack. The SE455 V3 uses the new AMD EPYC 8004 Series "Siena" processors for an ideal mix of performance and power efficiency, and it puts increased processing power, storage, and network closer to where data is generated, allowing actions resulting from the analysis of that data to take place more quickly.
Since these edge servers are typically deployed outside of secure data centers, they include technology that
encrypts the data stored on the device, protecting the data if the system is tampered with, only enabling
authorized users to access it.
The ThinkEdge SE455 V3 server supports:

• One AMD EPYC 8004 ("Siena") processor with up to 64 cores, core speeds of up to 2.65 GHz, and TDP ratings of up to 225W.
• Up to 6 TruDDR5 memory DIMMs and up to 576 GB of memory using 96 GB DIMMs.
• Support for up to 8x 2.5-inch drive bays, 4x hot-swap drives at the front of the server, and 4x non-
hot swap drives internal to the server. Optional RAID with the addition of a RAID adapter installed
in a slot.
• Supports M.2 drives for convenient operating system boot functions. Available M.2 adapters
support either one M.2 drive or two M.2 drives. M.2 with RAID is available now using a PCIe
RAID adapter; support for an M.2 adapter with integrated RAID is planned for 1Q/2024.
• The server offers PCI Express 5.0 (PCIe Gen 5) I/O expansion capabilities that double the theoretical maximum bandwidth of PCIe 4.0 (32 GT/s in each direction for PCIe 5.0, compared to 16 GT/s with PCIe 4.0). A PCIe 5.0 x16 slot provides 128 GB/s of bandwidth, enough to support a 400GbE network connection.
• Supports up to 6x single-width GPUs or 2x double-wide GPUs, for substantial processing power
in an edge system.
• Supports up to 6x AMD Alveo V70 Datacenter Accelerator adapters, tuned for video analytics and
natural language processing workloads.

Figure 32. Front view of the ThinkEdge SE455 V3

Figure 33. Rear view of the ThinkEdge SE455 V3

Figure 34. View of the ThinkEdge SE455 V3 with security bezel attached

For more information, see the Lenovo ThinkEdge SE455 V3 Product Guide:
ThinkEdge SE455 V3 Server Product Guide

6.1.9 Lenovo ThinkSystem SR645 V3 1U Server

The Lenovo ThinkSystem SR645 V3 is a dense, high performance, 2-socket 1U rack server. It is suitable for
small businesses to large enterprises, and especially cloud service providers. The server features the AMD
EPYC 9004 "Genoa" family of processors and support for the new PCIe 5.0 standard for I/O. It is designed to
handle a wide range of workloads such as cloud computing, virtualization, VDI, enterprise applications, and
database management.

• ThinkSystem SR645 V3 supports up to two fourth-generation AMD EPYC 9004 processors with up to 96 cores per processor, up to 384 MB of L3 cache, up to 4800 MHz memory speeds, and 4x dedicated xGMI x16 interprocessor links.
• Offers flexible and scalable internal storage in a 1U rack form factor with up to 12x 2.5-inch drives, up to 4x 3.5-inch drives, or up to 16x E1.S EDSFF drives, providing a wide selection of SAS/SATA HDD/SSD and NVMe SSD types and capacities.
• Provides I/O scalability with the OCP slot, a PCIe 5.0 slot for an internal storage controller, and up to three PCI Express (PCIe) expansion slots (2x PCIe 5.0, 1x PCIe 4.0) in a 1U rack form factor.

Figure 35. Lenovo ThinkSystem SR645 V3 Server Front Views

Figure 36. Lenovo ThinkSystem SR645 V3 Server Rear Views

For more information, see the Lenovo ThinkSystem SR645 V3 Product Guide:
ThinkSystem SR645 V3 Server Product Guide.

6.1.10 Lenovo ThinkSystem SR665 V3 2U Server

The Lenovo ThinkSystem SR665 V3 is a 2-socket 2U server that features the AMD EPYC 9004 "Genoa" family of processors. With up to 96 cores per processor and support for the new PCIe 5.0 standard for I/O, the SR665 V3 offers the ultimate in two-socket server performance in a 2U form factor. The server is ideal for dense workloads that can take advantage of GPU processing and high-performance NVMe drives. The SR665 V3 is designed to handle a wide range of workloads, such as AI inference, virtualization, VDI, HPC, and hyperconverged infrastructure.
The SR665 V3 server supports:

• Up to two AMD EPYC 9004 "Genoa" processors, with up to 96 cores per processor, up to 384 MB of L3 cache, up to 4800 MHz memory speeds, and up to 4x xGMI x16 interprocessor links, 1 of which can be used for an additional 16 PCIe 5.0 lanes.
• Up to 6TB of system memory.
• Up to 40x 2.5-inch or 20x 3.5-inch drive bays with an extensive choice of NVMe PCIe SSDs, SAS/SATA SSDs, and SAS/SATA HDDs.
• Flexible I/O and network expansion options with the OCP slot, the dedicated storage controller slot, and up to 10x PCIe slots, of which up to 9 can be PCIe 5.0.

Figure 37. Lenovo ThinkSystem SR665 V3 Server Front Views

Figure 38. Lenovo ThinkSystem SR665 V3 Server Rear Views

For more information, see the Lenovo ThinkSystem SR665 V3 Product Guide:
ThinkSystem SR665 V3 Server Product Guide

6.1.11 Lenovo ThinkSystem SR635 V3 1U Server

The Lenovo ThinkSystem SR635 V3 is a 1-socket 1U server that features the AMD EPYC 9004 "Genoa" family of processors. With up to 96 processor cores and support for the new PCIe 5.0 standard for I/O, the SR635 V3 offers the ultimate in one-socket server performance in a 1U form factor. The server is ideal for dense workloads that can take advantage of GPU processing and high-performance NVMe drives. It is designed to handle a wide range of workloads such as AI inference, VDI, OLTP, analytics, HPC, and software-defined storage.

• ThinkSystem SR635 V3 supports one fourth-generation AMD EPYC 9004 processor with up to 96 cores, up to 384 MB of L3 cache, up to 4800 MHz memory speeds, and 128x PCIe 5.0 lanes per processor.
• Offers flexible and scalable internal storage in a 1U rack form factor with up to 12x 2.5-inch drives or
up to 16x E1.S EDSFF drives, providing a wide selection of SAS/SATA HDD/SSD and NVMe SSD
types and capacities.
• Provides I/O scalability with the OCP slot, PCIe 5.0 slot for an internal storage controller, and up to
five PCI Express (PCIe) slots (3 rear, 2 front) in a 1U rack form factor.

Figure 39. Lenovo ThinkSystem SR635 V3 Server Front Views

Figure 40. Lenovo ThinkSystem SR635 V3 Server Rear Views

For more information, see the Lenovo ThinkSystem SR635 V3 Product Guide:
ThinkSystem SR635 V3 Server Product Guide.

6.1.12 Lenovo ThinkSystem SR655 V3 2U Server

The Lenovo ThinkSystem SR655 V3 is a 1-socket 2U server that features the AMD EPYC 9004 "Genoa" family of processors. With up to 96 cores per processor and support for the new PCIe 5.0 standard for I/O, the SR655 V3 offers the ultimate in one-socket server performance in a 2U form factor. The server is ideal for dense workloads that can take advantage of GPU processing and high-performance NVMe drives. The SR655 V3 is designed to handle a wide range of workloads, such as AI inference, VDI, OLTP, analytics, and software-defined storage.
The SR655 V3 server supports:

• One AMD EPYC 9004 "Genoa" processor, with up to 96 cores, up to 384 MB of L3 cache, up to 4800 MHz memory speeds, and up to 64x PCIe 5.0 lanes per processor.
• Up to 1.5TB of system memory.
• Up to 40x 2.5-inch or 20x 3.5-inch drive bays with an extensive choice of NVMe PCIe SSDs,
SAS/SATA SSDs, and SAS/SATA HDDs.
• Flexible I/O and network expansion options with the OCP slot, the dedicated storage controller slot, and up to 10x PCIe slots, of which up to 9 can be PCIe 5.0.

Figure 41. Lenovo ThinkSystem SR655 V3 Server Front Views

Figure 42. Lenovo ThinkSystem SR655 V3 Server Rear Views

For more information, see the Lenovo ThinkSystem SR655 V3 Product Guide:
ThinkSystem SR655 V3 Server Product Guide

6.1.13 Lenovo ThinkAgile HX Series

Lenovo ThinkAgile HX Series appliances and certified nodes are designed to help you simplify IT
infrastructure, reduce costs, and accelerate time to value. These hyper-converged appliances from Lenovo
combine industry-leading hyper-convergence software from Nutanix with Lenovo enterprise platforms.
Several common uses are:
• Enterprise workloads
• Private and hybrid clouds
• Remote office and branch office (ROBO)
• Server virtualization
• Virtual desktop infrastructure (VDI)
• Small-medium business (SMB) workloads
Starting with as few as three nodes to keep your acquisition costs down, the Lenovo ThinkAgile HX Series
appliances and certified nodes are capable of immense scalability as your needs grow.
Lenovo ThinkAgile HX Series appliances and certified nodes are available in five families that can be tailored
to your needs:
• Lenovo ThinkAgile HX1000 Series: optimized for ROBO environments
• Lenovo ThinkAgile HX2000 Series: optimized for SMB environments
• Lenovo ThinkAgile HX3000 Series: optimized for compute-heavy environments
• Lenovo ThinkAgile HX5000 Series: optimized for storage-heavy workloads
• Lenovo ThinkAgile HX7000 Series: optimized for high-performance workloads
Table 3 shows the similarities and differences between ThinkAgile HX Series appliances and certified nodes.
Table 3: Comparison of ThinkAgile HX Series appliances and certified nodes
Feature                                              HX Series appliances    HX Series certified nodes

Validated and integrated hardware and firmware       Yes                     Yes
Certified and preloaded with Nutanix software        Yes                     Yes
Includes Nutanix licenses                            Yes                     No
ThinkAgile Advantage Single Point of Support
for quick 24/7 problem reporting and resolution      Yes                     Yes
Includes deployment services                         Optional                Optional
Supports ThinkAgile HX2000 Series                    Yes                     No

For more information about the system specifications and supported configurations, refer to the product
guides for the Lenovo ThinkAgile HX Series appliances and certified nodes based on the Intel Xeon Scalable
processor. For appliances see:
o Lenovo ThinkAgile HX1000 Series: lenovopress.com/lp0726
o Lenovo ThinkAgile HX2000 Series: lenovopress.com/lp0727
o Lenovo ThinkAgile HX3000 Series: lenovopress.com/lp0728
o Lenovo ThinkAgile HX5500 Series: lenovopress.com/lp0729
o Lenovo ThinkAgile HX7500 Series: lenovopress.com/lp0730
o Lenovo ThinkAgile HX7800 Series: lenovopress.com/lp0950
For certified nodes see:
o Lenovo ThinkAgile HX1001 Series: lenovopress.com/lp0887
o Lenovo ThinkAgile HX3001 Series: lenovopress.com/lp0888
o Lenovo ThinkAgile HX5501 Series: lenovopress.com/lp0889
o Lenovo ThinkAgile HX7501 Series: lenovopress.com/lp0890

o Lenovo ThinkAgile HX7800 Series: lenovopress.com/lp0951

For appliances and certified nodes with Intel Xeon Scalable processor Gen 2 see
https://lenovopress.com/lp1521-thinkagile-hx1320-hx1321-hx2320-hx2321-hx3320-hx3321-1u
For appliances and certified nodes with Intel Xeon Scalable processor Gen 3 see
https://lenovopress.com/lp1481-thinkagile-hx-1u-appliances-certified-nodes-whitley

The diagrams below show the Intel Xeon Scalable processor-based ThinkAgile HX Series appliances and
certified nodes.

Figure 43. HX1320 or HX1321

Figure 44. HX2320-E

Figure 45. HX2720-E

Figure 46. HX3320 or HX3321

Figure 47. HX3520-G or HX3521-G

Figure 48. HX3720 or HX3721

Figure 49. HX1520-R, HX1521-R, HX5520, HX5521, HX5520-C, or HX5521-C

Figure 50. HX7520 or HX7521

Figure 51. HX7820 or HX7821

For the best recipes of supported firmware and software, see:
https://datacentersupport.lenovo.com/us/en/products/solutions-and-software/thinkagile-hx/hx3320/7x83/solutions/ht505413

6.2 Hypervisor supported by ThinkAgile HX

The ThinkAgile HX Series appliances and certified nodes generally support the following hypervisors:
• Nutanix Acropolis Hypervisor based on KVM (AHV)
• VMware ESXi 6.7
• VMware ESXi 7.0

The HX1520-R, HX5520-C, HX7820, and all SAP HANA models support only the following hypervisor:
• Nutanix Acropolis Hypervisor based on KVM (AHV)

The HX Series appliances come standard with the hypervisor preloaded in the factory. This software is
optional for the ThinkAgile HX Series certified nodes.

6.3 Deployment models

The OpenShift Container Platform can be implemented in development/test, staging, and production settings.
Each node role has its own dedicated servers for performance and availability. However, in a non-production
environment, a minimal environment can be provided to test applications before moving them to a staging or
production environment.
For a production OpenShift deployment on bare metal, all of the core services, such as the API servers, Kubernetes scheduler, and etcd, need to be highly available. The table below shows the recommended configuration for a production deployment with external enterprise storage.
Node type        Quantity   Node role
Bastion          1          Deployment of the environment, LOC-A Ansible playbooks, hardware management, etc.
Infrastructure   2          HAProxy, Keepalived, routing, logging, metrics.
Bootstrap        1          OpenShift bootstrap node. It can be a VM.
Control plane    3          OpenShift API, etcd, pod scheduler.
Compute/worker   2+         Runs the application containers.
Storage          1+         External third-party enterprise storage as the OpenShift platform's backend storage.

The table below shows the recommended configuration for a production deployment on bare metal with
OpenShift Data Foundation (ODF).
Node type                  Quantity   Node role
Bastion                    1          Deployment of the environment, LOC-A Ansible playbooks, hardware management, etc.
Infrastructure             2          HAProxy, Keepalived, routing, logging, metrics.
Bootstrap                  1          OpenShift bootstrap node. It can be a VM.
Control plane              3          OpenShift API, etcd, pod scheduler.
Compute/worker             3+         Runs the application containers and ODF pods.
(enhanced node with ODF)

For production OpenShift deployments on bare-metal environments, it is recommended to build the OpenShift platform either with external storage that supports the CSI interface or with OpenShift Data Foundation (ODF).
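To make this deployment model concrete, the following is a minimal install-config.yaml sketch for a user-provisioned bare-metal installation with three control plane nodes. The base domain, cluster name, pull secret, and SSH key are placeholders, and the network CIDRs and replica counts should be adapted to your environment.

    apiVersion: v1
    baseDomain: example.com              # placeholder base DNS domain
    compute:
    - hyperthreading: Enabled
      name: worker
      replicas: 0                        # bare-metal workers are provisioned after bootstrap
    controlPlane:
      hyperthreading: Enabled
      name: master
      replicas: 3                        # three control plane nodes for high availability
    metadata:
      name: ocp4                         # placeholder cluster name
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14              # pod network
        hostPrefix: 23
      networkType: OVNKubernetes
      serviceNetwork:
      - 172.30.0.0/16                    # service network
    platform:
      none: {}                           # user-provisioned bare-metal infrastructure
    pullSecret: '<pull-secret>'          # obtained from the Red Hat console
    sshKey: '<ssh-public-key>'           # used for node access and debugging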

Red Hat offers three kinds of OpenShift edge cluster deployment approaches for edge sites.

Cluster type               Node quantity       Logical node types
Single-node edge cluster   1                   Control plane, worker node
Remote worker              3 + (1+) + (2+)     3 control plane nodes, 1+ remote worker nodes, 2+ local worker nodes
3-node cluster             3                   Control plane, worker node

For a production OpenShift deployment on ThinkAgile HX platform, all OpenShift nodes are installed in
Nutanix VMs, and Nutanix storage can be used by applications running on OpenShift via Nutanix CSI.

Node type        Quantity   Node role
Infrastructure   2          HAProxy, Keepalived, routing, logging, metrics.
Control plane    3          OpenShift API, etcd, pod scheduler.
Compute/worker   2+         Runs the application containers.

6.4 Deployment Ready Solution

Lenovo works closely with software partners to provide high-performance, scalable, and cost-effective IT solutions, called Deployment Ready Solutions (DRS), to accelerate business advantage. With Red Hat, we provide a series of engineered, tested, and certified OpenShift Deployment Ready Solutions through which customers get a fast time-to-value benefit.
Hardware configurations and software licenses are all packaged in Deployment Ready Solutions.
We have 4 categories of OpenShift Deployment Ready Solutions: AI Edge, Datacenter Cluster, Minimum Cluster, and Single Node Cluster.
• AI Edge - This DRS accelerates AI/ML workflows and the delivery of AI-powered intelligent applications. It is a single-node solution that can be deployed anywhere from the edge of the network to on-site, virtualized, and private cloud deployments, and to public clouds. It is based on the SE350/SE360 V2 with GPUs and runs Red Hat OpenShift Platform Plus, which includes OpenShift Container Platform (container/VM management), Advanced Cluster Management (multi-cloud/cluster management), Advanced Cluster Security (cluster/container security), Quay (global image registry), and OpenShift Data Foundation (container storage). The DRS can be exported or customized on the AI Edge webpage.

• Datacenter Cluster - This DRS provides the complete container orchestration needed to deploy and manage containerized applications. It is a six-node cluster (3 control nodes and 3 worker nodes) that eases the burden of configuring, deploying, managing, and monitoring the largest-scale deployments. It is based on the SR630 V2/SR630 V3 and runs Red Hat OpenShift Platform Plus, which includes OpenShift Container Platform (container/VM management), Advanced Cluster Management (multi-cloud/cluster management), Advanced Cluster Security (cluster/container security), Quay (global image registry), and OpenShift Data Foundation (container storage). The DRS can be exported or customized on the Datacenter Cluster webpage.
• Minimum Cluster - This DRS is the smallest fully functional OpenShift cluster offering high availability. It is a 3-node HA cluster configuration (each node is both a control node and a worker node) that is HCI capable. This cluster is ideal for HCI, edge sites, or regional data centers that have more restricted space, power, and cooling requirements. It is based on the SR630 V2/SR630 V3 and runs Red Hat OpenShift Platform Plus, which includes OpenShift Container Platform (container/VM management), Advanced Cluster Management (multi-cloud/cluster management), Advanced Cluster Security (cluster/container security), Quay (global image registry), and OpenShift Data Foundation (container storage). The DRS can be exported or customized on the Minimum Cluster webpage.
• Single Node Cluster - This DRS deploys all OpenShift services and end-user applications to a single physical or virtual node. It is ideal for edge use cases that have limited space, low bandwidth, or intermittent connectivity between remote and core/central sites. It is based on the SE450/SE455 V3 and runs Red Hat OpenShift Platform Plus, which includes OpenShift Container Platform (container/VM management), Advanced Cluster Management (multi-cloud/cluster management), Advanced Cluster Security (cluster/container security), Quay (global image registry), and OpenShift Data Foundation (container storage). The DRS can be exported or customized on the Single Node Cluster webpage.
Some configuration samples leveraging DRS can be found in section 6.15.
For more information about Red Hat OpenShift Deployment Ready Solutions, see:
https://lenovopress.lenovo.com/lp1671-red-hat-openshift-deployment-ready-solutions-on-lenovo-servers
For more information about Red Hat OpenShift Platform Plus, see:
https://www.redhat.com/en/technologies/cloud-computing/openshift/platform-plus

6.5 Auto-Deployment by Lenovo Open Cloud Automation

Lenovo Open Cloud Automation (LOC-A) is a Lenovo software solution for simplifying Cloud and Data Center
Infrastructure Deployment & Management. LOC-A leverages open software stacks to support rapid
deployment, optimization, and management of cloud infrastructures for bare-metal servers, containers and
VMs.

Figure 52 shows high-level scope for Lenovo Open Cloud Automation.

Figure 52. High-level scope of Lenovo Open Cloud Automation

LOC-A provides one-click auto-deployment of the Red Hat OpenShift Container Platform from scratch by following pre-defined hardware configurations and configurable deployment workflows. It supports container cloud infrastructure deployment in edge and data center environments for enterprises and cloud providers. Lenovo Open Cloud Automation has been developed specifically to address deployment challenges across a diversity of locations and use cases.

Figure 53 shows stack components in LOC-A for OpenShift.

Figure 53. Components of Lenovo Open Cloud-Automation for OpenShift

LOC-A uses Ansible and the AWX tool to perform auto-deployment of multiple platforms. Platform inventory and resources are stored in a DCIM. Lenovo Confluent is a discovery engine that finds resources on site and also supports OS deployment. The OpenShift Assisted Installer is a managed service used to install OpenShift clusters.

Figure 54 shows a brief workflow when LOC-A deploys OpenShift.

Figure 54. OpenShift Deployed by Lenovo Open Cloud - Automation


LOC-A can be installed on a bastion node or in the cloud. It discovers hardware automatically, generates a discovery ISO image, installs the image, configures the network, and deploys the OpenShift platform on top of bare-metal hardware. More information about licenses, RHOCP versions, and configurations is available in the Lenovo Open Cloud Automation document: Lenovo Open Cloud Automation for Red Hat OpenShift datasheet

Note: The current LOC-A release is published as an OVA image. OpenShift deployment on the ThinkAgile HX platform is not yet supported by LOC-A.

6.6 Compute/worker node

The OpenShift Container Platform can be implemented on a small footprint of x86 servers/VMs clustered
together and scaled as the user workloads grow.
The right choice of servers/VMs and the corresponding configuration for CPUs, memory, and networking will
depend upon various factors, including but not limited to:
● Number of concurrent OpenShift users to be supported
● Type and mix of application workloads, which will drive the system resource requirements
● System growth projection
● Development or production use
● Fault-tolerance and availability requirements for applications
● Application performance expectations
● Implementation of hybrid-cloud model, which drives the requirements for on-premises infrastructure
More guidance on sizing and other considerations is available for OpenShift clusters in the OpenShift documentation:
access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html/scalability_and_performance

Lenovo does not prescribe specific server/VM configurations for CPU, memory, storage, and so on, because the right choice depends heavily on the sizing considerations previously listed. However, Lenovo has verified configurations on ThinkSystem/ThinkEdge/ThinkAgile HX servers using both Gen 2 and Gen 3 Intel Xeon Scalable processors. Users should perform a proper sizing assessment for their particular needs and choose the right configurations to meet those requirements.

6.7 Persistent storage for containerized workloads

There are two types of storage consumed by containerized applications – ephemeral (non-persistent) and
persistent. As the names suggest, non-persistent storage is created and destroyed along with the container
and is only used by applications during their lifetime as a container. Hence, non-persistent storage is used for
temporary data. When implementing the OpenShift Container Platform, local disk space on the application
nodes can be configured and used for the non-persistent storage volumes.
Persistent storage, on the other hand, is used for data that needs to be persisted across container
instantiations. An example is a 2 or 3-tier application that has separate containers for the web and business
logic tier and the database tier. The web and business logic tier can be scaled out using multiple containers
for high availability. The database that is used in the database tier requires persistent storage that is not
destroyed.
OpenShift uses a persistent volume framework that operates on two concepts: persistent volumes (PVs) and persistent volume claims (PVCs). Persistent volumes represent the physical storage that is created and managed by the OpenShift cluster administrator. When an application container requires persistent storage, it creates a persistent volume claim (PVC). The PVC is a unique handle to persistent storage that is initially not bound to any physical volume. When a container makes a PVC request, OpenShift allocates the physical disk and binds it to the PVC. When the container image is destroyed, the volume bound to the PVC is not destroyed unless you explicitly destroy that volume. In addition, if the container relocates to another physical server in the cluster during its lifecycle, the PVC binding is maintained. After the container image is destroyed, the PVC is released, but the persisted storage volume is not deleted; the persistent storage policy for the volume determines when the volume gets deleted.
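As a minimal illustration of this claim workflow, the sketch below shows a PVC that an application might create; the claim name, capacity, and storage class are hypothetical and would be replaced with values appropriate to your cluster.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: database-data              # hypothetical claim name used by a database pod
    spec:
      accessModes:
      - ReadWriteOnce                  # mounted read-write by a single node at a time
      resources:
        requests:
          storage: 100Gi               # requested capacity
      storageClassName: fast-block     # placeholder storage class defined by the administrator

A pod then consumes the claim by referencing it under spec.volumes as persistentVolumeClaim.claimName; OpenShift binds the claim to a matching persistent volume and preserves that binding for the life of the claim.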
For more detailed conceptual information on persistent volumes, see:
access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html/storage
A variety of persistent storage options are available for OpenShift, including CSI (Container Storage Interface), ODF (Red Hat OpenShift Data Foundation), NFS, Cinder, iSCSI, Azure File, AWS Elastic Block Store (EBS), and others. OpenShift deployed on the ThinkAgile HX platform can use Nutanix storage via the Nutanix CSI driver. For a complete list of these choices and the corresponding requirements, see the link below:
access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html/storage/understanding-persistent-storage#persistent-storage-overview_understanding-persistent-storage

6.7.1 Container Storage Interface (CSI)

The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on container orchestration systems like OpenShift. Using CSI, third-party storage providers can write and deploy plugins that expose new storage systems in the OpenShift platform. OpenShift Container Platform can leverage CSI to consume storage from storage backends as persistent storage. The OpenShift platform supports CSI drivers for backends such as Nutanix storage, Cinder, and others. More CSI drivers can be found at:
kubernetes-csi.github.io/docs/drivers.
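As a hedged sketch of how a CSI driver is consumed, the StorageClass below enables dynamic provisioning against a driver. The provisioner name shown is a placeholder (each driver registers its own name; for example, the Nutanix CSI driver registers csi.nutanix.com), and the parameters are driver-specific.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-block-storage
    provisioner: csi.example.com         # placeholder; use the name registered by your CSI driver
    parameters:
      csi.storage.k8s.io/fstype: ext4    # file system created on dynamically provisioned volumes
    reclaimPolicy: Delete                # remove the backing volume when the PVC is deleted
    allowVolumeExpansion: true           # permit online growth of bound volumes, if the driver supports it
    volumeBindingMode: Immediate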

For more information on the Container Storage Interface, see:
access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html/storage/using-container-storage-interface-csi#persistent-storage-using-csi
Figure 55 provides a high-level overview of the Container Storage Interface components running in pods in the OpenShift Container Platform cluster. A CSI driver needs its own external controller deployment and a DaemonSet containing the driver and the CSI registrar.

Figure 55. Container Storage Interface Architecture

6.7.2 ThinkSystem DM series

Lenovo ThinkSystem DM Series (Hybrid Flash Array) are unified, hybrid, scalable storage systems. Lenovo
ThinkSystem DM Series (All-Flash Array) are all-flash storage systems, available as either unified or SAN.
Lenovo ThinkSystem DM Series are designed to provide high performance, simplicity, capacity, security, and
high availability for small to large enterprises. Powered by the ONTAP software, Lenovo ThinkSystem DM
Series deliver enterprise-class storage management capabilities with a wide choice of host connectivity
options, flexible drive configurations, and enhanced data management features.

Figure 56. ThinkSystem DM series

For more information on ThinkSystem DM series, see: Unified-storage

Trident is an open source storage provisioner for Kubernetes-based orchestrators. It has been purpose-built to address the persistent storage needs of containerized applications and leverages industry-standard interfaces like the Container Storage Interface (CSI) to ensure seamless integration with the OpenShift platform. When deployed in OpenShift clusters, Trident operates as pods and delivers dynamic storage orchestration services for your OpenShift workloads. This empowers your containerized applications to effortlessly access and utilize persistent storage resources managed by ONTAP.
Trident simplifies dynamic storage orchestration and expedites storage provisioning for your clusters. In OpenShift clusters using the DM Series in conjunction with Trident, you can quickly establish a containerized storage environment that can be easily expanded upon.
Trident installation offers two distinct approaches:
• Generic installation: This method represents the simplest way to install Trident. It is ideal for OpenShift clusters with unrestricted network access, allowing seamless image retrieval from external sources.
• Customized installation: Alternatively, you have the option to tailor the installation to specific needs. In scenarios where network access is limited (such as air-gapped environments), you can configure Trident to fetch its images from a private repository.
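As an illustrative sketch (not a definitive configuration), a TridentBackendConfig resource similar to the one below can point Trident at an ONTAP-based DM Series array; the backend name, addresses, SVM name, and secret name are placeholders.

    apiVersion: trident.netapp.io/v1
    kind: TridentBackendConfig
    metadata:
      name: dm-series-nas              # hypothetical backend name
      namespace: trident               # namespace where Trident is installed
    spec:
      version: 1
      storageDriverName: ontap-nas     # NAS driver for ONTAP-based arrays
      managementLIF: 192.0.2.10        # placeholder management address of the array
      svm: svm_openshift               # placeholder storage VM on the array
      credentials:
        name: dm-series-nas-secret     # Kubernetes secret holding the array credentials

A StorageClass whose provisioner is csi.trident.netapp.io and whose parameters select this backend type then makes the array available for dynamic PVC provisioning.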

For more information on Trident, see: https://github.com/NetApp/trident

6.7.3 Red Hat OpenShift Data Foundation (ODF)

Red Hat OpenShift Data Foundation is used as the persistent storage backend in this Reference Architecture as it simplifies the overall OpenShift architecture and consolidates the compute and storage components on the same x86 servers. ODF was previously known as OpenShift Container Storage (OCS).
Red Hat OpenShift Data Foundation is an open source distributed, scalable, and high-performance software-defined storage system that provides file, block, and object storage. It is used widely for many types of applications. Red Hat OpenShift Data Foundation provides volume plug-ins into OpenShift to support persistent storage for containers.

Red Hat OpenShift Data Foundation can be implemented for the OpenShift platform in converged mode or external mode.
In converged mode, ODF runs within the OpenShift cluster:
● ODF can be deployed on standard compute/worker nodes, alongside the applications. OCP and ODF subscriptions are needed in this case.
● ODF can be deployed on infra nodes, where only an ODF subscription is needed.
In external mode, customers can easily set up and operate a stand-alone storage cluster that simultaneously supports block, file, and object access by one or more OpenShift clusters. This mode also enables centralized OpenShift storage administration.
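Once deployed, ODF is consumed through the storage classes its operator creates. The sketch below assumes the typical default class names of a converged deployment (ocs-storagecluster-ceph-rbd for block storage and ocs-storagecluster-cephfs for shared file storage); verify the names in your cluster before use.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data                                  # hypothetical claim name
    spec:
      accessModes:
      - ReadWriteOnce                                 # use ReadWriteMany with the CephFS class for shared access
      resources:
        requests:
          storage: 50Gi
      storageClassName: ocs-storagecluster-ceph-rbd   # ODF block storage class (default name)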
For more information on Red Hat OpenShift Data Foundation, see:
access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13

Figure 57 gives an overview of persistent storage in an OpenShift Data Foundation cluster.

Figure 57. Persistent storage in an OpenShift Data Foundation cluster

Figure 58 gives an overview of object storage service in an OpenShift Data Foundation cluster.

Figure 58. Object storage service in an OpenShift Data Foundation cluster

Note: The subscription method for ODF has changed. Entitlements are based on the number of cores on the worker nodes, just like OpenShift, so running storage stacked or on dedicated nodes doesn't change the number of subscriptions. Having dedicated nodes simplifies scaling and management.

Note: Red Hat no longer provides individual ODF subscriptions as of January 1, 2023. ODF is available through the OpenShift Platform Plus subscription; please see section 6.17 for detailed information about subscriptions. Individual ODF licenses are available from IBM.

6.8 Networking for OpenShift deployment on bare metal

For OpenShift Container Platform deployments on bare metal with ThinkSystem servers, 25Gbps networking is recommended as the choice for all cluster-wide communication: the core OpenShift services, the virtual network implementation for container workloads, storage services access, and all east-west traffic across the container workloads. In addition, the north-south traffic between the OpenShift environment and the uplink into the customer (or campus) network can be implemented over the 25Gbps network. 10GbE switches can be used in test/dev environments as an alternative to 25GbE switches.
There are three logical networks defined in this RA:
● External: The external network is used for the OpenShift Control Plane API, the OpenShift web
interface, and exposed applications (services and routes).
● Internal: This is the primary, non-routable network used for cluster management and inter-node
communication. The same network acts as the layer for server provisioning using PXE and HTTP.
Domain Name Servers (DNS) and Dynamic Host Configuration Protocol (DHCP) services also reside
on this network to provide the functionality necessary for the deployment process and the cluster to
work. Communication with the Internet is provided by NAT configured on the bastion node.
● Out-of-band/IPMI: This is a secured and isolated network used for switch and server hardware
management, such as access to the IMM module and SoL (Serial-over-LAN).

Figure 59 shows the Red Hat OpenShift cluster with Lenovo ThinkSystem servers and the recommended
network architecture.

Figure 59. OpenShift Network Connectivity


All OpenShift nodes are connected via the internal network, where they can communicate with each other. Furthermore, the OpenShift SDN creates its own network for OpenShift pod-to-pod communication. With the multi-tenant plugin, OpenShift SDN pods can communicate with each other only if they share the same project namespace.

6.9 Networking for OpenShift deployment on ThinkAgile HX

For OpenShift Container Platform deployments on the Lenovo ThinkAgile HX platform, three logical networks are recommended for all cluster-wide communication: management services, storage services, the core OpenShift services, and application container workloads, including all east-west and north-south traffic across the container workloads.

The three logical networks are:

• Internal network: used for OpenShift internal workloads.
• Management network: the primary, non-routable network used for cluster management and inter-node communication. The same network acts as the layer for server provisioning using PXE and HTTP. Domain Name Servers (DNS) and Dynamic Host Configuration Protocol (DHCP) services also reside on this network to provide the functionality necessary for the deployment process and the cluster to work. Communication with the Internet can be provided by NAT on this network.
• Storage network: used for Nutanix storage traffic.

Figure 60. OpenShift Network Connectivity

6.10 Hardware management network

For out-of-band management of the servers and initial cluster deployment over the network from the bastion node, use the 1Gbps management fabric via a 1 GbE switch. The Lenovo ThinkSystem/ThinkEdge/ThinkAgile HX servers have a dedicated 1GbE network port for the IMM interface. The IMM enables remote-management capabilities for the servers, access to the server's remote console for troubleshooting, and running IPMI commands via the embedded baseboard management controller (BMC) module.

6.11 Network redundancy

The Lenovo ThinkSystem/ThinkEdge/ThinkAgile HX platform uses the 10 GbE/25 GbE network as the
primary fabric for inter-node communication. Two switches are used to provide redundant data layer
communication and deliver maximum availability.

Figure 61 shows the redundant network architecture.

Figure 61. Redundant network architecture


Virtual Link Aggregation Group (VLAG) is a feature that allows a pair of switches to work as a single virtual switch. Each of the cluster nodes has a link to each VLAG peer switch for redundancy. This provides improved high availability (HA) for the nodes and uses the Link Aggregation Control Protocol (LACP) for aggregated bandwidth capacity. Connection to the uplink core network is facilitated by the VLAG peers, which present a logical switch to the uplink network, enabling connectivity with all links active and without a hard requirement for Spanning Tree Protocol (STP). The link between the two VLAG peers is an inter-switch link (ISL) that provides excellent support for east-west cluster traffic between the nodes. The VLAG presents a flexible basis for interconnecting to the uplink/core network, ensures active usage of all available links, and provides high availability in case of a switch failure or a required maintenance outage.

6.12 Networking switch configurations

This section contains the configurations for the network switches. The typical configuration can use 10 or 25 GbE networking; the Intel Select configuration must use 25 GbE networking. Higher-end switches (100GbE) are also supported. The management network can use a 1 GbE switch.
The following table shows the switch configurations:

Role                 Description
Management network   1 GbE switch
Data network         10/25/100 GbE switch

6.13 Edge computing

Red Hat offers three kinds of OpenShift edge cluster deployment approaches for different edge locations and different edge categories. Edge sites allow users to extend their services from the core data center to remote locations; an edge site is a place to gather, process, and act on data. Edge computing supports reliable, low-latency, high-performance links by placing computing environments close to customers and devices. The Red Hat OpenShift edge solution allows edge sites to be deployed at regional edge and far edge locations at low cost, even in constrained environments.

Figure 62. Red Hat OpenShift Edge Cluster solution on Lenovo Servers
For detailed server configuration information for edge sites, please see section 6.15.2.

6.14 Systems management

In addition to in-band management via IPMI, the Lenovo XClarity Administrator software provides centralized
resource management that reduces complexity, speeds up response, and enhances the availability of
Lenovo® server systems and solutions.
The Lenovo XClarity Administrator provides agent-free hardware management for Lenovo's ThinkSystem® rack servers, System x® rack servers, and Flex System™ compute nodes and components, including the Chassis Management Module (CMM) and Flex System I/O modules. Figure 63 shows the Lenovo XClarity Administrator interface, in which Flex System components and rack servers are managed and are seen on the dashboard. Lenovo XClarity Administrator is a virtual appliance that is quickly imported into a virtualized environment.

Figure 63. Lenovo XClarity Administrator Dashboard
For more information, see: Lenovo XClarity Administrator Product Guide

6.15 Deployment examples with ThinkSystem server

This section describes six example configurations: three typical and enhanced configurations and three edge cluster configurations:
● Compact OpenShift cluster configuration
● Converged OpenShift cluster configuration
● Typical OpenShift cluster configuration for data center
● Three-node OpenShift cluster configuration for edge
● Three controller nodes and multiple remote worker nodes for edge
● Single-node OpenShift cluster for edge

We can configure them from scratch, or we can leverage the OpenShift Deployment Ready Solutions to create the configurations.

6.15.1 Typical and enhanced OpenShift configurations

The Compact OpenShift configuration uses three nodes and three switches:

● Three compact nodes (control plane, compute/worker, OpenShift Data Foundation)

● 2 25GbE switches for traffic load network
● 1 1GbE switch for management network
We can leverage Minimum Cluster in OpenShift Deployment Ready solution to create a Compact OpenShift
cluster with ThinkSystem SR630 V2 servers:
● Minimum Cluster in OpenShift Deployment Ready solution. The configurations can be customized.
● 2 25GbE switches for traffic load network
● 1 1GbE switch for management network

The converged OpenShift configuration with OpenShift Data Foundation (ODF) in converged mode uses 7+ nodes, 1 bootstrap node, 2 load balancers, and 3 switches:
● 1 Bastion node
● 2 Infrastructure nodes (Optional. User can use other commercial load balancers, such as
F5/NGINX/Avi networks/Fortinet/etc)
● 1 Bootstrap node (Bootstrap node is a temporary node that can be removed after deployment)
● 3 Control plane nodes
● 3+ Converged Compute/worker nodes (Compute/worker, OpenShift Data Foundation)
● 2 25GbE switches for traffic load network
● 1 1GbE switch for management network
We can also leverage DataCenter Cluster in OpenShift Deployment Ready solution to create a converged
OpenShift cluster with ThinkSystem SR630 V2 servers:
● 1 Bastion node
● 2 Infrastructure nodes (Optional. User can use other commercial load balancers, such as
F5/NGINX/Avi networks/Fortinet/etc)
● 1 Bootstrap node (Bootstrap node is a temporary node that can be removed after deployment)
● DataCenter Cluster in OpenShift Deployment Ready solution. The configurations can be customized; more converged compute/worker nodes can be added via the Customize entry on the DataCenter Cluster webpage.
● 2 25GbE switches for traffic load network
● 1 1GbE switch for management network

The typical OpenShift configuration uses 10+ nodes, 1 bootstrap node, 2 load balancers, 3 switches, and external storage, as follows:
● 1 Bastion node
● 2 Infrastructure nodes (Optional. User can use other commercial load balancers, such as
F5/NGINX/Avi networks/Fortinet/etc)
● 1 Bootstrap node (Bootstrap node is a temporary node that can be removed after deployment)
● 3 Control plane nodes
● 2+ Compute/worker nodes
● 3+ ODF nodes in external mode, or external third-party enterprise storage as the OpenShift platform's backend storage
● 2 25GbE switches for traffic load network
● 1 1GbE switch for management network

This configuration represents a production-level OpenShift implementation that meets high-availability, redundancy, and scale requirements for enterprises. Additional application nodes can be added to increase the available compute and storage capacity.
Table 4 provides the hardware configuration summary using Lenovo ThinkSystem SR630/SR650 V2 servers.

Table 4. Node hardware configuration for OpenShift deployment

Bastion node, Control plane node, Bootstrap node:
  2x Intel Xeon Gold 6326 16C 185W 2.9GHz Processor
  192GB memory (12x 16 GB)
  2x ThinkSystem M.2 5300 480GB SATA 6Gbps Non-Hot Swap SSD
  1x ThinkSystem M.2 SATA/NVMe 2-Bay Enablement Kit
  1x ThinkSystem 4350-8i SAS/SATA 12Gb HBA
  4x ThinkSystem 2.5" S4520 3.84TB Read Intensive SATA 6Gb HS SSD
  1x ThinkSystem Mellanox ConnectX-6 Lx 10/25GbE SFP28 2-Port OCP Ethernet Adapter

Compact node, Converged compute/worker node:
  2x Intel Xeon Platinum 8362 32C 265W 2.8GHz Processor
  512GB memory (16x 32 GB)
  2x ThinkSystem M.2 5300 480GB SATA 6Gbps Non-Hot Swap SSD
  1x ThinkSystem M.2 SATA/NVMe 2-Bay Enablement Kit
  1x ThinkSystem 4350-8i SAS/SATA 12Gb HBA
  4x ThinkSystem 2.5" S4520 3.84TB Read Intensive SATA 6Gb HS SSD
  1x ThinkSystem Mellanox ConnectX-6 Lx 10/25GbE SFP28 2-Port OCP Ethernet Adapter

Storage (ODF) node:
  2x Intel Xeon Gold 6326 16C 185W 2.9GHz Processor
  192GB memory (12x 16 GB)
  2x ThinkSystem M.2 5300 480GB SATA 6Gbps Non-Hot Swap SSD
  1x ThinkSystem M.2 SATA/NVMe 2-Bay Enablement Kit
  1x ThinkSystem 4350-16i SAS/SATA 12Gb HBA
  10x ThinkSystem 2.5" S4520 3.84TB Read Intensive SATA 6Gb HS SSD
  1x ThinkSystem Mellanox ConnectX-6 Lx 10/25GbE SFP28 2-Port OCP Ethernet Adapter
The typical server configuration for the Bastion, Control plane, and Compute/worker nodes is the same. This allows the role for a server to be easily changed. The Compact node/converged node is an HCI type of node with OpenShift compute and storage workloads running on top of it.
Similar configurations with the appropriate capabilities can also be selected from the Lenovo ThinkSystem SR630 V3, SR650 V3, SR645 V3, SR665 V3, SR635 V3, and SR655 V3 servers.

6.15.2 OpenShift configurations for edge computing

The three-node OpenShift cluster configuration for edge use cases uses 3 nodes and 2 switches:
● 3 nodes (control plane, worker)
● 1 25GbE switch for traffic load network
● 1 1GbE switch for management network
We can leverage Minimum Cluster in the OpenShift Deployment Ready solution to create a 3-node edge site with ThinkSystem SR630 V2 servers:
● Minimum Cluster in OpenShift Deployment Ready solution. The configurations can be customized.
● 1 25GbE switch for traffic load network
● 1 1GbE switch for management network

The configuration with three controller nodes and multiple remote worker nodes for edge use cases uses 3 nodes and 2 switches for the controller nodes, plus 1+ remote worker nodes:
● 3 nodes (control plane)
● 1 25GbE switch for traffic load network
● 1 1GbE switch for management network
● 1+ nodes (remote worker)

The single-node OpenShift cluster for edge uses 1 node:

● 1 node (control plane, worker)

We can also leverage Single Node or AI Edge in the OpenShift Deployment Ready solution to create a single-node edge site for general-purpose or AI services with a ThinkEdge SE450 server or ThinkSystem SE350 server:
● Single Node or AI Edge in OpenShift Deployment Ready solution. The configurations can be customized.

Table 5 provides the hardware configuration summary using Lenovo ThinkSystem SE350 servers. The Lenovo ThinkEdge SE450 and ThinkSystem SR630 are also two options for creating edge computing sites.

Table 5. Node hardware configuration for OpenShift deployment at edge sites

Single-node edge server in single-node cluster (normal condition):
  1x Intel Xeon D-2183IT 16C 100W 2.20 GHz Processor
  128GB memory (4x 32 GB)
  2x ThinkSystem M.2 480GB Industrial A600i SATA SED SSD
  2x ThinkSystem SE350 M.2 SATA/NVMe 4-bay Data Drive Enablement Kit
  8x ThinkSystem M.2 N600Si 1.92TB NVMe PCIe 3.0 x4 Non-Hot Swap SSD (Industrial)
  1x ThinkSystem SE350 10GbE SFP+ 2-Port, 1GbE SFP 2-Port Switch, Wireless Capable
  1x ThinkSystem M.2 WiFi Module

Single-node edge server in single-node cluster (extreme shock & vibration condition):
  1x Intel Xeon D-2183IT 16C 100W 2.20 GHz Processor
  128GB memory (4x 32 GB)
  2x ThinkSystem M.2 480GB Industrial A600i SATA SED SSD
  1x On Board SATA Software RAID Mode
  2x ThinkSystem SE350 M.2 SATA/NVMe 4-bay Data Drive Enablement Kit (Extreme Shock & Vibe)
  8x ThinkSystem M.2 N600Si 1.92TB NVMe PCIe 3.0 x4 Non-Hot Swap SSD (Industrial)
  1x ThinkSystem SE350 10GbE SFP+ 2-Port, 1GbE SFP 2-Port Switch, Wireless Capable (Extreme Shock & Vibe)
  1x ThinkSystem M.2 WiFi Module

Remote worker in far edge site:
  1x Intel Xeon D-2163IT 12C 75W 2.10 GHz Processor
  128GB memory (4x 32 GB)
  2x ThinkSystem M.2 480GB Industrial A600i SATA SED SSD
  1x On Board SATA Software RAID Mode
  1x ThinkSystem SE350 M.2 SATA/NVMe 4-bay Data Drive Enablement Kit
  4x ThinkSystem M.2 N600Si 1.92TB NVMe PCIe 3.0 x4 Non-Hot Swap SSD (Industrial)
  1x ThinkSystem SE350 10GbE SFP+ 2-Port, 1GbE SFP 2-Port Switch, Wireless Capable
  1x ThinkSystem M.2 WiFi Module

Node in 3-node cluster:
  1x Intel Xeon D-2183IT 16C 100W 2.20 GHz Processor
  128GB memory (4x 32 GB)
  2x ThinkSystem M.2 480GB Industrial A600i SATA SED SSD
  1x On Board SATA Software RAID Mode
  2x ThinkSystem SE350 M.2 SATA/NVMe 4-bay Data Drive Enablement Kit
  8x ThinkSystem M.2 N600Si 1.92TB NVMe PCIe 3.0 x4 Non-Hot Swap SSD (Industrial)
  1x ThinkSystem SE350 10GbE SFP+ 2-Port, 10/100/1GbE RJ45 2-Port Intel i350

6.16 Deployment examples with ThinkAgile HX server

This section describes three Nutanix ThinkAgile HX example configurations:


● Small size Nutanix cluster configuration for OpenShift deployment
● Medium size Nutanix cluster configuration for OpenShift deployment
● Large size Nutanix cluster configuration for OpenShift deployment

Table 6 provides the hardware configuration summary using Lenovo ThinkAgile HX servers.
Table 6. Hardware configuration for OpenShift deployment on ThinkAgile HX servers

Small size cluster (sample: 3x HX3320):
  2x Intel Xeon Platinum 8276 28C 165W 2.2GHz Processor
  24x ThinkSystem 32GB TruDDR4 2933MHz (2Rx4 1.2V) RDIMM
  1x ThinkSystem 430-16i SAS/SATA 12Gb HBA
  8x ThinkSystem 2.5" PM1645a 800GB Mainstream SAS 12Gb Hot Swap SSD
  2x ThinkSystem M.2 5300 480GB SATA 6Gbps Non-Hot Swap SSD
  1x Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-port PCIe Ethernet Adapter

Medium size cluster (sample: 5x HX5530):
  2x Intel Xeon Platinum 8352S 32C 205W 2.2GHz Processor
  32x ThinkSystem 32GB TruDDR4 3200 MHz (2Rx4 1.2V) RDIMM
  1x ThinkSystem 440-16i SAS/SATA PCIe Gen4 12Gb HBA
  12x ThinkSystem 3.5" PM1645a 1.6TB Mainstream SAS 12Gb Hot Swap SSD
  2x ThinkSystem M.2 5300 480GB SATA 6Gbps Non-Hot Swap SSD
  1x ThinkSystem Mellanox ConnectX-6 Lx 10/25GbE SFP28 2-Port OCP Ethernet Adapter

Large size cluster (sample: 7x HX7820):
  4x Intel Xeon Platinum 8276 28C 165W 2.2GHz Processor
  48x ThinkSystem 64GB TruDDR4 2933MHz (2Rx4 1.2V) RDIMM
  2x ThinkSystem 430-16i SAS/SATA 12Gb HBA
  24x ThinkSystem 2.5" PM1645a 1.6TB Mainstream SAS 12Gb Hot Swap SSD
  2x ThinkSystem M.2 5300 480GB SATA 6Gbps Non-Hot Swap SSD
  2x Mellanox ConnectX-4 Lx 10/25GbE SFP28 2-port PCIe Ethernet Adapter

Single or multiple OpenShift clusters can be deployed on each type of Nutanix cluster on the ThinkAgile HX platform. Applications and services on OpenShift can use Nutanix storage via the Nutanix CSI driver.
Other types of ThinkAgile HX servers can be selected according to varying cluster requirements.

For more information about OpenShift deployment on Nutanix clusters, see:


https://docs.openshift.com/container-platform/installing/installing_nutanix/

6.17 Software and subscription

For this example, the following software is needed:


● OpenShift Container Platform, which adds developer and operation-centric tools to enable rapid
application development, easy deployment, scaling, and long-term lifecycle maintenance for small
and large teams and applications
● OpenShift Data Foundation, which uses Red Hat OpenShift Container Platform as a base and can provide block storage, object storage, and file storage for different workload types.
● Lenovo XClarity Administrator for management of the operating systems on bare-metal servers

Additionally, the following OpenShift Container Platform components play a key role in the solution:
● Kubernetes to orchestrate and manage containerized applications
● etcd, which is a key-value store for the OpenShift Container Platform cluster
● OpenShift SDN to provide software-defined networking (SDN)-specific functions in the OpenShift
Container Platform environment
● Ingress Controller for routing, load-balancing, and virtual IP management
Note: There is still an expectation placed on the customer or implementer that an outside-the-cluster L4 load balancer (F5, NGINX, Avi Networks, Fortinet, etc.) is provided for balancing both the API and compute traffic. The wildcard and API DNS records point to these load balancer VIPs.

Table 7 lists the software versions used for this example deployment.
Table 7. Software versions

Component                         Version
Red Hat Enterprise Linux          8.8
Red Hat CoreOS                    4.13
OpenShift Container Platform      4.13
OpenShift Data Foundation (ODF)   4.13

We can subscribe self-managed OpenShift software features from 3 kinds of offerings:


● Red Hat OpenShift Kubernetes Engine, which includes the Red Hat OpenShift Kubernetes runtime (engine), Red Hat Enterprise Linux and Red Hat Enterprise Linux CoreOS, Red Hat OpenShift Virtualization, the Red Hat OpenShift administrator console, and Red Hat Application Streams.
● Red Hat OpenShift Container Platform, which includes Red Hat OpenShift Kubernetes Engine, Red Hat JBoss Web Server, Red Hat's single sign-on (SSO) technology, log management, Red Hat CodeReady Workspaces, Red Hat build of Quarkus, the web console, Red Hat OpenShift Pipelines, Red Hat OpenShift GitOps, Red Hat OpenShift Serverless, Red Hat OpenShift Service Mesh, Red Hat Insights for OpenShift, and IBM Cloud Satellite.
● Red Hat OpenShift Platform Plus, which includes Red Hat OpenShift Container Platform, Red Hat Advanced Cluster Management for Kubernetes, Red Hat Advanced Cluster Security for Kubernetes, Red Hat Quay, and Red Hat OpenShift Data Foundation Essentials.

Figure 64 shows the services and components running on the self-managed OpenShift platform under the three subscription offerings:

Figure 64. Services and components running on the self-managed OpenShift platform under the three subscription offerings

For more information about Red Hat OpenShift subscriptions, see:


https://www.redhat.com/en/resources/self-managed-openshift-sizing-subscription-guide

Note: As of January 1, 2023, Red Hat no longer offers ODF as a standalone subscription. ODF is available through the OpenShift Platform Plus subscription; a standalone ODF offering is available from IBM.

6.18 Deployment validation

The deployment should be validated before it is used. The OpenShift Container Platform web console provides two perspectives: the Administrator perspective and the Developer perspective. As a verification step, log on to the OpenShift Container Platform web console and display the Administrator perspective.
Figure 65 shows an example.

Figure 65. Administrator perspective of OpenShift Container Platform

Figure 66 shows the Developer perspective.

Figure 66. Developer perspective of OpenShift Container Platform
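Beyond the web console, basic cluster health can be checked from the command line with standard oc commands, for example:

# Confirm the installed version and that the cluster reports Available
oc get clusterversion
# All cluster operators should be Available and not Degraded
oc get clusteroperators
# All nodes should report Ready
oc get nodes -o wide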

Other important validation steps include:
● Backing the OCP image registry with ODF. This validates the ODF deployment and completes a post-install task.
● Backing the collected Prometheus metrics with ODF. By default they are stored in an emptyDir volume, which is ephemeral storage.
Figure 67 shows a sample of backing the OCP image registry with ODF; a sketch of both steps follows the figure.

Figure 67. Backing OCP image registry with ODF
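A sketch of both steps, assuming the default ODF storage class names (ocs-storagecluster-cephfs for shared file storage and ocs-storagecluster-ceph-rbd for block storage) and illustrative capacities:

# 1) Back the image registry with an ODF-provisioned RWX PVC
oc create -n openshift-image-registry -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  -p '{"spec":{"managementState":"Managed","storage":{"pvc":{"claim":"registry-storage"}}}}'

# 2) Persist Prometheus metrics on ODF block storage instead of emptyDir
oc apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: 40Gi
EOF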

6.19 Multi-cluster management

Red Hat OpenShift extends the capabilities of native Kubernetes from the data center to edge sites, and OpenShift clusters are often spread across different regions. Customers want the ability to manage their infrastructure across multi-cloud environments. Multi-cluster topologies and management introduce new challenges as well as new possibilities: they can provide unified access to the infrastructure and orchestrate applications across various locations. Multi-cluster management can leverage these topologies to migrate an application from cluster to cluster, transparently and quickly. Shifting workloads in this way is practical when dealing with cluster disasters, infrastructure upgrades, scaling, or placement optimization.
Red Hat Advanced Cluster Management (RHACM) for Kubernetes is designed to manage Kubernetes clusters, including Red Hat OpenShift Container Platform, seamlessly in the cloud and on-premises.
Figure 68 shows RHACM components in an infrastructure where multiple OpenShift clusters are managed by
RHACM and deployed on Lenovo ThinkSystem/ThinkEdge/ThinkAgile HX platform:
• Hub cluster. It defines the central controller that runs in a Red Hat Advanced Cluster Management for
Kubernetes cluster.

• Managed cluster. It defines additional clusters that are managed by the hub cluster.
• Cluster lifecycle. It defines the process of creating, importing, managing, and destroying Kubernetes
clusters across various infrastructure cloud providers, private clouds, and on-premises data centers.
• Application lifecycle. It defines the processes that are used to manage application resources on your
managed clusters.
• Governance. It enables administrators to define policies that either enforce security compliance or report changes that violate the configured compliance requirements for your environment (a sample policy is sketched after this list).

• Observability. It collects and reports the status and health of managed clusters running OpenShift Container Platform version 4.x or later to the hub cluster.
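As a governance illustration, the sketch below uses hypothetical names and enforces that a given namespace exists on the managed clusters it is bound to (binding a policy to clusters is done with separate Placement and PlacementBinding resources):

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-namespace-must-exist    # hypothetical policy name
  namespace: rhacm-policies            # hypothetical namespace on the hub
spec:
  remediationAction: enforce           # or "inform" to report violations only
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: namespace-must-exist
        spec:
          remediationAction: enforce
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: monitored-apps # hypothetical namespace to enforce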

Figure 68. Multiple OpenShift clusters managed by RHACM on ThinkSystem/ThinkEdge/ThinkAgile HX


To use RHACM, a "hub cluster" is required. From the hub cluster, you can access the console and product components, as well as the Red Hat Advanced Cluster Management APIs. You can also use the console to search resources across clusters and view your topology.
Hub clusters govern "managed clusters". The connection between the two is completed by using the
klusterlet, which is the agent that is installed on the managed cluster. The managed cluster receives and
applies requests from the hub cluster and enables it to service cluster lifecycle, application lifecycle,
governance, and observability on the managed cluster.
RHACM simplifies not only cluster lifecycles but also application lifecycles. It enhances governance and observability of your managed clusters, regardless of where they are running. More than a dozen platforms are supported today, and the list continues to grow.
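For example, a managed cluster can be registered with the hub by creating a ManagedCluster resource such as this minimal sketch (the name and labels are placeholders); the klusterlet agent on the managed cluster then completes the registration:

apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: edge-cluster-1        # hypothetical managed cluster name
  labels:
    cloud: auto-detect
    vendor: auto-detect
spec:
  # Allow the hub to accept the klusterlet registration from this cluster
  hubAcceptsClient: true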
For more information about RHACM and its components, see:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.8/html/about/welcome-to-red-hat-advanced-cluster-management-for-kubernetes#multicluster-architecture

6.20 Virtualization on OpenShift cluster

New application development and deployment is shifting to containers, but organizations and enterprises already have large existing investments in applications running in VMs. Containers and VMs therefore coexist to support a variety of services across industries. For this hybrid environment, container clusters can be deployed in a virtualization environment, such as OpenShift on the Nutanix platform (ThinkAgile HX). Alternatively, OpenShift Virtualization can run and manage VMs on an OpenShift cluster in a bare-metal environment.
OpenShift Virtualization is based on KubeVirt, which runs VMs within containers. It can manage both Linux and Windows VMs, import and clone existing VMware and Red Hat VMs, and live-migrate VMs between nodes.
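As an illustration, a minimal VirtualMachine definition might look like the sketch below; the Fedora container disk is an illustrative public image, not a recommended production choice:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-example
spec:
  running: true               # start the VM as soon as it is created
  template:
    spec:
      domain:
        cpu:
          cores: 1
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:      # ephemeral root disk pulled as a container image
            image: quay.io/containerdisks/fedora:latest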

Figure 69 shows cloud-native and virtualization applications running on an OpenShift cluster.

Figure 69. Cloud-Native/Virtualization applications managed by RHOCP on ThinkSystem/ThinkEdge


With OpenShift Virtualization, container and VM technologies work together on a single OpenShift platform, supporting cutting-edge and legacy applications alike, both cloud-native and virtualized, and providing services for customers. It delivers cloud-native virtualization in an OpenShift cluster with the simplicity and speed of a container platform.
For more information about OpenShift Virtualization and its components, see:
https://docs.openshift.com/container-platform/virt/about-virt.html

6.21 OpenShift security

Red Hat OpenShift is a leading container application platform that provides robust security features to ensure
the safety and integrity of the applications running on it. The platform offers multiple layers of security,
including access control, container security, network security, encryption, and compliance management.
For authentication, OpenShift integrates with various identity and access management solutions, including LDAP, Active Directory, and OAuth, to enable secure user authentication and access control. For authorization, the platform provides role-based access control (RBAC) so that users can manage access control policies efficiently. Users can also use security context constraints (SCCs) to control permissions for the pods in their cluster.
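A minimal RBAC sketch, assuming a hypothetical namespace and user, that grants read-only access to pods:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo-app          # hypothetical namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo-app
subjects:
  - kind: User
    name: developer1           # hypothetical user from the identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io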
OpenShift includes multiple layers of container security to protect applications from potential security threats. The platform provides features such as image signing and verification, pod security policies, a secure build process, and container image vulnerability scanning to ensure the security and integrity of containers.
OpenShift provides several encryption features to secure containerized applications, including secure communication, encrypted etcd data, and local disk encryption. Network communication is encrypted between nodes, platform components, applications, stateful applications and external storage, services and clients, and ingress and egress, all using TLS. Etcd data is not encrypted by default; however, users can enable etcd encryption to add a layer of data security that protects sensitive data if etcd were ever unexpectedly exposed. Red Hat OpenShift supports encryption of the boot disks on both control plane and compute nodes at installation time. Users can select their preferred encryption approach, Trusted Platform Module (TPM) v2 or Tang encryption modes, for different deployment cases such as edge sites and data centers.
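For example, etcd encryption can be enabled by setting the encryption type on the cluster's APIServer resource (a minimal sketch using the aescbc type):

# Enable AES-CBC encryption of etcd data
oc patch apiserver cluster --type merge \
  -p '{"spec":{"encryption":{"type":"aescbc"}}}'
# The rollout takes several minutes; the Encrypted condition reports progress
oc get openshiftapiserver \
  -o=jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Encrypted")].reason}{"\n"}{end}'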

The Red Hat OpenShift Compliance Operator is a tool that helps organizations ensure that their OpenShift clusters meet compliance requirements. The Compliance Operator supports various industry and regulatory compliance standards, such as the CIS benchmarks, the NIST risk management framework, and the PCI DSS security standard. It uses OpenSCAP, a NIST-certified tool, to scan clusters, identify non-compliant resources, and provide recommendations on how to remediate them. The Compliance Operator CRDs include ProfileBundle and Profile objects, in which you can define and set the rules for your compliance scan requirements, and users can use the ScanSetting and ScanSettingBinding APIs to run compliance scans. Compliance checks are defined by compliance Profiles, and the Compliance Operator can be customized via tailored profiles to fit the organization's needs and requirements. A remediation object is created when a compliance rule can be remediated automatically; it is recommended to check the results and verify that the remediation achieves the target compliance goal.
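For example, a CIS scan of the platform can be requested with a ScanSettingBinding that binds the ocp4-cis profile to the default ScanSetting (a minimal sketch):

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-scan               # hypothetical binding name
  namespace: openshift-compliance
profiles:
  - name: ocp4-cis
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1

Scan results then appear as ComplianceCheckResult objects, which can be listed with oc get compliancecheckresult -n openshift-compliance.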

For more information about OpenShift security and components, see:


https://docs.openshift.com/container-platform/security/index.html and
https://docs.openshift.com/container-platform/authentication/index.html

Note: Security is a rapidly growing technology area. The topics of DevSecOps, Hybrid cloud security and Red
Hat Advanced Cluster Security for Kubernetes (RHACS) are not covered in this document. Also, hardware-
based security control technologies, like Intel Software Guard Extensions (Intel SGX), are not introduced in
this document.

6.22 OpenShift data science

Red Hat OpenShift Data Science can be installed on premises in a self-managed OpenShift Container Platform. OpenShift Data Science includes Jupyter and a collection of default notebook images optimized with the tools and libraries required for model development, and the TensorFlow and PyTorch frameworks. You can deploy and host your models, integrate models into external applications, and export models to host them in any hybrid cloud environment. You can also accelerate your data science experiments through the use of graphics processing units (GPUs).
OpenShift Data Science offers a comprehensive set of features tailored to the needs of data scientists:
Containers for Seamless Collaboration: OpenShift Data Science provides a containerized development
environment, including Jupyter notebooks, which allows data scientists to work on their projects without the
constraints of individual hardware or software installations. With the use of containers, collaboration becomes
effortless as you can easily share your work with team members. This ensures that specialized resources like
GPUs and substantial memory are readily available, without the need for expensive personal hardware.
Integration with Third-Party Tools: OpenShift Data Science promotes flexibility by offering compatibility with a
wide range of open source and third-party machine learning tools. Data scientists can seamlessly integrate
these tools into their workflow, supporting the entire machine learning lifecycle, from data preprocessing and
feature engineering to model deployment and management.
Collaborative Notebooks with Git: The platform enables collaborative coding with the Git interface within
Jupyter notebooks. This allows data scientists to work together effectively and keep a record of changes to
their code, enhancing version control and facilitating teamwork.
Secure Notebook Images: OpenShift Data Science offers a selection of secure, pre-configured notebook
images equipped with the necessary libraries and tools for model development. These images are thoroughly

vetted to ensure reliability and security. Data scientists can start new projects with confidence, avoiding the
risk of using unverified and potentially insecure images from external sources.
Custom Notebook Images: In addition to the provided notebook images, data scientists have the flexibility to
configure custom notebook images tailored to the specific requirements of their projects. This customization
ensures that their development environment aligns perfectly with the demands of their work.
Data Science Pipelines: OpenShift Data Science supports data science pipelines, allowing for standardized
and automated machine learning workflows. This streamlines the process of developing and deploying data
science models, enhancing efficiency and maturity in the data science workloads.
Model Serving: Data scientists can deploy their trained machine learning models, making them available as
service endpoints for integration into applications and testing. OpenShift Data Science offers extensive control
over how model serving is executed, ensuring that intelligent applications can utilize the deployed models
effectively in a production environment.
OpenShift Data Science empowers data scientists with the tools and capabilities necessary to collaborate,
experiment, and deploy machine learning solutions efficiently, all within a secure and customizable
development environment.

Figure 70. Jupyter notebooks managed by Red Hat OpenShift Data Science

IT Operations administrators can effectively manage users, data, and resources within OpenShift Data
Science by utilizing the following key features:
User Management with Identity Provider Integration: OpenShift Data Science seamlessly integrates with the
same authentication systems as your OpenShift cluster. By default, all users listed in your identity provider
gain access to OpenShift Data Science without the need for separate credentials. Additionally, administrators
have the option to restrict access by creating OpenShift groups, defining specific user subsets. Moreover,
administrator access can be granted to a designated group, ensuring control and security.

Resource Management through OpenShift Expertise: IT administrators can leverage their existing OpenShift
knowledge to configure and manage resources for OpenShift Data Science users. This familiar environment
simplifies resource allocation and management.
Control over Red Hat Usage Data Collection: Administrators have the choice to enable or disable data
collection related to OpenShift Data Science usage within their cluster. By default, data collection is enabled
during the installation of OpenShift Data Science on the OpenShift cluster, offering flexibility in data privacy.
Cost Optimization through Autoscaling: The cluster autoscaler feature allows administrators to dynamically
adjust the cluster size to meet the current resource requirements, optimizing usage costs. This ensures that
resources are efficiently allocated to meet user needs while minimizing unnecessary expenditure.
Resource Usage Management by Automatically Stopping Idle Notebooks: OpenShift Data Science offers the
capability to reduce resource consumption by automatically stopping notebook servers that have remained
idle for a specified period. This proactive approach helps maintain resource efficiency in the deployment.
Support for Model-Serving Runtimes: OpenShift Data Science provides native support for model-serving
runtimes. This integration facilitates interaction with specific model servers and their supported frameworks.
The platform includes the OpenVINO Model Server runtime by default, but administrators can add custom
runtimes to address specific model framework requirements.
Installation in Disconnected Environments: OpenShift Data Science's self-managed deployment option
supports installation in disconnected environments where clusters are isolated from the internet, typically
behind firewalls. In such cases, the OpenShift Data Science Operator can be deployed to a disconnected
environment using a private registry that mirrors relevant images. This ensures that the platform can operate
effectively in restricted network environments.
These features empower IT Operations administrators to efficiently manage users, resources, and data while
maintaining control, security, and cost-effectiveness within OpenShift Data Science.

Figure 71. Administrator perspective of Red Hat OpenShift Data Science

For more information about OpenShift Data Science, see:
https://access.redhat.com/documentation/en-us/red_hat_openshift_data_science_self-managed/

Resources
● Architecture of the Red Hat OpenShift Container Platform
access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html/architecture/architecture

● OpenShift Container Platform


access.redhat.com/documentation/en-us/openshift_container_platform/4.13/

● Kubernetes
kubernetes.io/ and kubernetes.io/docs/tutorials/kubernetes-basics/

● Storage in OpenShift Container Platform


access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html/storage

● OpenShift Data Foundation


access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13

● Lenovo ThinkSystem Server Comparison


lenovopress.lenovo.com/lp1263-lenovo-thinksystem-server-comparison

● Lenovo Open Cloud Automation DataSheet


lenovo.com/datasheet/ds0129-lenovo-open-cloud-automation-for-red-hat-openshift

● Lenovo ThinkSystem DE Series


lenovo.com/us/en/data-center/storage/storage-area-network/thinksystem-de-series/c/thinksystem-de-series

● Lenovo ThinkSystem DM Series


https://www.lenovo.com/us/en/c/servers-storage/storage/unified-storage/

● Trident
https://github.com/NetApp/trident

● Red Hat OpenShift on Nutanix


https://portal.nutanix.com/page/documents/solutions/details?targetId=TN-2030-Red-Hat-OpenShift-on-Nutanix:architecture

● Lenovo ThinkAgile HX Series


https://lenovopress.lenovo.com/servers/thinkagile/hx-series#sort=relevance

● Red Hat Advanced Cluster Management for Kubernetes


https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes

● Red Hat OpenShift Virtualization


https://www.redhat.com/en/technologies/cloud-computing/openshift/virtualization
https://docs.openshift.com/container-platform/virt/about-virt.html

● Red Hat OpenShift Deployment Ready Solution


https://lenovopress.lenovo.com/lp1671-red-hat-openshift-deployment-ready-solutions-on-lenovo-servers

● Red Hat OpenShift Security

https://docs.openshift.com/container-platform/security

● Red Hat OpenShift subscription and sizing guide


https://www.redhat.com/en/resources/self-managed-openshift-sizing-subscription-guide

● Red Hat OpenShift Data Science


https://access.redhat.com/documentation/en-us/red_hat_openshift_data_science_self-managed/

Document History
Version 1.0 1 October 2018 ● Initial version
Version 1.1 1 April 2019 ● Updated to include the Intel Select Base configuration
Version 1.2 22 April 2019 ● Updated configurations to use Intel Xeon Scalable Processor gen 2 CPUs
Version 2.0 22 April 2020 ● Update to OpenShift 4.2
Version 2.1 27 June 2020 ● Update to OpenShift 4.4 and OpenShift Container Storage 4.4
Version 2.2 17 July 2020 ● Updated with Red Hat feedback and markup. Return to Lenovo for review
Version 3.0 16 December 2021 ● Updated with ThinkSystem v2 generation HW and Red Hat OCP 4.9 and ODF 4.9
Version 3.1 23 June 2022 ● Add more Lenovo Open Cloud Automation (LOC-A) sections
Version 3.2 30 Sept 2022 ● Add OpenShift with ThinkAgile HX solution
● Upgrade OpenShift version to OCP 4.11
● Add Multi-cluster Management
Version 3.3 15 Dec 2022 ● Add OpenShift Deployment Ready Solution
● Add OpenShift Virtualization
Version 3.4 13 Jan 2023 ● Updated with Red Hat feedback and markup
Version 3.5 29 March 2023 ● Add OpenShift Security
● Upgrade OpenShift version to OCP 4.12
Version 3.6 15 June 2023 ● Updated with ThinkSystem v3 generation HW
Version 3.7 18 Sept 2023 ● Update with DM storage
Version 3.8 29 Nov 2023 ● Update with OpenShift Data Science
● Upgrade OpenShift version to OCP 4.13

Trademarks and special notices
© Copyright Lenovo 2023.
References in this document to Lenovo products or services do not imply that Lenovo intends to make them available in
every country.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
• Lenovo®
• Flex System
• System x®
• ThinkAgile®
• ThinkEdge®
• ThinkSystem®
• TruDDR4
• XClarity®
The following terms are trademarks of other companies:
Intel® and Xeon® are trademarks of Intel Corporation or its subsidiaries.
Linux® is the trademark of Linus Torvalds in the U.S. and other countries.
Microsoft®, Active Directory®, Azure®, and Windows® are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Information is provided "AS IS" without warranty of any kind.
All customer examples described are presented as illustrations of how those customers have used Lenovo products and
the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.
Information concerning non-Lenovo products was obtained from a supplier of these products, published announcement
material, or other publicly available sources and does not constitute an endorsement of such products by Lenovo. Sources
for non-Lenovo list prices and performance numbers are taken from publicly available information, including vendor
announcements and vendor worldwide homepages. Lenovo has not tested these products and cannot confirm the
accuracy of performance, capability, or any other claims related to non-Lenovo products. Questions on the capability of
non-Lenovo products should be addressed to the supplier of those products.
All statements regarding Lenovo future direction and intent are subject to change or withdrawal without notice, and
represent goals and objectives only. Contact your local Lenovo office or Lenovo authorized reseller for the full text of the
specific Statement of Direction.
Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a
commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such
commitments are only made in Lenovo product announcements. The information is presented here to communicate
Lenovo’s current investment and development activities as a good faith effort to help with our customers' future planning.
Performance is based on measurements and projections using standard Lenovo benchmarks in a controlled environment.
The actual throughput or performance that any user will experience will vary depending upon considerations such as the
amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload
processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance
improvements equivalent to the ratios stated here.
Photographs shown are of engineering prototypes. Changes may be incorporated in production models.
Any references in this information to non-Lenovo websites are provided for convenience only and do not in any manner
serve as an endorsement of those websites. The materials at those websites are not part of the materials for this Lenovo
product and use of those websites is at your own risk.
