ThinkSystem V4 update

RM82796
August 2025
© Copyright Lenovo 2025
Lenovo
8001 Development Drive
Morrisville, North Carolina, 27560
Lenovo reserves the right to change product information and specifications at any time without notice. This publication might include
technical inaccuracies or typographical errors. References herein to Lenovo products and services do not imply that Lenovo intends to
make them available in all countries. Lenovo provides this publication as is, without warranty of any kind, either expressed or implied,
including the implied warranties of merchantability or fitness for a particular purpose. Some jurisdictions do not allow disclaimer of
expressed or implied warranties. Therefore, this disclaimer may not apply to you.
Data on competitive products is obtained from publicly available information and is subject to change without notice. Contact the manufacturer for the most recent information.
Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo Corporation or its subsidiaries in the United States, other countries, or both. Intel and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both. Other names and brands are the property of their respective owners.
The following terms are trademarks, registered trademarks, or service marks of Lenovo:
Access Connections, Active Protection System, Automated Solutions, Easy Eject Utility, Easy-Open Cover, IdeaCentre, IdeaPad,
ImageUltra, Lenovo Care, Lenovo (logo), Lenovo, MaxBright, NetVista, New World. New Thinking, OneKey, PC As A Service, Rapid
Restore, Remote Deployment Manager, Rescue and Recovery, ScrollPoint, Secure Data Disposal, Skylight, Software Delivery Center,
System Information Gatherer, System Information Reporter, System Migration Assistant, System x, ThinkAccessories, ThinkAgile,
ThinkCentre, ThinkDisk, ThinkDrive, ThinkLight, ThinkPad, ThinkPlus, ThinkScribe, ThinkServer, ThinkStation, ThinkStore, ThinkSystem,
ThinkVantage, ThinkVision, ThinkWorld, TopSeller, TrackPoint, TransNote, UltraBase, UltraBay, UltraConnect, UltraNav, VeriFace.
For more information, go to: [Link]
The terms listed for the following partners are the property of their respective owners:
AMD, Intel, IBM, Microsoft, NVIDIA
The content in this document is subject to the existing non-disclosure agreement held between Lenovo and its Authorized Service
Providers.
Preface
This document may not be copied or sold, either in part or in whole, without permission from
the Lenovo ISG Services Education team.

Current release date: August 2025


Current release level: 1.0

The information in this publication is correct as of the date of the latest revision and is
subject to change at any time without notice.
To provide feedback or receive more information about this course, send an email to:
ServicesEdu@[Link]
Prerequisites
Although there are no specific prerequisites for this course, you should have some
knowledge of Lenovo ThinkSystem products.

Objectives
After completing the course, you will be able to:
• Describe the ThinkSystem V4 architecture
• Describe the changes in XCC3
• Describe the changes in LXCE and LXPM5
• Describe the ThinkSystem V4 configurations and hardware replacement tips
Documentation


Documentation / pre-registration courses


Courses on the Lenovo LMS site
• Intel Xeon processor architecture for ThinkSystem V4 servers
• ThinkSystem tools for the ThinkSystem V4 platform

Documentation
• Lenovo Press
• [Link]
• Management tools documentation
Intel architecture for V4

Intel Xeon 6 processors

ThinkSystem V4 servers are equipped with next-generation Intel Xeon 6 processors, which are based on the Intel Birch Stream platform.
The Intel Xeon 6 processor family includes the following two
classes:
• P-cores (performance-cores)
• E-cores (efficient-cores)
Lenovo Press: ThinkSystem V4 Servers with Intel Xeon 6 Processors

Intel architecture for V4
Intel Xeon 6 P-cores and E-cores overview

P-Cores (Intel code name: Granite Rapids)
P-cores are optimized for high performance per core. They excel at the widest range of workloads, including better AI performance than any other general-purpose processor.

E-Cores (Intel code name: Sierra Forest)
E-cores are optimized for high core density and exceptional performance per watt. They deliver distinct advantages for cloud-scale workloads that demand high task-parallel throughput.

Intel architecture for V4
Intel processor naming convention
Intel Xeon 6 products were formerly represented with code names. The former code names, their abbreviations, and the corresponding Intel Xeon 6 product names are:

Granite Rapids
• GNR-AP: Intel Xeon 6900 P-cores, or Intel Xeon 6900 P-series
• GNR-SP: Intel Xeon 6700/6500 P-cores, or Intel Xeon 6700/6500 P-series

Sierra Forest
• SRF-AP: Intel Xeon 6900 E-cores, or Intel Xeon 6900 E-series
• SRF-SP: Intel Xeon 6700 E-cores, or Intel Xeon 6700 E-series

• AP = advanced performance (6900 series processor)
• SP = scalable performance (6500 or 6700 series processor)

Intel architecture for V4
Intel Xeon processor road map
(Road map graphic: Intel Xeon generations from 2022 through 2026, with Xeon 6 in the 2024-25 timeframe.)
Intel architecture for V4
Intel Xeon 6 SKU naming rule

Intel architecture for V4
P-core and E-core specification comparison

Note: ThinkSystem V4 servers do not support the Xeon 6 R1S SKU.


Intel architecture for V4
Xeon gen-to-gen specification comparison

Note: ThinkSystem V4 servers do not support the Xeon 6 R1S SKU.


Intel architecture for V4
Processor comparison
For a specification comparison of the latest Intel Xeon 6 processors, refer to the following Intel web
page:
[Link]
Intel architecture for V4
Xeon 6 multi-socket design options

Intel Xeon 6900-series design options (with P-cores or E-cores processors):
• One-socket design
• One-socket high I/O optimized design
• Two-socket design
The 6900-series supports an up to two-socket configuration.

Intel Xeon 6700 / 6500-series design options (with P-cores or E-cores processors):
• Two-socket design
• Four-socket design
• Eight-socket design
The 6700 / 6500-series supports an up to eight-socket configuration.


Intel architecture for V4
MRDIMM overview
Xeon 6 supports MRDIMMs (Multiplexed Rank Dual Inline Memory Modules). MRDIMMs are an enhanced DDR5 DIMM technology that delivers 30% greater memory bandwidth than RDIMMs, with an expected data transfer rate of up to 8800 MT/s. The MRDIMM is the fastest DDR5 DIMM currently available, and it is supported by Intel Xeon 6 P-cores processors.

Intel architecture for V4
Xeon 6 memory configuration
• Xeon 6900 P-cores processors support 1DPC (DIMM per channel) population only
• 6500 / 6700-series supports both 1DPC and 2DPC population
– 2DPC population is only supported with RDIMMs, not with MRDIMMs
• MRDIMMs are supported only on P-cores processors with 1DPC population
DIMM rating, operating speed (MT/s), and memory configuration by processor type:

Xeon 6900 P-cores
• DDR5-6400 rated RDIMMs only: 6400, 6000, 5600, 5200, 4800 – 1DPC / 1SPC
• MR-8800 only: 8800, 8000, 7200 – 1DPC / 1SPC

Xeon 6500 / 6700 P-cores
• DDR5-6400 rated RDIMMs only: 6400, 6000, 5600, 5200, 4800 – 1DPC / 1SPC; 5200, 4800 – 2DPC / 2SPC
• MR-8800 only: 5200, 4800 – 1DPC / 1SPC

Xeon 6700 E-cores
• DDR5-6400 rated RDIMMs only: 6400, 6000, 5600, 5200, 4800 – 1DPC / 2SPC; 5200, 4800 – 2DPC / 2SPC

Xeon 6900 E-cores
• DDR5-6400 rated RDIMMs only: 6400, 6000, 5600, 5200, 4800 – 1DPC / 1SPC

• DPC = DIMM per channel
• SPC = slot per channel
XCC3 on ThinkSystem V4 servers
New features and enhancements

XCC3 on ThinkSystem V4 servers
Overview
XCC3 features the following hardware and design changes:
• XCC3 uses the system I/O board (DC-SCM) as the BMC hardware module
• A move to a new OpenBMC-based architecture
  – Eliminates the dependency on Vertiv
  – Open source-based solution
  – In-house design with full control
  – Flexible architecture for future extensions
  – As far as possible, user interfaces are kept compatible with the previous generation
• A phased approach to deliver a full-function stack
  – Almost every function has been re-designed and re-coded
  – Schedule/resource constraints

XCC3 on ThinkSystem V4 servers
XCC3 and XCC2 login page comparison

XCC3 on V4 XCC2 on V3

XCC3 on ThinkSystem V4 servers
XCC3 and XCC2 home page comparison

XCC3 on V4 XCC2 on V3
XCC3 on ThinkSystem V4 servers
BMC configuration settings through OneCLI
• BMC and UEFI settings both leverage the Redfish standard
• The BMC settings prefix and names have changed along with the new architecture: the prefix changes from IMM (V3) to BMC (V4)

OneCLI on V4 OneCLI on V3
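A minimal sketch of the prefix change, assuming out-of-band OneCLI access to the XCC (the credentials and IP address are placeholders; list the actual setting names on your own system before changing anything):

    OneCli config show IMM --bmc USERID:PASSW0RD@<XCC-IP>   (V3: settings use the IMM prefix)
    OneCli config show BMC --bmc USERID:PASSW0RD@<XCC-IP>   (V4: the same settings use the BMC prefix)

The command syntax itself is unchanged between generations; only the setting-name prefix differs.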

XCC3 on ThinkSystem V4 servers
UEFI configuration settings through OneCLI
• The names of UEFI settings align with Redfish BIOS attribute names, with UEFI added as a prefix

OneCLI on V4 OneCLI on V3
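A similar sketch for UEFI settings (the attribute name is left as a placeholder; query the list first to get the Redfish-aligned names for your model):

    OneCli config show UEFI --bmc USERID:PASSW0RD@<XCC-IP>   (V4: list UEFI settings, all carrying the UEFI prefix)
    OneCli config set UEFI.<RedfishBiosAttributeName> <value> --bmc USERID:PASSW0RD@<XCC-IP>

Because the names align with Redfish BIOS attributes, the same values should also be readable through the standard Redfish BIOS resource, for example:

    curl -k -u USERID:PASSW0RD https://<XCC-IP>/redfish/v1/Systems/1/Bios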

XCC3 on ThinkSystem V4 servers
System I/O board (DC-SCM card)
In V3 servers, the BMC chip was located on the RoT module; in V4 servers, it has been moved to the DC-SCM card.
• The new BMC hardware module follows the OCP spec
• The BMC chip, BMC/UEFI flash, and PFR chip are all on one board
• New power sequence management through the SCM FPGA and HPM FPGA
• Two FPGAs (SCM FPGA and HPM FPGA), which the BMC needs to authenticate and update
• As with the ThinkSystem V3 platform, the AST2600 is used as the BMC chip
• The new UEFI SPI flash is 64 MB (V3 is 32 MB)
  – This results in a longer UEFI firmware update time

XCC3 on ThinkSystem V4 servers
System I/O board and processor board
The SR650 V4 and SR650a V4 use the same system board assembly. The system board assembly
has two components:
• Processor board
– A board containing CPU sockets, PCIe slots, memory slots, and other server component
connectors
• System I/O board, also known as DC-SCM (Data Center Secure Control Module)
– A board containing the system BMC (XCC3) management port, USB ports, and a VGA
connector
– A MicroSD card slot to extend XCC3 storage space for the backup of firmware and for remote
console virtual media

Processor board

System I/O board


XCC3 on ThinkSystem V4 servers
Replacing an I/O board and updating system firmware on a V4 system
After replacing the System I/O board (DC-SCM), you need to update the UEFI firmware to the specific
version supported by this server model. The following methods can be used to update the firmware:
• Using a USB boot kit
• Using Lenovo XClarity Essentials OneCLI

Reboot the system to start LXPM. Select System Summary from the menu on the left, and then click
Update VPD. On the Update VPD page, click Set under End of Manufacture to set the End of Manufacture
(EOM) flag.

Update the XCC, FPGA HPM, FPGA SCM, LXPM, and LXUM firmware to the required version, and then
restore the XCC and UEFI settings.

Procedure for replacing System I/O board (DC-SCM) and updating system firmware on V4 system
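A hedged sketch of the OneCLI method (the package directory and credentials are placeholders; take the exact required package versions from the XCC firmware release notes):

    OneCli update compare --dir ./v4_fw_packages --bmc USERID:PASSW0RD@<XCC-IP>   (compare installed versions against the staged packages)
    OneCli update flash --dir ./v4_fw_packages --bmc USERID:PASSW0RD@<XCC-IP>     (flash the staged packages)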
XCC3 on ThinkSystem V4 servers

Board replacement tips on V4 systems


Processor board
• Firmware (XCC => FPGA HPM => FPGA SCM => UEFI): Ensure that all firmware updates are compatible, and then update all the firmware components at the same time. If you are updating both XCC and UEFI firmware, update the XCC firmware first.
• VPD: Update the VPD (machine type and serial number) after replacing a processor board.

System I/O board (DC-SCM)
• EOM flag: Set the End of Manufacture (EOM) flag.
• Firmware (XCC => FPGA HPM => FPGA SCM => UEFI): System I/O board replacement requires flashing the XCC / FPGA HPM / FPGA SCM / UEFI firmware.
• UEFI/XCC configuration: System I/O board replacement requires restoring all XCC and UEFI configurations. This can be done with the customer's backup, OneCLI scripts, or the XCC and F1 settings.

Note: Check the XCC firmware release notes for the associated firmware update version.
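A minimal sketch of the backup/restore path mentioned above (the file name is a placeholder; save the configuration from the old board before replacement whenever possible):

    OneCli config save --file xcc_uefi_backup.txt --bmc USERID:PASSW0RD@<XCC-IP>      (before replacement: save XCC and UEFI settings)
    OneCli config restore --file xcc_uefi_backup.txt --bmc USERID:PASSW0RD@<XCC-IP>   (after replacement: restore them to the new board)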
XCC3 on ThinkSystem V4 servers
Firmware updates
• FPGA firmware and backplane PSoC firmware are separated from the XCC3 package
  – In previous generations, they were packaged together
• The SCM FPGA and HPM FPGA have individual firmware inventory entries, but they can be updated together
  – If FPGA firmware is updated when system power is on, a BMC reset and system reboot will be needed
• The drive backplane PSoC firmware has a separate firmware inventory entry; there will be a bundle which includes all the drive backplane firmware
  – If drive backplane firmware is updated when system power is on, a system reboot will be needed

XCC3 on ThinkSystem V4 servers
Replacing a system I/O board with MicroSD card
If you are replacing the system I/O board at the rear of the system, remove the MicroSD card from
the old system I/O board and install it on the new system I/O board.

MicroSD card location

UEFI on ThinkSystem V4 servers
New features and enhancements

UEFI on ThinkSystem V4 servers

UEFI on V3 – Load Default Settings

On V3, there is only one option: select Load Default Settings.

UEFI on V3

UEFI on ThinkSystem V4 servers

UEFI on V4 – Default Options

Select Default Options to open a setup page with two default options:
• Custom Default
• Factory Default

“Default Options” page entry on V4

“Default Options” page

UEFI on ThinkSystem V4 servers
UEFI Setup – Boot manager enhancement
To optimize boot option management in the ThinkSystem V4 UEFI, there is a
new setup page to support changes to and the deletion of boot options.

Note: Due to Intel CPU limitations, the ThinkSystem V4 UEFI only supports UEFI Boot Mode.
UEFI on ThinkSystem V4 servers
RAS feature enhancement - FQXSFMA0056M
To support SRAR and UCNA, the [arg4] element has been added to the FQXSFMA0056M UEFI event message:

An uncorrected recoverable memory error has been detected on DIMM [arg1] at address [arg2].[arg3] [arg4]

[arg4] values:
• -T0 to indicate the error is UCNA
• -T1 to indicate the error is SRAR
See the following slide for examples.

To simplify the system error log (SEL):


• Only one uncorrectable error (UE) is reported in the SEL per CPU (UEFI event: FQXSFPU0027N /
FQXSFPU0062F)
• Only one UE is reported in the SEL per DIMM (UEFI event: FQXSFMA0056M (with T0 or T1))
Note:
• SRAR - Software Recoverable Error Action Required
• UCNA - UnCorrected No Action (an uncorrectable error logged in MCA Bank)
UEFI on ThinkSystem V4 servers
UEFI event FQXSFMA0056M – SEL screen capture
FQXSFMA0056M represents both the UCNA and SRAR memory error types.
• UCNA is indicated by an arg string of -T0
• SRAR is indicated by an arg string of -T1

LXCE on ThinkSystem V4 servers
New features and enhancements

LXCE on ThinkSystem V4 servers
Overview
LXCE has the following new features and enhancements for the ThinkSystem V4 platform:
• Version updates to support ThinkSystem V4
– OneCLI: 5.3.0
– UpdateXpress: 5.3.0
– BoMC: 14.3.0

• New OS support
– Windows: Windows Server 2025
– Linux: RHEL 8.10/9.4, SLES 15.6, Ubuntu 24.04

Data Center Support:
• Lenovo XClarity Essentials (LXCE) (OneCLI, BoMC, UpdateXpress)
Always check the Lenovo XClarity Essentials OneCLI (OneCLI) website for the latest OneCLI
information and User Guide.
• [Link]
LXCE on ThinkSystem V4 servers
Binary files
The following binary file names will be updated in accordance with the version change:
OneCLI Binary
• lnvgy_utl_lxce_oneclixxx-5.3.0_windows_indiv.zip
• lnvgy_utl_lxceb_oneclixxx-5.3.0_windows_indiv.exe
• lnvgy_utl_lxce_oneclixxx-5.3.0_linux_indiv.tgz
• lnvgy_utl_lxceb_oneclixxx-5.3.0_linux_indiv.bin
• lnvgy_utl_lxcer_oneclixxx-5.3.0_linux_indiv.rpm
UpdateXpress Binary
• lnvgy_utl_lxce_uxxxx-5.3.0_windows_indiv.zip
• lnvgy_utl_lxce_uxxxx-5.3.0_linux_indiv.tgz
BoMC Binary
• lnvgy_utl_lxce_bomcxxx-14.3.0_windows_indiv.exe
• lnvgy_utl_lxce_bomcxxx-14.3.0_linux_indiv.bin

Note that file names now end with indiv.


LXCE on ThinkSystem V4 servers
LXCE major software features summary
LXCE has the following feature updates for the ThinkSystem V4 platform:
• OneCLI - LXUM (Lenovo XClarity Update Manager) replaces the legacy Bare Metal Update
(BMU) on BHS
• OneCLI - In-band disk drive firmware update without a reboot (see the sketch after this list)
• OneCLI - Configuration with OneCLI
• OneCLI - Back up/restore the configuration with encryption
• OneCLI - Miscellaneous new OneCLI functions
• UpdateXpress - ThinkEdge server security feature
• UpdateXpress - Back up and restore system configuration settings
• BoMC - Support for the ST45 V3 FW update
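For the in-band disk drive update path mentioned above, a minimal sketch (run OneCLI on the target server's own OS, so no --bmc argument is needed; the package directory is a placeholder):

    OneCli update scan                     (inventory the currently installed firmware in-band)
    OneCli update flash --dir ./drive_fw   (flash the staged drive firmware packages; on V4, drive firmware applied this way does not require a reboot)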

LXPM5 on ThinkSystem V4 servers
New features and enhancements

LXPM5 on ThinkSystem V4 servers
LXPM5 supports the same features and functions on the ThinkSystem V4 platform as LXPM4 did
on the V3 platform. From a service perspective, the only difference is a greater number of
supported servers and hardware components. The LXPM5 screen captures on the following slides
show the tool’s initial launch and key features.

• Lenovo XClarity Provisioning Manager


• How to switch the Lenovo XClarity Provisioning Manager from the text interface to the graphical
user interface
• How to switch the Lenovo XClarity Provisioning Manager from the graphical user interface to the
text interface

LXPM5 on ThinkSystem V4 servers
LXPM5 – Initial launch

LXPM5 on ThinkSystem V4 servers
LXPM5 - RAID Setup

Supported RAID adapters and chips are listed below:

Intel: VROC 9.0.0.x (new)
Broadcom: RAID 940-16i, RAID 940-8i, RAID 940-8e, RAID 545-8i (new)
Microchip: RAID 9350-16i, RAID 9350-8i, 5350-8i

LXPM5 on ThinkSystem V4 servers
LXPM5 - OS Installation

Supported OSs are listed below:

Windows Server: WS2025, WS2022
Windows Client: Win10, Win11
VMware: ESXi 9.0
RHEL: RHEL 9.4, RHEL 9.5
SLES: SLES 15.6

LXPM5 on ThinkSystem V4 servers
LXPM5 - Firmware Update

New options in ThinkSystem V4 servers

New options in ThinkSystem V4 servers
Useful links
Lenovo Press product guides and service training courses:
• ThinkSystem SR630 V4 – ES72641
• ThinkSystem SR650 V4 – ES72642
• ThinkSystem SR650a V4 – ES72642
• ThinkSystem SR850 V4 – ES72721
• ThinkSystem SR860 V4 – ES72720
• ThinkSystem SC750 V4 Neptune – ES72694
• ThinkAgile MX630 V4 Hyperconverged System – ES42013C
• ThinkAgile MX650 V4 Hyperconverged System – ES42013C
• ThinkAgile VX630 V4 Hyperconverged System – ES41800E
• ThinkAgile VX650 V4 Hyperconverged System – ES41800E
• ThinkAgile HX630a V4 Hyperconverged System – ES41641H
• ThinkAgile HX650 Hyperconverged System – ES41641H
• ThinkAgile HX650a V4 Hyperconverged System – ES41641H
New options in ThinkSystem V4 servers
Overview
ThinkSystem V4 servers support the following new options:
• Hot-swap M.2 drive
• CXL memory module
• Lenovo Processor Neptune Core Module
• Lenovo Compute Complex Neptune Core Module
New options in ThinkSystem V4 servers
Hot-swap M.2 drives

New options in ThinkSystem V4 servers
Hot-swap M.2 drive assembly components

The figure on the right shows the components of the hot-swap M.2 SATA/NVMe Drive Assembly Kit, which can be used for hot-swap M.2 drives in front and rear drive bays – not for internal M.2 drives. This kit can be used in all ThinkSystem V4 1U/2U servers.

New options in ThinkSystem V4 servers
Hot-swap M.2 drive replacement tips
Hot-swap M.2 drives require a heat sink and thermal pad. When replacing a hot-swap M.2 drive,
apply a new thermal pad to the replacement M.2 adapter.

The heat sink on the hot-swap M.2 adapter
Applying a new thermal pad to the replacement M.2 adapter

ThinkSystem SR650 V4 and SR650a V4: installing a hot-swap M.2 drive assembly
New options in ThinkSystem V4 servers
CXL memory

An SR650 V4 with 12 E3.S 2T CXL memory bays (non-hot-swap) and eight E3.S 1T drive bays

New options in ThinkSystem V4 servers
Replacing a CXL memory module
Unlike 2.5-inch or E3.S drives, the CXL memory modules in the front drive bays are non-hot-swap
parts. Power off the system before replacing a CXL memory module. A 3 mm flat-head screwdriver
is required to unlock or lock the CXL memory module handle.

E3.S bay covers (with non-hot-swap blue tags)
Unlocking a CXL memory module handle

New options in ThinkSystem V4 servers
CXL memory replacement tips
The following UEFI event messages are associated with CXL memory module (CMM) errors:
• FQXSFMA0094K: CMM device at Bay [arg1] is failed to be active
• FQXSFMA0099M: An uncorrected recoverable memory error has been detected on CMM Bay [arg1] at [arg2]

ThinkSystem SR630 V4: removing an E3.S non-hot-swap CXL memory module

New options in ThinkSystem V4 servers
Lenovo Processor Neptune Core Module introduction
ThinkSystem V4 servers support advanced direct water cooling (DWC) with the Lenovo Processor Neptune Core Module. This module implements a liquid cooling solution that allows heat from the processors to be removed from the rack and data center using an open loop and coolant distribution units.

Hardware replacement tips
Lenovo Processor Neptune Core Module replacement tip (1)
A shipping bracket is required to replace a Neptune core module. New Neptune core modules are
shipped with an attached shipping bracket. Do not lift the Neptune core module without the
shipping bracket.
If you need to replace a processor or system board assembly on an SR850 V4 or SR860 V4 after
a Neptune core module has been installed, you will need to order a shipping bracket separately.

Using a shipping bracket to install a Neptune core module in the system

Hardware replacement tips
Lenovo Processor Neptune Core Module replacement tip (2)
Before installing a Neptune core module in four-CPU systems (the SR850 V4 and SR860 V4) or
attaching a shipping bracket to an installed Neptune core module, remove the DIMMs from slots 9
to 24 and 41 to 56. These DIMMs would block the shipping bracket and prevent you from
completing the Neptune core module replacement procedures.

Incorrect: DIMMs left installed in the slots that block the shipping bracket

New options in ThinkSystem V4 servers
Lenovo Compute Complex Neptune Core Module
With the Lenovo Compute Complex Neptune Core Module (also known as DIMM cooling), the SR650 V4
supports advanced direct water cooling of the compute complex: the processors, DIMMs, and voltage
regulators.
With this solution, all heat generated by the compute complex is removed from the server using water, which
places less pressure on the server fans and data center air conditioning units. The DIMM cooling module
occupies one DIMM slot for each DIMM channel, so only the 1DPC DIMM configuration is supported.

(Figure: Compute Complex Neptune Core Module, showing the outlet hose, inlet hose, and cold plates.)

Note: The SR650a V4 does not support the Lenovo Compute Complex Neptune Core Module.
New options in ThinkSystem V4 servers
ThinkSystem V4 with Compute Complex Neptune Core Module rules
1. Dual processor requirement: Each server must be configured with two processors; single processor
configurations are not supported.
2. Limited memory capacity: The system supports only half of the maximum possible memory capacity due
to architectural constraints.
3. Shared system board architecture: The ThinkSystem SR630 V4 and SR650 V4 share the same system
board, enabling common hardware support across both models.
4. PCIe slot occupation: The Compute Complex Neptune Core Module and its hose brackets, inlet/outlet pipes, and leakage detection sensor occupy the location reserved for the low-profile PCIe slot 8.

New options in ThinkSystem V4 servers
Handling a Compute Complex Neptune Core Module
If a server is installed with a Compute Complex Neptune Core Module, note the following:
• If you need to replace the processor board, the system I/O board, or the processor, order a shipping bracket (FRU: 03NX955).
• When re-installing a Processor Neptune Core Module, check the thermal pads and replace any that are damaged or missing (FRU: 03NX956).

Shipping bracket
Thermal pads

Note: For complete replacement procedures, refer to the SR630 V4 User Guide on Lenovo Docs.
New options in ThinkSystem V4 servers
ThinkSystem SR650a V4 product overview
The SR650a V4 is a 2U two-socket (2U2S) rack server based on the SR650 V4 with added support for four
double-width GPUs, including the NVIDIA H100 NVL 94GB, or eight single-width GPUs. It features two Intel
Xeon 6700-series or 6500-series processors (code name: Granite Rapids). The chassis supports up to eight
2.5-inch SAS/SATA, NVMe, or AnyBay hot-swap drive bays or eight E3.S 1T NVMe hot-swap drives for local
storage at the front.
The following SR650a V4 machine types and warranties are available:
7DGC – One-year warranty
7DGD – Three-year warranty

New options in ThinkSystem V4 servers
SR650a V4 front riser card combinations
The SR650a V4 has front slots for GPUs, either four double-width GPUs or up to eight single-width GPUs.
The following figure shows the locations of the front-accessible slots.
(Front view: Riser 6 holds slots 16, 17, 18, and 19; Riser 7 holds slots 20, 21, 22, and 23.)

With one CPU installed:
• Two x16 slots in Riser 7 (slots 21, 23)
• Four x8 slots in Riser 7 only (slots 20, 21, 22, 23)

With two CPUs installed:
• Four x16 slots in Riser 6 (slots 17, 19) and Riser 7 (slots 21, 23)
• Eight x8 slots in Riser 6 and Riser 7 (all slots)
• Four x8 slots in Riser 6 (slots 16, 18) and Riser 7 (slots 20, 22)

Configuration note: Both Riser 6 and Riser 7 must have slots configured, even in one-processor configurations where only Riser 7 is used.

