
Oracle Exadata Database Machine

KVM Virtualization Overview and Best Practices


for On-Premises RoCE-Based Systems
Exadata X11M and Exadata System Software 25.1 Update

Exadata Development

January 2025
Topics Covered

• Use Cases
• Exadata Virtualization Software Requirements
• Exadata Isolation Considerations
• Exadata KVM Sizing and Prerequisites
• Exadata KVM Deployment Overview
• Exadata KVM Administration and Operational Life Cycle
• Migration, HA, Backup/Restore, Upgrading/Patching
• Monitoring, Resource Management



Exadata Virtualization
High-Performance Virtualized Database Platform Using KVM

• Kernel-based Virtual Machine (KVM) hypervisor
  • Linux kernel-based type 2 hypervisor with improved performance
  • Exadata RoCE-based systems only (X11M, X10M, X9M-2, X8M-2)
• VMs provide CPU, memory, OS, and system admin isolation for consolidated workloads
  • Hosting, cloud, cross-department consolidation, test/dev, non-database or third-party applications
• Exadata VMs deliver near raw hardware performance
  • Database I/Os go directly to the high-speed RDMA Network Fabric, bypassing the hypervisor
• Combine with Exadata network and I/O prioritization to achieve unique full-stack isolation
• Trusted Partitions allow licensing by virtual machine
  • See the Oracle Exadata Database Machine Licensing Information User's Guide

(Figure: separate Finance, Sales, and Marketing VM clusters consolidated on one Exadata system)


Exadata Consolidation Options

• Dedicated database servers provide the best isolation
• Virtual machines provide good isolation, but require more management overhead and resource usage
  • VMs have separate OS, memory, CPUs, and patching
  • Isolation without needing to trust the DBA or system admin
• Database consolidation in a single OS is highly efficient but less isolated
  • Database Resource Manager isolation adds no overhead
  • Resources can be shared much more dynamically
  • But admins must be trusted to configure systems correctly
• The best strategy is to combine VMs with database-native consolidation
  • Multiple trusted DBs or Pluggable DBs in a VM
  • Few VMs per server, to limit the overhead of fragmenting CPUs, memory, software updates, patching, etc.

(Figure: consolidation spectrum from Dedicated DB Servers, through Virtual Machines, to many DBs in one server with Database Multitenant; isolation increases toward dedicated servers, efficiency toward multitenant)


Software Architecture Comparison
Database Server: Bare Metal / Physical versus Virtualized

Bare Metal / Physical database server:
• Oracle GI/DB homes
• Exadata (Linux, firmware)

Virtualized database server:
• KVM host: Exadata (Linux w/ KVM, firmware)
• KVM guests (Guest-1 .. Guest-n), each with its own Oracle GI/DB homes and Oracle Exadata (Linux)


Differences Between Physical and Virtual Deployments

Topic | How Virtual differs from Physical
Reduced licensing option | Use Trusted Partitions to license database options within specific VM clusters
Cluster configuration | System has one or more VM clusters, each with its own Grid Infrastructure / Database software installations
Network isolation | Use Secure Fabric to isolate each cluster's infrastructure network while sharing underlying Exadata storage
Exadata storage configuration | Separate grid disks / ASM disk groups or Exascale vaults for each cluster
Database server disk configuration | Default file system sizes are small; Grid Infrastructure and Database software homes are attached as separate file systems
Software updates | Database servers require separate KVM host (Linux, firmware) and KVM guest (Linux) updates
EXAchk | Run once for KVM host + storage servers + switches, and once for each VM cluster
Enterprise Manager | Enterprise Manager + Oracle Virtual Infrastructure plug-in + Exadata plug-in


Requirements

• Hardware
• Exadata systems with RDMA over Converged Ethernet (RoCE) networks – X11M, X10M, X9M-2,
X8M-2

• Software
• Review MOS 888828.1 for recommended and minimum required versions
• KVM Host
• Virtualization using Oracle Linux Kernel-based Virtual Machine (KVM)
• KVM host and guests can run different Exadata System Software versions
• KVM Guests
• Each guest runs Exadata System Software isolated from other guests
• Each guest runs Grid Infrastructure and Database software isolated from other guests
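Since the host and guests can run different versions, verify each against MOS 888828.1 with the standard imageinfo utility; a minimal check, run as root:

# imageinfo -ver     (in the KVM host)
# imageinfo -ver     (repeated inside each KVM guest; versions may differ)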



Interoperability

• Interoperability between KVM/RoCE and Xen/InfiniBand


• KVM supported only with RoCE interconnects
• Xen supported only with InfiniBand interconnects
• X8 and earlier systems, whether upgraded to or freshly deployed with the latest Exadata software, remain Xen-based
• Cannot inter-rack RoCE and InfiniBand
• Separate KVM/RoCE and Xen/InfiniBand systems can be used in same Data Guard / GoldenGate
configuration
• e.g., KVM-based system as primary, separate Xen-based system as standby

• Migration from Xen to KVM


• Move database using Data Guard, GoldenGate, RMAN, ZDM



Security

• Storage Isolation - Each VM RAC cluster has own Exadata grid disks and ASM Disk Groups
• Setting Up Oracle ASM-Scoped Security on Oracle Exadata Storage Servers
• https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmsq/exadata-security-practices.html

• Network Isolation - 802.1Q VLAN Tagging for Client and Admin Ethernet Networks
• Configured w/ OEDA during deployment (requires pre-deployment switch config)
• https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmmn/managing-oracle-vm-guests-kvm.html

• RoCE Network Isolation - Secure RDMA Fabric Isolation with Oracle Linux KVM
• https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmin/exadata-network-requirements.html

• KVM Guest Secure Boot with UEFI (Exadata 24.1)


• https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmsq/exadata-security-features.html

• RESTful remote access for storage server administration through ExaCLI


• https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmmn/exacli.html
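For example, a storage server can be administered remotely through ExaCLI without logging in to the cell OS; a minimal sketch, assuming a cell user named exa_admin with appropriate roles has already been created on the cell:

# exacli -l exa_admin -c dm01celadm01 -e "list griddisk attributes name, availableTo"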



Secure RDMA Fabric Isolation for RoCE

• Exadata Secure Fabric for RoCE systems implements network isolation for virtual machines while allowing access to common Exadata Storage Servers
  • Each VM cluster is assigned a private network
  • VM clusters cannot communicate with each other
  • All VMs can communicate with the shared storage infrastructure
• Security cannot be bypassed
  • Enforcement is done by the network card on every packet
  • Rules are programmed automatically by the hypervisor
• Enabled by default in OEDA (Exadata 25.1) for new on-premises deployments


System Sizing and VM Resource Allocation

• Maximum of 50 KVM guests per database server (X11M, X10M with Exadata 24.1 and later)
  • X10M with Exadata 23.1, X9M, X8M: maximum of 12 KVM guests per database server
  • X11M-Z and Eighth Rack systems: maximum of 4 KVM guests per database server
• Determine the peak CPU, memory, and disk space needed by each database
  • Perform a sizing evaluation prior to deployment, and configure OEDA accordingly
  • Consider KVM host reserved memory
  • Consider KVM host reserved CPU
  • Consider long-term local disk file system growth in each KVM guest
  • Long-lived KVM guests should budget for full space allocation (assume no benefit from sparseness and shareable reflinks)
• Each VM cluster has its own grid disks and disk groups and/or Exascale vaults
• Contact Oracle for sizing guidance


VM Memory Sizing

• Cannot over-provision physical memory
  • Sum of all KVM guest memory + KVM host reserved memory <= installed physical memory (worked example below)
• KVM Host Reserved Memory
  • The KVM host reserves a portion of installed memory
  • Not available to KVM guests; enforced by vm_maker
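As a worked example of this constraint, using the 24 x 64 GB configuration from the table on the next slide: installed memory is 1536 GB, of which 1390 GB is available to guests (the host reserves the remaining ~146 GB). One 690 GB guest plus two 350 GB guests fits exactly (690 + 350 + 350 = 1390), while four 400 GB guests (1600 GB total) would exceed the limit and be rejected by vm_maker.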


VM Memory Sizing

• KVM guest memory sizing
  • Total VM memory available (see table): allocate to a single guest or divide among multiple guests
  • Minimum 16 GB memory per guest, to support the OS, GI/ASM, a starter DB, and a few connections
  • VM memory size cannot be changed online; a guest restart is required

Memory Config | Supported Platforms (X11M X10M X9M X8M) | Installed Memory (GB) | VM Memory (GB)
24 x 128 GB | ● ● | 3072 | 2800
24 x 96 GB | ● ● | 2304 | 2090
32 x 64 GB | ● | 2048 | 1870
24 x 64 GB | ● ● ● ● | 1536 | 1390
12 x 96 GB | ● ● | 1152 | 1010
16 x 64 GB | ● | 1024 | 920
12 x 64 GB | ● ● ● | 768 | 660
16 x 32 GB | ● ● ● | 512 | 440
12 x 32 GB | ● ● ● | 384 | 328
VM CPU Sizing

• CPU over-provisioning is allowed
  • Up to 2x over-provisioning permitted with multiple VMs
  • Exceptions: no CPU over-provisioning on X10M systems with 512 GB memory or when Capacity-on-Demand is used
  • With the large increase in cores on X10M, CPU over-provisioning use cases decrease significantly compared to previous Exadata hardware
  • Performance degradation may occur if all guests become fully active when over-provisioning
• Number of vCPUs assigned to a VM can be changed online (see the sketch below)
• KVM Host Reserved CPU
  • Host is allocated 4 vCPUs (2 cores); not available to guests
  • X11M-Z and Eighth Rack hosts are allocated 2 vCPUs (1 core)

Note: 1 vCPU == 1 hyper-thread; 1 core == 2 hyper-threads == 2 vCPUs
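Because vCPU counts can change online, resizing is a single vm_maker command in the KVM host; a sketch with an illustrative guest name (verify the option syntax against the Maintenance Guide for your release):

# vm_maker --set --vcpu 8 --domain dm01dbadm01vm01.example.com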
VM CPU Sizing

• Guest CPU sizing (X11M with 2 x 96-core CPUs)
  • Single guest: minimum 4 vCPUs, maximum 380 vCPUs
  • Sum of all guests' vCPUs: maximum 380 vCPUs with no over-provisioning, 760 vCPUs with 2x over-provisioning [1][2]
• X11M-Z (1 x 32-core CPU)
  • Single guest: maximum 62 vCPUs
  • Sum of all guests' vCPUs: maximum 62 with no over-provisioning, 124 with 2x over-provisioning [1]

Hardware | Min vCPU per guest | Max vCPU per guest | Max over-provisioned vCPU, all guests
X11M, X10M | 4 | 380 | 760 [1][2]
X11M-Z, X10M Eighth | 4 | 62 | 124 [1]
X9M-2 | 4 | 124 | 248
X9M-2 Eighth | 4 | 62 | 124
X8M-2 | 4 | 92 | 184
X8M-2 Eighth | 4 | 46 | 92

1 – No CPU over-provisioning when Capacity-on-Demand is used
2 – No CPU over-provisioning on systems with 512 GB memory
Note: 1 vCPU == 1 hyper-thread; 1 core == 2 hyper-threads == 2 vCPUs
VM Storage Considerations

Database files and Grid Infrastructure shared files
• Default: stored in Exadata storage managed by ASM
• Optional (Exadata 24.1): stored in Exadata storage managed by Exascale
  • Native Exascale support in Database 23ai
  • Database 19c may use Exascale Direct Volume (EDV) devices in the VM guest as grid disks in ASM disk groups

VM guest disk images
• Default: stored on KVM host local disk, limited by local disk size (3.84 TB)
• Optional (Exadata 24.1): stored in Exadata storage managed by Exascale (EDV devices in the KVM host)

Storage Type | Storage Options
Database files (data, control, online/archived redo, etc.) and Grid Infrastructure shared files (OCR, voting files) | ASM (Automatic Storage Management); Exascale native (GI/DB 23ai only); Exascale EDV devices in VM guest + ASM (GI/DB 19c and 23ai)
VM guest disk images | KVM host local disk; Exascale EDV devices in KVM host


Storage for VM Guest Disk Images – KVM Host Local Disk

• X11M, X10M database server: 2 x 3.84 TB NVMe drives configured as RAID1
  • Default local disk space available for VMs is 1.46 TiB, online resizable to 3.4 TiB
  • Option to add 2 more 3.84 TB NVMe drives (RAID1), increasing local disk space to 6.9 TiB
• Default disk space used per KVM guest is 228 GiB
  • Extend after initial deployment by allocating new disk images on KVM host local disk (see the sketch below)
  • Extend with shared storage (e.g., ACFS, DBFS, external NFS, OCI File Storage) for user files
  • Do not use shared storage for Oracle/Linux binaries, configuration, or diagnostic files; access/network issues may cause a system crash or hang
• KVM guest local file system disk space over-provisioning is possible but not recommended
  • Allocated space is initially much lower than apparent space due to sparseness and shareable reflinks (with multiple VMs), but it grows over time as shared space diverges and becomes less sparse
  • Long-lived KVM guests should budget for full space allocation (assume no sparse/reflink benefit)
  • Over-provisioning may cause unpredictable out-of-space errors or prevent restoring a disk image backup
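Extending a guest's local storage is done from the KVM host by creating and attaching a new disk image; a hedged sketch with illustrative names and paths (confirm the exact options against the Maintenance Guide for your release):

# vm_maker --create --disk-image /EXAVMIMAGES/dm01dbadm01vm01_u01_extra.img --attach --domain dm01dbadm01vm01.example.com

The new device is then partitioned and extended into the guest's LVM/file system from inside the guest.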


Storage for VM Guest Disk Images – Exascale

• Enables VM image files to be located on shared Exascale storage as Exascale Volumes
  • Significantly increases the available storage capacity for VM images
• Requires Exascale to be deployed in the environment, and the VM cluster must be created with OEDA specifying that Exascale will be used for VM storage
• Exascale Volumes can also be used to provide extended shared storage using ACFS or other Linux file systems

(Figure: two Exadata database servers each hosting customer VMs; VM1 and VM2 run an HR RAC database, with their volumes stored in @HR_Vault on shared Exadata Exascale storage and accessed over RDMA)


Storage for Database and Grid Infrastructure – ASM

• Storage on cells may be shared between ASM and Exascale
• Spread the ASM disk groups for each VM cluster across all disks on all cells
  • Every VM cluster has its own grid disks
  • Disk group sizing for the initial VM clusters should account for future VM additions
  • Using all space initially will require shrinking an existing disk group before adding a new one
• Enable ASM-scoped security to limit grid disk access (see the sketch after the table)

VM Cluster | Cluster Nodes | Grid Disks (DATA/RECO for all clusters on all disks in all cells)
clu1 | db01vm01, db02vm01 | DATAC1_CD_{00..11}_cel01, RECOC1_CD_{00..11}_cel01; DATAC1_CD_{00..11}_cel02, RECOC1_CD_{00..11}_cel02; DATAC1_CD_{00..11}_cel03, RECOC1_CD_{00..11}_cel03
clu2 | db01vm02, db02vm02 | DATAC2_CD_{00..11}_cel01, RECOC2_CD_{00..11}_cel01; DATAC2_CD_{00..11}_cel02, RECOC2_CD_{00..11}_cel02; DATAC2_CD_{00..11}_cel03, RECOC2_CD_{00..11}_cel03
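A minimal ASM-scoped security sketch for clu1 (key value and exact clauses illustrative; follow the security guide linked earlier for the complete procedure, including cellkey.ora in the guests):

CellCLI> CREATE KEY
(returns a random hex key; run once on any cell)
CellCLI> ASSIGN KEY FOR 'clu1'='<key-from-CREATE-KEY>'
(repeat on every cell)
CellCLI> ALTER GRIDDISK DATAC1_CD_00_cel01 availableTo='clu1'
(repeat for all of clu1's grid disks; clu1 guests then present the same key via cellkey.ora)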
Storage for Database and Grid Infrastructure – Exascale

• Cell disks may be shared between Exascale and ASM
  • Cell disk partitions can be configured as pool disks (Exascale) or grid disks (ASM)
• One storage pool spans all pool disks of the same type (HC or EF) on all cells
• Each database uses Exascale vaults or ASM disk groups for storage, not both
  • Databases within a VM cluster can be a mixture of Exascale and ASM
• Recommend one vault per VM cluster

VM Cluster | Cluster Nodes | Exascale storage pool | Exascale vault
Cluster-c1 | db01vm01, db02vm01 | hcpool1 | Cluster-c1vault
Cluster-c2 | db01vm02, db02vm02 | hcpool1 | Cluster-c2vault


Deployment Specifications and Limits

Category | Metric | X8M-2 | X9M-2 | X10M | X11M
VMs | Max guests per database server | 12 | 12 | 50 (12 [1]) | 50
VMs | Max guests per Eighth Rack and X11M-Z database servers | 4 | 4 | 4 | 4
Memory | Min GB per guest | 16 | 16 | 16 | 16
Memory | Max GB per guest / all guests | 1390 [2] | 1870 [2] | 2800 [2] | 2800 [2]
CPU/vCPU | Min vCPU per guest | 4 | 4 | 4 | 4
CPU/vCPU | Max vCPU per guest | 92 | 124 | 380 | 380
CPU/vCPU | Max over-provisioned vCPU all guests | 184 | 248 | 760 [3][4] | 760 [3][4]
Disk space | Usable TiB per DB server for all guests | 3.15 | 3.40 / 6.97 [5] | 3.40 / 6.97 [5] | 3.40 / 6.97 [5]
Disk space | Used GiB per guest at deployment | 141 | 228 | 228 | 228

1 – X10M supports 50 guests with Exadata 24.1 and later; Exadata 23.1 and earlier support 12
2 – Using maximum memory configuration
3 – No CPU over-provisioning when Capacity-on-Demand is used
4 – No CPU over-provisioning on systems with 512 GB memory
5 – When local disk is expanded to 4 drives


Deployment Overview
Oracle Exadata Deployment Assistant

The Oracle Exadata Deployment Assistant (OEDA) is the only tool that creates VMs on Exadata.

1. Create the configuration with the OEDA Web User Interface
2. Prepare the customer environment for OEDA deployment
   • Configure DNS, switches for VLANs, and Secure Fabric
3. Prepare the Exadata system for OEDA deployment
   • # switch_to_ovm.sh, applyElasticConfig.sh
4. Deploy the system with the OEDA Deployment Tool (see the example after this list)
   • # install.sh
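install.sh is driven by the XML configuration saved from the OEDA Web UI; a typical invocation pattern (configuration file name is illustrative):

# ./install.sh -cf Customer-name.xml -l      (list all deployment steps)
# ./install.sh -cf Customer-name.xml -s 1    (run a single step)
# ./install.sh -cf Customer-name.xml -r 2-18 (run a range of steps)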


OEDA Configuration Tool
Choose to deploy Exascale

• The initial configuration choice is whether to enable Exascale
• Storage pool size and VM use of Exascale are configured on later OEDA screens
• Exascale can also be deployed into existing environments


OEDA Configuration Tool
Choose Virtual or Physical Database Server Configuration

• Virtual or physical configuration
  • All Linux VM
  • All Linux Physical
  • Custom (some servers VM, some servers physical)
• An individual database server is configured as either VM or physical


OEDA Configuration Tool
Configure Exascale Storage Pool

• Select cells for the Exascale storage pool
  • Recommend selecting all available cells
• Configure the storage pool size
  • Space allocated to the storage pool is used for Exascale
  • Unallocated space remains available for future Exascale expansion or ASM storage


OEDA Configuration Tool
Define Clusters

• Decide
  • The number of VM clusters to create
  • The database servers and cells that will make up those VM clusters
  • Recommend using all cells for each cluster
• What is a VM cluster?
  • One or more guests on different database servers running Oracle GI/RAC, each accessing the same shared Exadata storage managed by ASM or Exascale


OEDA Configuration Tool
Configure VM Use of Exascale

• Decide
  • Vault size
  • Whether to store VM guest disk images in Exascale vault(s) or on local disks
  • Whether to store GI/DB files in Exascale vault(s) (23ai only) or in ASM disk groups

Note:
• A VM may use Exascale for VM image files and ASM for database file storage
• A 23ai database may use only Exascale or only ASM for file storage


OEDA Configuration Tool
Cluster Configuration

• Each VM cluster has its own configuration
  • OS users and groups
  • VM size (memory, CPU)
  • Grid Infrastructure version and software location
  • Exadata software version
  • ASM disk groups (and underlying storage grid disks)
  • Database version(s) and software location(s)
  • Starter database configuration
  • Client, Backup, and Admin networking configuration


OEDA Configuration Tool
Advanced Network Configuration

• Admin and Client Networks: 802.1Q VLAN Tagging
  • To separate Admin and Client Network traffic across VMs, use a distinct VLAN ID and IP info for each cluster
  • Admin and Client Network switches must have VLAN tag configuration done before OEDA deployment


OEDA Configuration Tool
Advanced Network Configuration

• Private network: Secure Fabric
  • Secure RDMA Fabric Isolation uses RoCE VLANs to enable strict network isolation for Oracle RAC clusters
  • Multiple VM clusters share storage server resources but cannot communicate with each other
  • Secure Fabric is chosen by default for new on-premises deployments starting with the Oct 2024 OEDA release


OEDA Configuration Tool
Installation Template

• Verify proper settings for all VM clusters in the Installation Template so the environment can be properly configured before deployment (DNS, switches, VLANs, etc.)


OEDA Configuration Tool
Network Requirements for VM Deployment

Component | Network | Example hostname
KVM host (one per database server) | Mgmt eth0 | dm01dbadm01
KVM host | Mgmt ILOM | dm01dbadm01-ilom
KVM guest (one or more per database server) | Mgmt eth0 | dm01dbadm01vm01
KVM guest | Client bondeth0 | dm01client01vm01
KVM guest | Client VIP | dm01client01vm01-vip
KVM guest | Client SCAN | dm01vm01-scan
KVM guest | Private RoCE | dm01dbadm01vm01-priv
Storage server (same as physical) | Mgmt eth0 | dm01celadm01
Storage server | Mgmt ILOM | dm01celadm01-ilom
Storage server | Private RoCE | dm01celadm01-priv
Switches (same as physical) | Mgmt and Private | dm01sw-adm, dm01sw-roce
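Before deployment, every hostname in this table must already resolve in the customer DNS; a quick spot check from any admin host (names are the examples above; the SCAN name should resolve to three addresses):

# nslookup dm01dbadm01vm01
# nslookup dm01vm01-scan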


Guest Disk Layout

File system | Size | Use
/ (root) | 15G | Root file system
/u01 | 20G | Oracle BASE
/u01/app/<ver>/grid | 50G | Grid Infrastructure software home
/u01/app/oracle/product/<ver>/dbhome_1 | 50G | Database software home
/tmp | 3G | /tmp
/home | 4G | User home directories
/var | 2G | /var
/var/log | 18G | System logs
/var/log/audit | 1G | System audit logs
/crashfiles | 20G | System kdump kernel crash vmcore
/boot | 512M | System boot
Other LVM space | 44G | LVDbSwap1, LVDbSys2, LVDbVar2, LVDoNotRemoveOrUse
TOTAL | 228G |
Exadata KVM Basic Maintenance

• Primary maintenance tools
  • OEDACLI: the OEDA Command Line Interface (see the cloning sketch below)
  • vm_maker
• Refer to the Exadata Database Machine Maintenance Guide, "Managing Oracle Linux KVM Guests"
  • https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmmn/managing-oracle-vm-guests-kvm.html
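As an illustration of OEDACLI-driven life cycle operations, cloning an existing guest onto another database server follows the load/action/deploy pattern; a sketch with illustrative names (exact clauses vary by release, see the OEDACLI reference):

oedacli> LOAD FILE NAME='Customer-name.xml'
oedacli> CLONE GUEST SRCNAME='dm01dbadm01vm01' TGTNAME='dm01dbadm03vm01'
oedacli> SAVE ACTION
oedacli> MERGE ACTIONS
oedacli> DEPLOY ACTIONS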


Exadata KVM Migration

• Migrate databases on an existing system to a new Exadata KVM system
  • Methods
    • Create a Data Guard standby on the new Exadata KVM system and switch over (minimal downtime)
    • Duplicate existing databases to the new Exadata KVM system
    • Back up existing databases, restore them on the new Exadata KVM system
• Convert an existing RoCE-based Exadata system deployed bare metal/physical to KVM
  • Methods
    • Back up existing databases, redeploy the system to KVM, restore the databases
    • Convert one or a subset of database servers at a time to KVM


Backup/Restore of Virtualized Environment

• KVM host
  • Standard backup/restore practices to an external location
• KVM guest
  • Backup within the KVM host: snapshot the VM disk images and back up the snapshot externally (see the sketch below)
  • Backup within the KVM guest: standard OS backup/restore practices apply
  • If over-provisioning local disk space: restoring a VM backup will reduce or eliminate space savings (i.e., relying on over-provisioning may prevent a full VM restore)
• Database backup/restore
  • Use standard Exadata MAA practices with RMAN, ZDLRA, and Cloud Storage
• Refer to the Exadata Database Machine Maintenance Guide
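Since /EXAVMIMAGES is an xfs file system with reflink support (the same mechanism behind the shareable-reflink space savings noted earlier), a point-in-time copy of a guest's disk images can be taken and then archived externally; a simplified sketch with illustrative paths, not the full documented procedure (which also covers running guests):

# cp --reflink=always /EXAVMIMAGES/GuestImages/dm01dbadm01vm01.example.com/System.img /EXAVMIMAGES/Backup/System.img
# tar -czf /remote/backup/dm01dbadm01vm01.tar.gz /EXAVMIMAGES/Backup/System.img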


Updating Software

Component to update | Method
Storage servers | Same as physical: run patchmgr from any server with ssh access to all storage servers, or use the Storage Server Cloud Scale Software Update feature
RDMA Network Fabric switches | Same as physical: run patchmgr from any server with ssh access to all switches
Database server, KVM host | Run patchmgr from any server with ssh access to all KVM hosts. The KVM host update upgrades database server firmware. A KVM host reboot requires restart of all local VMs. KVM guest software is not updated during a KVM host update. KVM host and guests do not have to run the same version, although specific update ordering may be required (see MOS 888828.1)
Database server, KVM guest | Run patchmgr from any server with ssh access to all KVM guests. Typically done on a per-VM-cluster basis (e.g., vm01 on all nodes, then vm02, etc.), or update all VMs on a KVM host before moving to the next
Grid Infrastructure / Database | Use Fleet Patching and Provisioning (FPP), OEDACLI, or standard upgrade and patching methods, maintained at per-VM-cluster scope. GI/DB homes should be mounted disk images, as at initial deployment
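Representative patchmgr invocations for each component, run from a driving node with ssh equivalence (the group files list the targets; options shown are the commonly documented ones and should be checked against MOS 888828.1 for the exact release):

# ./patchmgr -cells cell_group -patch
# ./patchmgr --roceswitches roce_group --upgrade
# ./patchmgr -dbnodes dbs_group -upgrade -iso_repo /u01/patches/exadata_repo.zip -target_version 25.1.0.0.0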


Health Checks and Monitoring

• Exachk (AHF) runs in the KVM host and KVM guests (see the run pattern below)
  • Run in one KVM host: evaluates all KVM hosts, cells, and switches
  • Run in one KVM guest of each VM cluster: evaluates all KVM guests and the GI/DB of that cluster
• Exadata Storage Software Versions Supported by the Oracle Enterprise Manager Exadata Plug-in (MOS 1626579.1)
• ExaWatcher runs in the KVM host and KVM guests
• Database and Grid Infrastructure monitoring practices still apply
• Considerations
  • The KVM host is not sized to accommodate Enterprise Manager or custom agents
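In practice, the coverage model means one exachk run per infrastructure scope and one per cluster scope; for example (exachk is part of AHF; invocation shown without options, which runs the default checks):

# exachk     (in one KVM host: covers all KVM hosts, cells, and switches)
# exachk     (in one guest of each VM cluster: covers that cluster's guests and GI/DB)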


Exadata MAA/HA

• Exadata MAA failure/repair practices apply
  • Refer to MAA Best Practices for Oracle Exadata Database Machine
• Live Migration is not supported; use RAC to move workloads between nodes


Resource Management

• Manage Exadata storage resources using Exadata I/O Resource Management (IORM)
• Manage resources within VMs and within a cluster using Database Resource Management (DBRM)
  • cpu_count is set at the database instance level when running multiple databases in a VM
  • Recommended minimum: cpu_count=2 (see the example below)
• No local disk resource management or prioritization
  • I/O-intensive workloads should not use local disks
  • For higher I/O performance and bandwidth, use ACFS or NFS
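For example, capping a small database inside a VM at two CPUs with DBRM, and setting an IORM objective on the cells (objective 'auto' is one of the documented values):

SQL> ALTER SYSTEM SET cpu_count=2 SCOPE=BOTH SID='*';
CellCLI> ALTER IORMPLAN objective='auto'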


Exadata KVM / Xen Comparison

Category | KVM-based | Xen-based
Terminology | host, guest | dom0, domU
Hardware support | X8M-2 through X11M (RoCE network) | X2-2 through X8-2 (InfiniBand network)
Hypervisor | KVM | Xen
VM management | vm_maker, OEDACLI | xm, domu_maker, OEDACLI
Database server software update | patchmgr using the same ISO/yum repo for KVM host and guests | patchmgr using different ISO/yum repos for dom0 and domUs
File system configuration | xfs | ext4, and ocfs2 for EXAVMIMAGES


Our mission is to help people see
data in new ways, discover insights,
unlock endless possibilities.
