HyperCore - VM-to-Node Affinity Feature Note

Uploaded by

caio.seman.cs
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
37 views8 pages

HyperCore - VM-to-Node Affinity Feature Note

Uploaded by

caio.seman.cs
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd

Version 1.0 - 07/2021
Table of Contents
VM-TO-NODE AFFINITY OVERVIEW
SINGLE VM-TO-NODE AFFINITY
    VM Migration Logs
VM-TO-VM ANTI-AFFINITY
VM-TO-VM AFFINITY
FEEDBACK & SUPPORT

VM-TO-NODE AFFINITY OVERVIEW
This feature note explains how to understand and use the VM-to-Node Affinity behavior included in
version 8.9 of Scale Computing HyperCore. Setting VM affinity to particular nodes is intuitive and straightforward,
and taking advantage of the affinity functionality gives you greater control over the performance of VMs on the
HyperCore system without introducing additional management overhead.

NOTE

The affinity scenarios below are demonstrated using a three-node system. The same affinity
principles apply to systems with four or more nodes.

The affinity behavior requires multiple nodes to handle VM migration and node failure in a
high-availability system. Single Node Systems (SNS) cannot use VM-to-Node Affinity.

When a user initially powers on a HyperCore VM, the system intelligently places that VM on the node with the
most available resources. This behavior covers the most common use cases and helps keep the HyperCore system
simple to manage and naturally well balanced. Power it on and forget it! However, there are also scenarios where it
may be helpful to associate a VM with a particular node using VM-to-Node Affinity, such as:

● Non-uniform hardware capabilities - “I want to run my ERP system on this new node with the fastest clock
speed!”
● VM-to-VM anti-affinity - “I don’t want my two domain controllers to run on the same node.”
● VM-to-VM affinity - “These two VMs perform better when they operate on the same node.”

As of version 8.9 of HyperCore, users can specify these settings by defining a preferred and backup node
on a VM-by-VM basis. To do this, simply live migrate a VM from one node to another; this sets an implicit affinity
to the target node and records the source node as the backup node for failover and rolling upgrades. These VM
settings persist through rolling upgrades as well as through full cluster shutdowns and restarts, in addition to the
node failure scenarios described below.
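
The short Python sketch below models this implicit rule. It is purely illustrative: the class and function names
are assumptions made for this example and are not part of HyperCore or its API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class VMAffinity:
    # Per-VM node preference, set implicitly by a user-initiated live migration.
    preferred_node: Optional[str] = None  # node the VM should normally run on
    backup_node: Optional[str] = None     # failover target for node failure and rolling upgrades

def live_migrate(affinity: VMAffinity, source_node: str, target_node: str) -> None:
    # Model of the rule described above: the migration target becomes the
    # preferred node, and the source node becomes the backup.
    affinity.preferred_node = target_node
    affinity.backup_node = source_node

vm = VMAffinity()
live_migrate(vm, source_node="Node 2", target_node="Node 1")
print(vm)  # VMAffinity(preferred_node='Node 1', backup_node='Node 2')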

SINGLE VM-TO-NODE AFFINITY
The most common reason to establish single VM-to-Node affinity would be to accommodate non-uniform hardware
capabilities in your HyperCore system.

In the example below, Node 1 goes offline and the Node 1 VMs are migrated to their backup nodes. When Node 1
rejoins the cluster, the VMs return to their preferred node. Follow the steps below for an example of how this
node-failure migration appears in the HyperCore web interface; a short sketch of the same sequence follows the steps.

1. VMs with affinity set to Node 1 are currently running on Node 1 to take advantage of a high clock speed CPU.

2. Node 1 fails, and the VMs are sent to their respective backup nodes.

3. Node 1 comes back online and rejoins the cluster, and the VMs begin migrating back to Node 1.

4. All VMs have migrated back to Node 1 according to their affinity settings.
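
As a simplified illustration of the sequence above, the sketch below simulates where each VM runs as Node 1 fails
and then rejoins. The placement rule and the VM names are assumptions made for this example only; this is not
HyperCore code.

def place_vm(preferred: str, backup: str, online_nodes: set) -> str:
    # A VM runs on its preferred node when that node is online,
    # otherwise it fails over to its backup node.
    return preferred if preferred in online_nodes else backup

online = {"Node 1", "Node 2", "Node 3"}
affinity = {"ERP-VM": ("Node 1", "Node 2"), "DB-VM": ("Node 1", "Node 3")}

# Step 1: all nodes are healthy, so both VMs run on their preferred node (Node 1).
print({vm: place_vm(p, b, online) for vm, (p, b) in affinity.items()})

# Step 2: Node 1 fails and the VMs move to their respective backup nodes.
online.discard("Node 1")
print({vm: place_vm(p, b, online) for vm, (p, b) in affinity.items()})

# Steps 3-4: Node 1 rejoins the cluster and the VMs migrate back to it.
online.add("Node 1")
print({vm: place_vm(p, b, online) for vm, (p, b) in affinity.items()})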

VM Migration Logs
When VMs are migrated between nodes as a result of their affinity settings, a condition is set on
the system to indicate this behavior. As each migration completes, the VMs that migrated are noted in the
system log, and once all VMs have returned to their preferred nodes, the condition is cleared.

The notices and logs below were generated during the migration that occurred in the example above.

VM-TO-VM ANTI-AFFINITY
There are many ways to accomplish VM-to-VM anti-affinity, but the most straightforward method is to set each VM's
preferred node to a different node and both VMs' backup node to the same third node.
HyperCore is designed to tolerate a single node failure, so in no scenario would those two VMs run on the
same node. In practice, simply migrate both VMs to the same node (Node 3 in the scenario below) and then migrate
one VM to Node 1 and the other to Node 2. This sets the affinity rules as follows:

● VM1
○ Preferred - Node 1
○ Backup - Node 3
● VM2
○ Preferred - Node 2
○ Backup - Node 3

Follow the node failure scenario below to see how the VMs move through the system according to their affinity
(a short sketch after the steps checks that the two VMs never share a node):

1. VM1 and VM2 are located on their preferred nodes; VM1 on Node 1 and VM2 on Node 2.

2. Node 2 goes offline. VM2 is then migrated to the backup, Node 3. If Node 1 had gone offline, VM1 would have moved to Node 3.

3. Node 2 comes back online, and VM2 begins returning to Node 2.

4. VM2 has returned to Node 2. At no point during the node failure did VM1 and VM2 run on the same node.
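
The sketch below checks that claim for the preferred/backup layout used in this example: under any single-node
failure, VM1 and VM2 land on different nodes. The placement rule is the same simplified assumption used in the
earlier sketch and is not HyperCore code.

def place_vm(preferred: str, backup: str, online_nodes: set) -> str:
    # Simplified placement rule: preferred node if online, otherwise backup.
    return preferred if preferred in online_nodes else backup

nodes = {"Node 1", "Node 2", "Node 3"}
vms = {"VM1": ("Node 1", "Node 3"), "VM2": ("Node 2", "Node 3")}

# HyperCore is designed to tolerate a single node failure, so only one
# node is taken offline at a time in this check.
for failed in sorted(nodes):
    online = nodes - {failed}
    placement = {vm: place_vm(p, b, online) for vm, (p, b) in vms.items()}
    assert placement["VM1"] != placement["VM2"], f"collision when {failed} fails"
    print(f"{failed} offline -> {placement}")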

VM-TO-VM AFFINITY
For VM-to-VM affinity, set the same preferred and backup nodes for both VMs. To
accomplish this, migrate both VMs to the same node (Node 2 in the example) and then migrate them both again to the
node you want them to prefer (Node 1 in the example). This sets the affinity rules as follows (a short sketch
after the list walks through the same two migrations):

● VM1:
○ Preferred - Node 1
○ Backup - Node 2
● VM2:
○ Preferred - Node 1
○ Backup - Node 2
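
As an illustration, the sketch below applies the implicit-affinity rule from the earlier sketch to the two
migrations described above. The function and the step-1 source nodes are stand-ins chosen for this example;
this is not a HyperCore API call.

def live_migrate(source_node: str, target_node: str) -> dict:
    # Migrating a VM makes the target node preferred and the source node the backup.
    return {"preferred": target_node, "backup": source_node}

# Step 1: move both VMs onto the same node (Node 2 in this example).
vm1 = live_migrate(source_node="Node 3", target_node="Node 2")
vm2 = live_migrate(source_node="Node 1", target_node="Node 2")

# Step 2: move both VMs again to the node that should be preferred (Node 1).
vm1 = live_migrate(source_node="Node 2", target_node="Node 1")
vm2 = live_migrate(source_node="Node 2", target_node="Node 1")

print(vm1, vm2)  # both: {'preferred': 'Node 1', 'backup': 'Node 2'}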

Walking through the node failure scenario below, you can see that both VMs fail over to the same backup node
(Node 2) when Node 1 goes down. When Node 1 rejoins the cluster, both VMs migrate back to their preferred node
(Node 1), maintaining VM-to-VM affinity.

1. VM1 and VM2 are running on Node 1, the preferred node. Node 2 is set as the backup node.

2. Node 1 goes offline. VM1 and VM2 move to their backup node, Node 2.

3. Node 1 comes back online, and the VMs begin returning to their preferred node based on their affinity settings.

4. VM1 and VM2 have returned to their preferred node, Node 1.

FEEDBACK & SUPPORT
DOCUMENT FEEDBACK
Scale Computing welcomes your suggestions for improving our documentation. Please send your feedback to
[email protected].

TECHNICAL SUPPORT AND RESOURCES

There are many technical support resources available. Access this document, and many others, at
http://www.scalecomputing.com/support/login/.

● Partner Portal - Partner and Distributor use only.

● User Community - Customer focused, including our online Forum.

©2020 Scale Computing. All rights reserved. Any and all other trademarks used are owned by their respective holders.
