A Nutanix cluster is equipped with four nodes. Four VMs on this cluster have been configured with a VM-VM anti-affinity policy and are each being hosted by a different node.
What occurs to the cluster and these VMs during an AHV upgrade?
During an AHV upgrade, the cluster ensures continuity by hosting two VMs on one node while the node being upgraded is in maintenance mode. The VM-VM anti-affinity policy is a preferential policy rather than an absolute rule, allowing the Acropolis Dynamic Scheduling (ADS) feature to take necessary actions, such as temporarily violating the anti-affinity policy to accommodate maintenance needs.
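For context, VM-VM anti-affinity on AHV is defined on a VM group from a CVM using acli. A minimal sketch, using hypothetical group and VM names (confirm the exact syntax against the acli reference for your AOS version):

acli vm_group.create app_group
acli vm_group.add_vms app_group vm_list=vm1,vm2,vm3,vm4
acli vm_group.antiaffinity_set app_group

Because the group policy is preferential rather than a hard rule, ADS can still place two of these VMs on the same host while another host is in maintenance mode for the upgrade.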
Shouldn't it be C?
Should be C. When you run NCC, it comes up with a message asking you to disable affinity rules.
Should be D for me. The MCI training course states that ADS takes the necessary actions in case of resource constraints, maintenance mode, or HA. It is like a "should run on" rule in VMware environments.
Yeah, but this is talking about an upgrade, not resource contention.
I'm sure that the correct answer is "D", because we simulated this situation in our environment.
I believe the answer is D. The statement below from the Prism 6.5 guide says the policy does not limit the ADS feature, which to me means ADS will do the needful and move the VM to the host best suited to handle the workload. https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_5:ahv-vm-anti-affinity-t.html "After you configure the group and then power on the VMs, the VMs that are part of the group are started (attempt to start) on the different hosts. However, this is a preferential policy. This policy does not limit the Acropolis Dynamic Scheduling (ADS) feature to take necessary action in case of resource constraints."
I believe D is incorrect; I know the answer is C.
It should not be C. The VM-VM anti-affinity policy only exists to keep the specified virtual machines apart, so that if a problem occurs on one host you do not lose both virtual machines. Here we have just one VM per host/node, so there is no reason to disable the VM-VM anti-affinity policy. Option D sounds like the reasonable answer.
From the Acropolis Upgrade Guide, under "Guest VMs Running in an Affinity/Anti-Affinity Rules Environment": "Hypervisor upgrades might not complete successfully in environments where third-party or other applications apply affinity or anti-affinity rules. For example, some antivirus appliances or architectures might install an antivirus scanning guest VM on each node in your cluster. This guest VM might not be allowed to power off or migrate from the host being upgraded, causing maintenance mode to time out. In this case, disable such rules or power off such VMs before upgrading." https://portal.nutanix.com/page/documents/details?targetId=Acropolis-Upgrade-Guide-v5_20:upg-hypervisor-upgrade-recommend-c.html
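As a side note on the "disable such rules or power off such VMs" guidance, the anti-affinity setting on an AHV VM group can be removed and later re-applied with acli. A rough sketch, using the same hypothetical group name as above (check the acli reference for your AOS version):

acli vm_group.antiaffinity_unset app_group
(run the hypervisor upgrade)
acli vm_group.antiaffinity_set app_group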
The answer is C.
C is the correct answer.
D is the correct answer. The question here refers only to VM-VM anti-affinity in an AHV upgrade scenario; VM-host affinity is not applied. So there is no AHV pre-check failure and no ungraceful power-off. https://portal.nutanix.com/page/documents/details?targetId=Acropolis-Upgrade-Guide-v5_20:upg-hypervisor-upgrade-ahv-c.html