Q: How is a Distributed Resource Scheduler (DRS) cluster’s current host load standard deviation calculated?

Greg Shields

March 29, 2011


A: A DRS cluster is load balanced when the level of consumed resources on each of its hosts is equivalent to that on the others. When it isn’t, the cluster is considered imbalanced, and VMs must be relocated to restore the balance.

Quantifying that imbalance is another of DRS’s jobs. Every five minutes, DRS determines the load on each host using the following equation:

host load = (sum of all VM entitlements) / (host capacity)

The VM entitlements in this equation refer to the CPU and memory resources being demanded by the VMs on that host, adjusted for restrictions such as reservations and limits. Overhead resources, like those required for the VMkernel and Service Console, as well as any HA admission control reservations, are also included, along with a 6 percent extra reservation.
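As a rough sketch of that per-host calculation, consider the Python below. VMware doesn’t publish the exact DRS accounting, so the function name, the treatment of overhead, and the way the 6 percent extra reservation is applied are all illustrative assumptions, not VMware’s actual implementation:

```python
def host_load(vm_entitlements, host_capacity, overhead=0.0, extra=0.06):
    """Return a host's load as (demanded resources) / (host capacity).

    vm_entitlements -- per-VM demanded CPU (MHz) or memory (MB), already
                       adjusted for each VM's reservation and limit
    host_capacity   -- the host's total capacity in the same unit
    overhead        -- VMkernel/Service Console needs plus any HA admission
                       control reservations (assumption: added to demand)
    extra           -- the 6 percent extra reservation (assumption: applied
                       as a multiplier on the summed demand)
    """
    demanded = (sum(vm_entitlements) + overhead) * (1 + extra)
    return demanded / host_capacity

# Example: three VMs demanding 1200, 800, and 2000 MHz on a 9600 MHz host
print(host_load([1200, 800, 2000], 9600, overhead=400))  # ~0.486
```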

Once each host’s load is quantified, DRS can determine the cluster’s average load, as well as how far each host’s load deviates from that average. The statistical measure of that distance from the average is the standard deviation, and it's displayed inside the vSphere Client as the current host load standard deviation.
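A minimal sketch of that second step, assuming illustrative per-host load values (each computed as above) and a population standard deviation, since the exact form DRS uses isn’t documented here:

```python
import statistics

# Illustrative per-host loads, one value per host in the cluster
host_loads = [0.62, 0.48, 0.55, 0.71]

average_load = statistics.mean(host_loads)
# Assumption: population standard deviation rather than the sample form
current_host_load_std_dev = statistics.pstdev(host_loads)

print(f"average host load: {average_load:.3f}")
print(f"current host load standard deviation: {current_host_load_std_dev:.3f}")
```

The closer that standard deviation is to zero, the more evenly the hosts are loaded; DRS recommends (or performs) VM migrations when it drifts above the cluster’s target value.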
