Network Guidelines for Windows 2012 Hyper-V Clusters

New guidelines are here for networking with Windows Server 2012 Hyper-V clusters.

John Savill

June 19, 2013


Q: Have the network guidelines for Hyper-V clusters in Windows Server 2012 changed from Windows Server 2008 R2?

A: In Windows Server 2008 R2, there were very specific guidelines about the separate networks required for a Hyper-V cluster. There was a separate 1Gbps network for each of the following:

  • Management and storage traffic

  • Virtual Machine traffic

  • Live Migration traffic

  • Cluster/CSV traffic

Traffic such as iSCSI, along with any extra resiliency, would require additional separate connections. If you had 10Gbps networks, you would use Quality of Service (QoS) to ensure sufficient bandwidth for the different types of traffic.

This is all documented on the Microsoft website. So has this changed in Windows Server 2012?

Certainly, if you have multiple 1Gbps connections, you could use the same model as in Server 2008 R2, with one network for each type of traffic. The challenge with this approach, however, is that there's no resiliency if a network adapter fails.

Also, networks such as the management and cluster networks generally carry very small amounts of traffic yet consume 50 percent of the available bandwidth, and the Live Migration network isn't used frequently. That leaves the virtual machines (VMs) with only 25 percent of the bandwidth.

With native NIC teaming in Windows Server 2012, an alternative approach is possible: additional virtual network adapters can be added to the Hyper-V switch for use by the Hyper-V host, as documented in the FAQ "Q: How can I add additional virtual network adapters to my Hyper-V host using a virtual switch?" In this model, it's now possible to configure the following:

  • Add all four NICs to a native Windows Server 2012 NIC team.

  • Create a Hyper-V switch connected to the NIC team.

  • On the Hyper-V host, add three virtual network adapters connected to the Hyper-V switch for use with management traffic, live migration, and cluster traffic.

  • Use QoS to ensure the different types of traffic have a minimum amount of bandwidth guaranteed in times of contention. This means that when no other traffic needs the network bandwidth, any of the traffic types could consume nearly all of the available aggregated bandwidth (4Gbps in this example), giving the virtual machines access to more bandwidth.

Thus, all the networks have resiliency against a single NIC failure. I discuss the QoS configuration in the article "Implementing Windows Server 2012 Quality of Service." The two key commands to create a virtual adapter and assign it a minimum bandwidth weight are as follows:

    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "External Switch"
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
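For completeness, here's a sketch of what the full converged configuration might look like from end to end. The team name, physical NIC names, and weight values below are illustrative assumptions rather than required values; adjust them for your environment.

    # Create a NIC team from the four physical adapters (adapter names are examples)
    New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

    # Create a Hyper-V switch bound to the team, using weight-based minimum bandwidth
    New-VMSwitch -Name "External Switch" -NetAdapterName "HostTeam" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Add host virtual network adapters for management, live migration, and cluster/CSV traffic
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "External Switch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "External Switch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "External Switch"

    # Guarantee each traffic type a minimum share during contention (example weights)
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
    Set-VMSwitch -Name "External Switch" -DefaultFlowMinimumBandwidthWeight 40

The weights are relative, not percentages; the default flow weight covers the VM traffic that passes through the switch without its own reservation.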

Another approach would be to create two teams, each with two network adapters. Use one team on the host only for live migration, cluster, and management traffic; use the other team for VM traffic only.
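As a rough sketch of that two-team layout (the team and adapter names again being illustrative assumptions):

    # Team 1: host traffic only (management, live migration, cluster/CSV)
    New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

    # Team 2: virtual machine traffic only, bound to a Hyper-V switch
    New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent
    New-VMSwitch -Name "VM Switch" -NetAdapterName "VMTeam" -AllowManagementOS $false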

Essentially, the guidance is moving away from dedicated network adapters for each type of traffic (think one highway lane per traffic type) toward combining them and using QoS to guarantee minimum bandwidth to the virtual network adapters at the host level.

Here's a video explaining this and how to configure networks.
