Getting Started with Hyper-V in Windows Server 2012
Major updates make Hyper-V a must
April 12, 2013
Windows Server 2012 is shaking up the world of server virtualization. In particular, major updates to Hyper-V make the hypervisor a viable option for enterprises that previously might not have considered it.
Windows Server 2012 Hyper-V features the highest levels of scalability available: virtual machines (VMs) can have up to 64 virtual CPUs (vCPUs) and up to 1TB of assigned memory. You can create virtual hard disks (VHDs) of 64TB, thanks to the new VHDX format, which removes the need to use pass-through storage. Put those items together and pretty much any workload can be virtualized with Hyper-V.
Of course, all the scalability in the world doesn't mean much without features that can take advantage of it. Fortunately, Windows Server 2012 delivers those as well. Storage live migration lets you move a VM's storage to a new location with no downtime. Shared-nothing live migration allows VMs to be migrated between Windows Server 2012 Hyper-V hosts with no downtime, even without being clustered and without common storage. Together, these features offer complete mobility of VMs in the data center.
New network and storage options—including single root I/O virtualization (SR-IOV), Server Message Block (SMB) 3.0, virtual Fibre Channel, and network virtualization—make Hyper-V an appealing hypervisor choice. However, most organizations have not considered Hyper-V before and might not be clear about how to get started. In this article, I cover some of the basics of getting up and running with Windows Server 2012 Hyper-V.
What Hardware Do I Need?
For a basic, single-server setup, you need a server with a 64-bit processor that supports hardware-assisted virtualization. For Intel processors, this is the Intel Virtualization Technology (VT) feature; for AMD processors, you need AMD Virtualization (AMD-V). Pretty much any server processor manufactured within the past 5 years should have this capability. But if you aren't sure about your hardware, download and run the Sysinternals Coreinfo utility, with the -v switch, from an elevated command prompt. This action shows whether the processor supports virtualization and whether it supports Second Level Address Translation (SLAT), which Intel calls Extended Page Tables (EPT) and AMD calls Rapid Virtualization Indexing (RVI). The output in Figure 1 shows that Intel hardware-assisted virtualization is enabled, which is all we need to get started. SLAT is not required for Hyper-V to function, but it does improve performance. So the use of SLAT is preferred when possible and is crucial for virtual-desktop workloads such as virtualized Remote Desktop Services servers and virtual desktop infrastructure (VDI) environments.
C:\>coreinfo -v

Coreinfo v3.2 - Dump information on system CPU and memory topology
Copyright (C) 2008-2012 Mark Russinovich
Sysinternals - www.sysinternals.com

Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
Intel64 Family 6 Model 15 Stepping 11, GenuineIntel
HYPERVISOR      -       Hypervisor is present
VMX             *       Supports Intel hardware-assisted virtualization
EPT             -       Supports Intel extended page tables (SLAT)
Windows Server 2012 doesn't have any of the supported virtual processor–to–logical processor ratio limitations that were present in previous versions. (An 8:1 ratio was supported in Windows Server 2008 R2 for VMs running server OSs.) Basically, if the server is handling the load to your satisfaction, that's good enough for Microsoft!
The amount of memory required depends completely on the amount that you want to allocate to VMs. I generally carve out around 2GB of memory for the virtualization host, then base additional memory on the amount I need for VMs. For large-scale virtualization environments, servers with 96GB or 192GB of memory are common. But in a lab environment, you need only enough to run your desired virtual load.
Each VM has one or more VHDs. For Windows Server 2012, use the new VHDX format, which not only supports 64TB VHDs (up from 2TB with the old format) but also has been re-architected to offer near bare-metal disk performance. This is true even for dynamic VHDs, which use up only a small amount of disk space initially and grow the file as data is written to the VHD. You also have the option to create a fixed-size VHD. This option is typically used in production environments, both for legacy performance reasons and to avoid the possibility of running out of physical disk space. That's something that can happen as dynamic VHDX files expand, if proper monitoring is not in place to track the actual physical disk space used. Use the old VHD format only if you need compatibility with older Hyper-V servers, such as Windows Server 2008 R2.
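If you want to experiment with VHDX creation from PowerShell, the Hyper-V module's New-VHD cmdlet can create both flavors of disk. Here's a minimal sketch; the paths and sizes are placeholders, and the cmdlet is available once the Hyper-V role (covered later in this article) is installed:

# Create a dynamic VHDX; the file starts small and grows as data is written
New-VHD -Path D:\VHDs\Data1.vhdx -SizeBytes 500GB -Dynamic

# Create a fixed-size VHDX; all space is allocated up front
New-VHD -Path D:\VHDs\Data2.vhdx -SizeBytes 500GB -Fixed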
VHDX files can be stored on locally attached storage (i.e., internal disks, in lab environments), although ideally you should use external storage, such as a SAN. New to Windows Server 2012 is the ability to use an SMB 3.0 file share to store and run VMs. Using external storage simplifies the backup of virtual environments, and as you increase the number of servers, external storage enables higher utilization of disk space because a central pool is easier to manage. External storage is also required if you are going to use Failover Clustering to group multiple hosts into a cluster, allowing VMs to move easily between hosts and automatically restart if a host fails. (Shared-nothing live migration also allows VMs to move, with no downtime, outside of a cluster, as previously mentioned.) Clusters are also great for maintenance: VMs can be moved to another host (using live migration, which prevents downtime to the VM) while the original host is patched and rebooted, then moved back while the next node is evacuated of VMs, patched, and rebooted. Every host in the cluster can be patched, without any downtime to VMs. Windows Server 2012 actually features one-click patching of an entire cluster, using this process, through its Cluster-Aware Updating feature.
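As a quick sketch of the SMB 3.0 option: assuming you already have a file share on which the Hyper-V host's computer account has full control (the server and share names here are hypothetical), you simply point the new VM's paths at the share:

# Store a new VM's configuration and disk on an SMB 3.0 file share
# (\\FileServer1\VMStore is a hypothetical share with appropriate permissions)
New-VM -Name TestVM -MemoryStartupBytes 1GB -Path \\FileServer1\VMStore `
    -NewVHDPath \\FileServer1\VMStore\TestVM\TestVM.vhdx -NewVHDSizeBytes 60GB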
What About Network Connections?
Before I talk about the number of network adapters you need, let's review how to use them:
First, you must be able to manage the Hyper-V host. Therefore, you need a management connection to communicate over the network.
Second, the VMs most likely need connectivity to the outside world. Private virtual switches can allow communications between VMs only, and internal virtual switches can allow communications between VMs and the Hyper-V host, but neither provides communications to the outside world. You therefore need a network adapter for VM traffic. In a production environment, you likely have at least two network adapters for VM traffic; you can team them to create a single load-balanced, fault-tolerant connection. The option exists to share the network adapter used for VMs with the management OS; in a lab environment, you could use this solution. But ideally, you should separate the management traffic and the virtual network switch that manages VM traffic. If a problem occurs with the virtual switch, you don't want to lose access to the server.
Third, you need a method for the hosts in a cluster to communicate for internal purposes, such as various IsAlive messages. Typically, this method is a separate network (although networks that are used for other purposes can—and will—be used if your cluster network is unavailable). In addition to cluster-heartbeat traffic, the cluster network is also used for cluster shared volume (CSV) traffic. This use allows all the cluster hosts to simultaneously access the same set of NTFS LUNs. The CSV traffic typically consists of only metadata changes. However, in some scenarios all storage traffic for certain hosts uses this network. So when using CSV, you should carve out a separate network for the cluster.
Fourth, you need a dedicated network to ensure a timely migration of VMs between Hyper-V hosts. So you need to allocate a network for live migration.
Fifth, if you use iSCSI for storage access, then you need a separate network for iSCSI communication.
This demand for five network connections doesn't take into account the use of multiple network adapters for VM traffic or VM teaming (for load balancing and high availability). Nor does it consider the use of multiple iSCSI network connections or Microsoft Multipath I/O (MPIO) for added fault tolerance.
This scenario assumes that you're using 1Gbps networks. The situation is different if you use 10Gbps. There is no sense in having a dedicated 10Gbps network connection for management traffic or CSV traffic. Production environments with 10Gbps likely have two connections, so team them for fault tolerance and then use Quality of Service (QoS) to reserve enough bandwidth for each traffic type, in case of contention. Microsoft details these recommendations in its "Hyper-V: Live Migration Network Configuration Guide."
You can use the same approach for 1Gbps connections. Team your connections and use QoS to ensure bandwidth for different traffic types. Another option: Some new platforms have converged fabrics with huge bandwidth pipes that can be virtually carved up into virtual network and storage adapters.
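As a rough sketch of that approach (the adapter names and weight values here are illustrative, not recommendations), you can build the team and the QoS reservations entirely in PowerShell:

# Team two physical adapters into one fault-tolerant connection
New-NetLbfoTeam -Name HostTeam -TeamMembers NIC1,NIC2

# Create a virtual switch on the team with weight-based bandwidth management
New-VMSwitch -Name TeamSwitch -NetAdapterName HostTeam -MinimumBandwidthMode Weight

# Add host virtual adapters for each traffic type and reserve bandwidth shares
Add-VMNetworkAdapter -ManagementOS -Name Management -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name LiveMigration -SwitchName TeamSwitch
Set-VMNetworkAdapter -ManagementOS -Name Management -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name LiveMigration -MinimumBandwidthWeight 20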
Which OS Should I Use?
When you have processor, memory, disk, and network worked out, all you need is an OS. But should you use Windows Server 2012 Standard, Windows Server 2012 Datacenter, or the free Microsoft Hyper-V Server 2012? From a Hyper-V feature perspective, all are identical. All three OSs have the same limits, clustering capabilities, and features. The decision depends entirely on which OSs you will be running in the VMs on the Hyper-V host.
If those VMs will run Windows Server, do you intend to freely move the OS instances (i.e., VMs) between hosts? Hyper-V Server 2012 doesn't include any Windows Server guest OS instance rights. That makes sense, given that the OS is free, and it makes Hyper-V Server 2012 a great choice if you aren't running the Windows Server OS in any of the VMs. If you're running a VDI environment with Windows 8 VMs, or if you're running only Linux or UNIX VMs, then use Hyper-V Server 2012.
When you want to run the Windows Server OS in the VMs, Windows Server 2012 Standard includes the right to run two Windows Server VMs. If I wanted to run four VMs with Windows Server, I could buy two copies of Windows Server 2012 Standard for the host. Note that you can still run other VMs with a non-Windows Server OS on the same server; there's no limit on the number of VMs, just a limit on the number of Windows Server guest OS instances, and that licensing applies whether the hypervisor is Hyper-V, VMware, or anything else.
If you want to run numerous Windows Server OS instances, Windows Server 2012 Datacenter includes the right to run an unlimited number of VMs running the Windows Server OS. Consider the price of the Standard and Datacenter versions for your environment. For example, if you're running six or fewer VMs with Windows Server, it's less expensive to buy multiple copies of Standard than to buy Datacenter. But if you're clustering hosts and want to move the VMs, then you have another consideration: Windows Server licenses are tied to a specific piece of hardware and can be moved between servers only every 90 days.
Let's take the example of a branch office with two clustered virtualization hosts. (Again, we're talking about licensing of Windows, not of Hyper-V, so everything I'm talking about is independent of the hypervisor you use.) On each Hyper-V host, there are typically four VMs running, so I need two copies of Windows Server 2012 Standard for each host. But the hosts are clustered so that I can move the VMs between servers. Maybe as part of patching, I want to move all the VMs to host B (which would then end up running eight VMs), while I patch and reboot host A. I then want to move all eight VMs to host A while I patch and reboot host B. I can't do this unless I wait 90 days between the time I move the VMs to host B and the time I move them to host A. And I'd need to wait another 90 days before moving half the VMs back to host B! Plus, if a host fails, I can't move migrated VMs back to the fixed host for 90 days. If I want to move my VMs around freely, then I need to have enough licenses to cover the high watermark of all the VMs that might ever run on one box—eight VMs. To do that, I need four copies of Standard for each server—at which point it makes more sense to just go with Datacenter.
Companies often misunderstand this comparison, which is an important consideration as you plan your licensing. You'll generally use Windows Server 2012 Standard for physical deployments or lightly virtualized environments; you'll typically use Windows Server 2012 Datacenter for true virtualization environments.
The next consideration is whether to use the Server Core or Server with a GUI installation (previously known as a full installation). In general, use the Server Core configuration level, which requires less patching and therefore fewer reboots. However, the good news is that with Windows Server 2012, you can change the configuration level at any time, with only a reboot required. So, especially if you are new to Windows Server 2012, install the Server with a GUI configuration initially. Perform the configuration, get comfortable, then remove the graphical shell and management tools, run your server at the Server Core configuration level, and manage it remotely from a Windows 8 system. (You can learn more about the different configuration levels in "Windows Server 2012 Installation Options.")
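When you're ready to drop the graphical shell, the conversion is just a feature removal and a reboot. A minimal sketch:

# Remove the graphical shell and management tools to run at the
# Server Core configuration level (the -Restart switch reboots to finish)
Uninstall-WindowsFeature Server-Gui-Shell,Server-Gui-Mgmt-Infra -Restart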
Installing Hyper-V
You've installed Windows Server 2012, applied the most recent patches, connected to storage, renamed your network adapters to enable easy identification (and teamed them, if required), and configured IP addresses. The next step is to enable the Hyper-V role. This task can be performed graphically, through Server Manager, using the same process that you use to add any other role or feature. Or you can use Windows PowerShell:
Install-WindowsFeature Hyper-V -IncludeManagementTools -Restart
The benefit of using the Server Manager GUI is that it also prompts you to create a virtual switch on a selected network adapter in the server. The virtual network adapters that you configure on your VMs connect to this switch to access the external network. By default, a virtual network adapter is also created on the host OS so that the OS can use that adapter for VM traffic. If you have a dedicated management network adapter, disable the shared adapter after you complete the installation process.
Creating the virtual switch after installation is a straightforward process and can be performed with PowerShell. The choice of whether to use Server Manager (local or remote) or PowerShell is primarily a matter of preference. If you are automating the deployment of Hyper-V, use PowerShell, because you want to avoid manual steps.
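For example, here's a minimal sketch of creating an external switch in PowerShell; the adapter name is a placeholder for whatever you renamed your VM-traffic adapter to:

# Bind an external virtual switch to the physical adapter reserved for
# VM traffic; $false keeps the management OS off this switch
New-VMSwitch -Name "External Switch" -NetAdapterName "VM Traffic" -AllowManagementOS $false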
In the accompanying video, I quickly walk through the entire Hyper-V installation process, showing the changes to networking.
The following are the basic steps for using Server Manager:
Log on to the server that will be the Hyper-V host with an account that has administrative credentials, and launch Server Manager. Alternatively, launch Server Manager remotely with an account that has administrative credentials on that server.
Select Add Roles and Features from the Manage menu.
Click Next on the Before You Begin page.
On the Installation Type page, choose the Role-based or feature-based installation type and click Next.
On the Server Selection page, from the list of servers in the server pool, choose the server on which to install the Hyper-V role and click Next.
Under Server Roles, select Hyper-V and accept the option to automatically install the management tools.
On the Create Virtual Switches page, which Figure 2 shows, select the network adapter that you want to use for VM traffic and click Next.
Leave the check box for the option to enable live migrations cleared and click Next. Live migration can easily be added later (a PowerShell sketch follows these steps).
Choose new locations for VM storage, or accept the defaults, and click Next.
Select the check box to enable automatic restart of the server if required, and click Yes in the displayed confirmation box. Click the Install button.
Figure 2: Selecting the Network Adapter to Use for VM Traffic
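If you later want to enable live migration without revisiting the GUI, here's a minimal PowerShell sketch. The subnet is a placeholder for your dedicated live migration network, and the Kerberos option requires constrained delegation to be configured in Active Directory:

# Allow this host to send and receive live migrations
Enable-VMMigration

# Restrict live migration traffic to the dedicated migration subnet
Add-VMMigrationNetwork 192.168.10.0/24

# Optional: use Kerberos instead of CredSSP for migration authentication
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos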
After the server restarts, you are the proud owner of a Hyper-V virtualization host. Running
bcdedit /enum
from a command prompt shows that the hypervisor is now autoloading at system startup, as the output in Figure 3 shows.
C:\Users\Administrator>bcdedit /enum

Windows Boot Manager
--------------------
identifier              {bootmgr}
device                  partition=\Device\HarddiskVolume1
description             Windows Boot Manager
locale                  en-US
inherit                 {globalsettings}
bootshutdowndisabled    Yes
default                 {current}
resumeobject            {424ad143-811e-11e2-abd4-ee945fee3b56}
displayorder            {current}
toolsdisplayorder       {memdiag}
timeout                 30

Windows Boot Loader
-------------------
identifier              {current}
device                  partition=C:
path                    \Windows\system32\winload.exe
description             Windows Server 2012
locale                  en-US
inherit                 {bootloadersettings}
recoverysequence        {424ad145-811e-11e2-abd4-ee945fee3b56}
recoveryenabled         Yes
allowedinmemorysettings 0x15000075
osdevice                partition=C:
systemroot              \Windows
resumeobject            {424ad143-811e-11e2-abd4-ee945fee3b56}
nx                      OptOut
hypervisorlaunchtype    Auto
Launch Server Manager. Under Tools, choose Hyper-V Manager and navigate to your server. Note that there are no VMs. However, if you click the Virtual Switch Manager action, you'll see a single virtual switch that has the name of the network adapter controller; for example, Realtek PCIe GBE Family Controller, as shown in Figure 4.
Figure 4: Changing the Virtual Switch Name and Sharing It with the Management OS
I recommend renaming the virtual switch to something useful, such as External Switch, to represent the network to which it connects. Using consistent naming for switches across your Hyper-V hosts is important: If you move VMs between hosts, a switch with the same name must exist on both the source and target hosts for the VM to maintain its network connectivity. Also, as Figure 4 shows, clear the check box for the Allow management operating system to share this network adapter option. That option is needed only if you don't have a separate network adapter for management of the host, or if you have only one network adapter that is shared for VM and host traffic. You can also use this interface to create additional switches, as required.
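The same cleanup can be scripted. A sketch, assuming the default switch name shown in Figure 4:

# Give the switch a consistent, descriptive name
Rename-VMSwitch "Realtek PCIe GBE Family Controller" -NewName "External Switch"

# Stop sharing the adapter with the management OS (only safe if the host
# has a separate network adapter for management traffic)
Set-VMSwitch "External Switch" -AllowManagementOS $false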
You are now ready to start creating VMs on your standalone host. For maximum capability, cluster multiple Hyper-V hosts and enable live migration. (I go through the complete live migration setup process in the article "Shared-Nothing VM Live Migration with Windows Server 2012 Hyper-V.")
There are likely some other steps that you should at least consider on your new Hyper-V server:
If you enabled Windows Update, you probably don't want it to automatically reboot your server. If you're using an enterprise patch-management solution, make sure to define a maintenance window, outside of business hours, during which your Hyper-V server can reboot. While your Hyper-V server reboots, all your VMs will be unavailable.
If you run malware protection on your Hyper-V server, you should exclude certain files and folders from scanning, for performance reasons. These exclusions are documented in the Microsoft article "Virtual machines are missing, or error 0x800704C8, 0x80070037, or 0x800703E3 occurs when you try to start or create a virtual machine."
Make sure to back up your VMs. The good news with Windows Server 2012 is that as long as your VMs are backed up, importing those VMs from backup into a new Hyper-V server is easy. You don't need to export the VMs first.
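A minimal sketch of such an import; the path and GUID-named configuration file are hypothetical, so point Import-VM at the actual VM configuration file from your restored backup:

# Register a restored VM in place, with no prior export required
Import-VM -Path 'D:\Restore\TestVM\Virtual Machines\2D5EECDA-8ECB-4FFB-93E3-4E1F0C4D7456.xml'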
If you have a Hyper-V server with a large amount of memory, then Windows by default creates a large pagefile. You can safely reduce this file manually, because most of the memory is allocated to VMs rather than used by the host OS itself; a 4GB pagefile for the Hyper-V host is typically sufficient. Use the Control Panel System applet: Go to the Advanced tab, click the Performance Settings button, go to the Advanced tab again, click the Change button under Virtual Memory, then set a custom size. Click the Set and OK buttons on all the open dialog boxes.
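If you'd rather script the pagefile change than click through the applet, here's a sketch using the CIM cmdlets (sizes are in MB; adjust to taste):

# Turn off automatic pagefile management on the host
Get-CimInstance Win32_ComputerSystem |
    Set-CimInstance -Property @{AutomaticManagedPagefile=$false}

# Set a fixed 4GB pagefile; if no pagefile setting instance exists yet
# after disabling automatic management, reboot and rerun this part
Get-CimInstance Win32_PageFileSetting |
    Set-CimInstance -Property @{InitialSize=4096; MaximumSize=4096}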
A Whirlwind Tour
This has been a whirlwind tour of Hyper-V installation. The process is simple, but remember these considerations when choosing your hardware and your configuration level. In a future article, we'll look at the process of creating VMs on your new host.