Windows Server 2012: Shared Storage Live Migration
How to set up Shared Storage Live Migration in Windows Server 2012
May 21, 2013
Microsoft first added live migration to Hyper-V in Windows Server 2008 R2, and significantly enhanced it in Windows Server 2012. The management feature helps reduce planned downtime and provides a foundation for the dynamic data center by allowing you to move virtual machines (VMs) between Hyper-V hosts with no downtime at all.
You can use live migration to move VMs from a Hyper-V host that needs maintenance to another Hyper-V host. Then, when the maintenance is complete, you can move the VMs back to the original host, all with no interruption of end-user services. Live migration also enables you to build a dynamic data center that responds to high resource-utilization periods by automatically moving VMs to hosts with greater capacity, helping VMs meet service level agreements (SLAs) and provide end users with high levels of performance, even during periods of heavy resource utilization.
The original implementation of live migration was limited to performing a single live migration at a time between two Hyper-V hosts. All subsequent live migrations were queued up. In addition, live migration required a Windows Failover Cluster and a shared storage solution. With Windows Server 2012 Hyper-V, Microsoft enhanced live migration in a number of ways. First, Microsoft added the capability to perform live migrations without a cluster or shared storage. In addition, Server 2012 Hyper-V can perform multiple live migrations simultaneously. In this article, I show how to configure Server 2012 Hyper-V to perform Shared Storage Live Migrations. Other articles in this series show you how to set up Server 2012 Hyper-V to perform Server Message Block (SMB) and Shared-Nothing Live Migrations.
Shared Storage Live Migration is the fastest and most seamless of the three live migration methods. However, shared storage also requires more infrastructure and configuration than the other live migration options. In this article, I guide you through the process of setting up Shared Storage Live Migration. First, I explain how live migration works. Then I cover some of the hardware and software prerequisites that must be in place. Finally, I walk you through the important points of the Hyper-V and Failover Clustering configuration that must be performed to enable live migration.
Overview of Shared Storage Live Migration
Live migration takes place between Hyper-V hosts. Essentially, a VM's configuration and memory are first copied from the source Hyper-V host to the target Hyper-V host. After the initial copy, a memory synchronization process updates the target VM with the changes that occurred on the source VM during the copy. After the memory is synchronized, the user is cut over to the VM running on the target Hyper-V host. The VM on the new host can immediately access its virtual hard disk (VHD) files stored on Cluster Shared Volumes (CSVs). Figure 1 shows an overview of the live migration architecture.
Figure 1: Shared Storage Live Migration Overview
When you initiate a live migration, the following steps occur:
A VM configuration file is created on the target server.
The source VM’s initial memory state is copied to the target.
Changed memory pages on the source VM are tagged and copied to the target.
This process continues until the number of changed pages is small.
The VM is paused on the source node.
The final memory state is copied from the source VM to the target.
The VM is resumed on the target.
An Address Resolution Protocol (ARP) update is issued so that network devices redirect traffic for the VM's address to the target host.
Requirements for Shared Storage Live Migration
From a hardware standpoint, you must have a minimum of two physical servers, each running Server 2012 with the Hyper-V virtualization role installed. This means you must be using the Server 2012 Standard or Datacenter edition. You can't use the Server 2012 Essentials or Foundation editions because they don't support the Hyper-V virtualization role. All servers also must support x64 virtualization. It’s also recommended that the processors provide support for Second-Level Address Translation (SLAT). All modern servers from tier-one OEMs such as HP, Dell, IBM, and Cisco support these standards.
One point to be aware of, however, is that all the physical servers must use processors from the same manufacturer. In other words, they must all be Intel or they must all be AMD. Although this requirement might change at some point in the future, at this time you can’t perform a live migration of a VM from a Hyper-V host system with an AMD processor to a Hyper-V host system with an Intel processor. Although you can’t mix processor manufacturers, it’s important to note that you don’t need to have matching processors or memory configurations in the systems acting as live migration hosts. The host systems can have different processors with different numbers of cores and different amounts of memory from those within the source systems. However, you should be sure that the host servers have the processing capacity to run the workloads of the VMs that are live migrated.
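One related setting to know about: within a single manufacturer, you can accommodate different processor generations by enabling processor compatibility mode, which restricts the VM to a processor feature set common to all hosts. Here's a minimal sketch using the Hyper-V PowerShell module; the VM name is a placeholder, and the VM must be off when you change this setting:

# Enable processor compatibility mode so the VM can migrate between
# hosts with different processor generations from the same manufacturer
Set-VMProcessor -VMName MyVM -CompatibilityForMigrationEnabled $true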
In addition, you need a shared storage subsystem. This can be either an iSCSI or Fibre Channel SAN. If you're using an iSCSI SAN, it must support SCSI-3 persistent reservations. This shared storage solution must be accessible to all the systems performing live migration.
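If you're connecting the nodes to an iSCSI SAN, you can do so from PowerShell as well as from the iSCSI Initiator applet. A sketch, assuming a target portal at 192.168.1.50 (substitute your own SAN's address):

# Run on each cluster node: start the iSCSI initiator service,
# register the target portal, and connect persistently so the
# connection is restored after a reboot
Start-Service msiscsi
Set-Service msiscsi -StartupType Automatic
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.50
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true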
It’s also recommended that each server have a minimum of three physical network adapters. One network adapter is used by the VMs for external-network connectivity, another network adapter is used for VM management, and the third network adapter is used for the live migration process. In most production environments, you would need more network adapters to handle the combined bandwidth requirements of the workloads running in the VMs.
In addition to the Hyper-V role installed on the Server 2012 systems, Shared Storage Live Migration requires a Windows Server Failover Cluster; therefore, you must install the Failover Clustering feature and have a minimum of two nodes in your Server 2012 cluster. Server 2012 Failover Clusters support a maximum of 64 nodes. For step-by-step instructions on setting up a Server 2012 Failover Cluster, refer to "Windows Server 2012: Building a Two-Node Failover Cluster." You also can watch a short video in which I step you through the process of building a two-node Server 2012 Failover Cluster in the video "Windows Server 2012: Creating a Two-Node Cluster".
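If you prefer to script these prerequisites, the following sketch installs the roles and builds the cluster. The node names match my lab systems; the cluster name and IP address are example values you would replace with your own:

# Run on each node: install Hyper-V and Failover Clustering (restarts the server)
Install-WindowsFeature Hyper-V, Failover-Clustering -IncludeManagementTools -Restart

# Run once from either node: validate the configuration, then create the cluster
Test-Cluster -Node WS2012-N1, WS2012-N2
New-Cluster -Name HVCLUSTER -Node WS2012-N1, WS2012-N2 -StaticAddress 192.168.100.200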
Creating Cluster Shared Volumes (CSVs)
After the cluster is created, create one or more CSVs on the cluster. Technically, CSVs aren't required for Shared Storage Live Migration, but using them makes the whole process easier and lets the live migrations happen much more quickly. The CSV feature lets multiple cluster nodes simultaneously access the shared storage locations. Unlike in Windows Server 2008 and Windows Server 2008 R2, Server 2012 CSVs are enabled by default. However, you still need to select the cluster storage that will be used for CSVs.
To select a CSV's clustered storage location, open the Failover Cluster Manager, select the cluster, and expand the Storage node. This displays the Disks and Pools nodes. Select the Disks node to display the available cluster disks. For this example, I already added the disks for the CSV to the cluster. If you need to add disks to your cluster, select the Add Disk option in the Actions pane. To be added to the cluster, the disks must be visible to all the cluster nodes in Windows Disk Management. The storage for a CSV has to be visible to the cluster, and it can't be used for other purposes such as clustered applications or the cluster quorum. You can get more information on how to add disk storage to a cluster at "Windows Server 2012: Building a Two-Node Failover Cluster." To use an existing cluster disk for your CSV, right-click the disk in the Failover Cluster Manager and select the Add to Cluster Shared Volumes option from the context menu (Figure 2).
Figure 2: Adding Cluster Shared Volumes
Select the Add to Cluster Shared Volumes option to convert the disk to a CSV. The conversion process takes only a couple of seconds, and you can convert multiple disks. In the example, I converted Cluster Disk 1 and Cluster Disk 3 to CSVs. These disks connect to LUNs on my iSCSI back end and were already in use for VM storage.
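You can perform the same conversion with the FailoverClusters PowerShell module. A sketch using the disk names from my example (yours will likely differ):

# Add any disks that are visible to the nodes but not yet in the cluster
Get-ClusterAvailableDisk | Add-ClusterDisk

# Convert the cluster disks to Cluster Shared Volumes
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Add-ClusterSharedVolume -Name "Cluster Disk 3"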
Creating CSVs also results in the creation of a mount point on all cluster nodes. By default, the first mount point is labeled C:\ClusterStorage\Volume1. Figure 3 shows an example of mount points for two CSVs.
Figure 3: Cluster Shared Volumes Mount Point
The C:\ClusterStorage\Volume1 mount point was created when I converted Cluster Disk 1 to a CSV. The C:\ClusterStorage\Volume2 mount point was created when I converted Cluster Disk 3 to a CSV. You can confirm the mount points from PowerShell, as shown in the sketch below. Once the CSVs are created, the next step is to store VMs on them.
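Here's a quick way to list each CSV and its mount point (a minimal sketch using the FailoverClusters module):

# Display each CSV's name and its mount point under C:\ClusterStorage
Get-ClusterSharedVolume | ForEach-Object {
    "{0} -> {1}" -f $_.Name, $_.SharedVolumeInfo.FriendlyVolumeName
}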
Creating VMs on Cluster Shared Volumes
At this point, failover clustering is configured on all nodes in the cluster and the Cluster Shared Volumes feature has been added to the cluster storage, allowing all nodes to simultaneously access the CSV storage. The next step is to create VMs or move existing ones to CSV storage. If you have an existing VM, you can move it and its artifacts to the CSV using Hyper-V Manager's move options. If you’re creating a new Hyper-V VM, you can use Hyper-V Manager, PowerShell, or System Center Virtual Machine Manager. To create a new VM using Hyper-V Manager, open Server Manager, and click the Administrative Tools, Hyper-V Manager option. Next, select New, then Virtual Machine from the Hyper-V Manager Action pane to start the New Virtual Machine wizard. Figure 4 shows the wizard dialog box, labeled Specify Name and Location.
Figure 4: Adding a New Virtual Machine to the Cluster Shared Volume
The new VM is named ORPORTVM1 (Figure 4). Also note that the value for the VM location is set to the Cluster Shared Volumes mount point: C:\ClusterStorage\Volume1. This creates the VM configuration files on the shared storage. Click Next to assign RAM to the VM. Click Next again to select the network connection for the VM. Assigning a network to the VM is optional. However, if you do select an external network, be sure that the external network connection is named the same on all your Hyper-V nodes. In my case, I used the external network name External Virtual Network on all my Hyper-V cluster nodes. Click Next to display the Connect Virtual Hard Disk dialog box (Figure 5).
Figure 5: Adding New Virtual Hard Disks on the Cluster Shared Volume
Again, it's important to create the VHD files on the Cluster Shared Volumes storage. Initially, the dialog box displays the Hyper-V Manager default values for name and location. I used the value ORPORTVM1.vhd for the VHD file and changed the location to C:\ClusterStorage\Volume1. Click Next to specify the guest OS installation options. All guest OSs, including Linux, can take advantage of live migration. The rest of the process is exactly like creating a regular VM. When you complete the New Virtual Machine Wizard, the VM is created on the Cluster Shared Volumes storage. The next step is to start the VM and install the guest OS and the application that you want to run on the VM.
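As an alternative to the wizard, you can create the same VM with PowerShell. A minimal sketch matching the values above; the memory and disk sizes are example values I chose for illustration:

# Create the VM with its configuration files and its VHD on the CSV
New-VM -Name ORPORTVM1 -Path "C:\ClusterStorage\Volume1" `
    -MemoryStartupBytes 2GB `
    -NewVHDPath "C:\ClusterStorage\Volume1\ORPORTVM1.vhd" `
    -NewVHDSizeBytes 60GB `
    -SwitchName "External Virtual Network"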
Creating the Highly Available VM Role
To make the VM highly available, go to Administrative Tools and open the Failover Cluster Manager console. Navigate to the Roles node under the cluster name and right-click it to display the context menu (Figure 6).
Figure 6: Adding a New VM Role
Select the Configure Role option to start the High Availability wizard. The first dialog box displayed by the High Availability wizard is the Select Role dialog box (Figure 7).
Figure 7: Selecting the Virtual Machine Role
Choose Virtual Machine from the list of roles displayed on the Select Role dialog box, as shown in Figure 7. Click Next to display the Select Virtual Machine dialog box (Figure 8).
Figure 8: Selecting the Virtual Machine
All VMs on both cluster nodes are displayed in the Select Virtual Machine dialog box. Scroll through the list of VMs until you find the one you want to enable for live migration. I selected the VM ORPORTVM1 that I created earlier. The VM can’t be running while you perform this operation—it must be in the off or saved state to complete the wizard. You can use the Shutdown or Save options underneath the list in the dialog box to put the VM into the required state.
Select the check box in front of the VM name and click Next until you complete the wizard. A Confirmation screen is displayed, and then the Summary dialog box reports the status of the add role operation (Figure 9).
Figure 9: Summary Dialog Box Reporting the Status of the Add Role Operation
If you see Success in the description field, as shown in Figure 9, then the VM is successfully enabled for live migration. If not, review the VM properties and make sure all the VM assets can be accessed on all nodes in the cluster. If there's an error, the most common cause is that some of the VM's files or objects can't be accessed by both physical nodes; for example, the VM might be using the host's physical DVD drive. After the new role is added, it's listed in the Failover Cluster Manager's Roles pane, as shown in Figure 10.
Figure 10: New VM Role
In Figure 10, you can see that the VM ORPORTVM1 is running and that the Current Owner is node WS2012-N2.
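If you want to script the role creation instead, the equivalent PowerShell is brief. A sketch (remember that the VM must be off or saved first):

# Stop the VM, then register it as a highly available clustered role
Stop-VM -Name ORPORTVM1
Add-ClusterVirtualMachineRole -VirtualMachine ORPORTVM1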
Initiating Live Migration
That’s all there is to configuring the Hyper-V live migration environment. At this point, you can initiate a live migration using the Failover Cluster Manager. Note that to perform Shared Storage Live Migration, you must use either the Failover Cluster Manager or Virtual Machine Manager. You can't use the Hyper-V Manager. To start a live migration, expand the Roles node and right-click the VM role you want to live migrate. This displays the context menu shown in Figure 11.
Figure 11: Initiating a Live Migration in the Failover Cluster Manager
Select the Move option displayed in the upper portion of the context menu. A fly-out menu prompts you for the type of move operation: Live Migration, Quick Migration, or Virtual Machine Storage migration. Select the Live Migration option as shown in Figure 11, and another fly-out menu prompts you for the target node. You can choose either Best Possible Node or Select Node. Because this example is a two-node cluster, both selections produce the same result; in larger clusters, there can be as many choices as there are nodes, up to the 64-node maximum. Server 2012's placement optimization ranks the suitable live migration targets according to their available capacity. In this example, I selected the Best Possible Node option to kick off the live migration.
The Server 2012 Failover Cluster Manager doesn’t give you a lot of feedback about the status of the live migration process, but in my case the live migration took only a few seconds. The length of time it takes depends on the size and activity of the VM, as well as the speed and activity of the network connection between the Hyper-V host systems. Typically, my network live migrations take between a few seconds and a minute. When the live migration has completed, the summary pane is redisplayed and the Current Owner value is updated with the name of the target node. In my example, the Current Owner is listed as WS2012-N1 following the live migration.
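You can also initiate the same live migration from PowerShell. A sketch using my lab's role and node names; note that the clustered role name defaults to the VM name:

# Live migrate the clustered VM role to the specified node
Move-ClusterVirtualMachineRole -Name ORPORTVM1 -Node WS2012-N1 -MigrationType Live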
Live Migration Reduces Planned Downtime
Live migration reduces planned downtime for virtual machines and—when combined with technologies such as Dynamic Optimization—provides the foundation for the dynamic data center and private cloud. In this article, I demonstrated how to set up Shared Storage Live Migration on an existing two-node cluster. You might also want to check out the accompanying short video, in which I step you through the process of configuring live migration on a Server 2012 Failover Cluster.