Configuring Highly Available iSCSI Storage for VMware ESX Server 4.0
The process should be easier than it is! Here are the step-by-step instructions you need.
September 16, 2010
You have experience with VMware ESX Server. Who doesn’t? But suppose you’re tasked with the exciting job of adding a new VMware ESX server to your cluster. On top of that, you need to create a new iSCSI LUN that this server will use for VM storage. Although you use VMware ESX Server every day to administer your virtual machines (VMs), you don’t build new VMware ESX servers very often, and you’re rusty on the skills necessary to connect a new server to your iSCSI SAN. The process isn’t necessarily challenging, but some of the steps aren’t obvious—and completing them in the correct order is important. This article will help get you going.
The Sample Environment
Before getting into the step-by-step instructions, let’s take a look at a sample environment. Figure 1 shows a graphical representation of an environment in which two servers are running StarWind Software’s iSCSI SAN software. I’m using StarWind’s solution in my example and in this article, but understand that every iSCSI SAN will offer a unique management console. The experience will be different, but the steps will be similar. (You can download a copy of StarWind’s product at www.starwindsoftware.com. It installs to any Windows server and will give you a general idea of how SAN configuration works.)
Figure 1: Our sample VMware ESX environment
Configuring the LUN
You need a redundant iSCSI LUN, so you'll create a LUN that's mirrored between the two SAN servers.
In the StarWind Management Console, ensure that you’ve added and connected to both hosts. If you're using the trial version of StarWind, the default logon and password are root and starwind, respectively.
Right-click Targets and select Add Target. In the resulting screen, you’ll be asked to provide a Target Alias and Target Name. The Target Alias is the friendly name for the iSCSI LUN you intend to create and is generally used only on the SAN device. The Target Name will be the iSCSI Qualified Name (IQN) used for the server-to-storage connection; it's the name you’ll find yourself seeing inside VMware ESX. You can safely leave the check box next to Target Name blank, allowing StarWind to create that IQN for you.
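For reference, iSCSI Qualified Names follow the general pattern iqn.<year-month>.<reversed-domain>:<unique-name>. An auto-generated Target Name will therefore look something like the line below; this particular value only illustrates the format and isn't necessarily the exact name StarWind will generate for you:

iqn.2008-08.com.starwindsoftware:sanserver1-vmstore1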
Set Storage type to Hard Disk. Then set Device type to Advanced Virtual, and choose a High Availability device. These high-availability options are available if you're using the Enterprise HA edition of the StarWind SAN software. Creating a highly available LUN between two servers requires configuring that LUN with two partners. The first partner is the server you used for the initial target creation. The second server is configured as a partner server. In this screen, provide the partner host's name as well as a username and password. You can use the same default username and password you used earlier, as long as they haven't been changed.
Creating this partner connection also requires a Partner Target Alias and Partner Target Name, which must be different from the names you used in the earlier step. By default, the console appends the word Partner to your previous name.
You need to define the location where StarWind will store the file that will eventually become your VMware ESX LUN, as well as how large you want that LUN to be. Figure 2 shows those configurations for both servers. Although the figure shows the files stored on the two servers' C drives, your production configuration should obviously use a separate data drive.
You also need to configure the data synchronization channel parameters, which are effectively the target's network settings for the LUN. Figure 3 shows that an interface is configured for both partners by IP address. You can also identify which partner is primary versus secondary, as well as which port number is used for iSCSI traffic.
The StarWind Management Console next asks how you want to initialize the disks. Because these are brand-new disks, select Clear virtual disks. Click Next through the following series of screens, then Finish to complete the LUN creation. StarWind will require a number of minutes to synchronize between the two servers. Allow this process to complete. You’ll know the process is over when the yellow or green warning symbols next to each target have disappeared.
Figure 2: Configuring LUN storage and size
Figure 3: Configuring data synchronization channel parameters
Remember that your SAN software will have a slightly different series of steps for configuring this LUN. However, these sample steps are useful for setting up a demonstration environment if you don’t yet have an iSCSI SAN or if you’re still learning.
SAN uptime is critical in VMware ESX environments. All of VMware's high-availability features work great, but only if your SAN never goes down. StarWind Software's Enterprise HA edition maintains that always-on SAN through data replication. With it, your virtual environment can survive the loss of either SAN server without causing running VMs to go down.
Configuring Storage NICs
At this point, you've completed half of the connection's configuration. That first half created the LUN and prepared the iSCSI target for a connection from the VMware ESX server. The second half involves configuring the connection on the VMware ESX server itself. So, your next task is to configure two network connections from the ESX server to each of the SAN servers. This redundancy ensures that any single network connection, or even an entire SAN server, can be lost without affecting your running VMs.
Figure 4: Assigning NICs to a new virtual switch
This example's VMware ESX server has two NICs, both of which have been dedicated to iSCSI traffic. These two NICs aren't bonded using traditional network teaming; iSCSI doesn't aggregate its network connections that way. Instead, the two NICs will be bonded using iSCSI multipathing. Unlike traditional NIC teaming, which presents only a single IP address to the outside world, iSCSI multipathing uses individual IP addresses for each connection, at both the initiator and the target. Thus, a server with a multipathed connection to an iSCSI SAN needs multiple assigned IP addresses, each of which connects to an IP address on the SAN storage.
Note that although the SAN storage in this example uses only one IP address, it's doing so for simplicity only. Your production SAN storage should be configured with multiple IP addresses for redundancy, load balancing, and connectivity to multiple storage processors.
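To keep the rest of the walkthrough concrete, here's a hypothetical addressing plan for this example. The subnet and addresses are assumptions only, so substitute whatever your storage network actually uses:

ESX VMkernel port iSCSI 1 (vmk0): 192.168.100.11
ESX VMkernel port iSCSI 2 (vmk1): 192.168.100.12
StarWind SAN server 1 iSCSI target: 192.168.100.21, port 3260
StarWind SAN server 2 iSCSI target: 192.168.100.22, port 3260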
Figure 5: Configuring VMkernel connection settings
The following steps assume that you’ve completed VMware ESX’s initial installation and that the server has been networked appropriately so that it can be managed through the vSphere Client.
Your first task is to configure the network cards that will be used for iSCSI storage. Do so using the vSphere Client, within the Networking link under the Configuration tab. There, click Add Networking and create a new VMkernel connection type. In the next screen, which Figure 4 shows, create a new virtual switch using only one of the NICs that you've identified for storage traffic.
On the next screen, you can label the port group with a friendly name. In Figure 5, you can see that the example’s virtual switch is named iSCSI 1. If your environment uses virtual switch tagging to trunk VLANs to the VMware ESX server, you should also identify the correct VLAN ID in the box. Be aware that your network will need to be properly configured for VLANs to function. The VMware article "Sample configuration of virtual switch VLAN tagging (VST Mode) and ESX" (kb.vmware.com/kb/1004074) outlines the steps required to accomplish this task with your networking equipment.
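If you'd rather tag the port group from the Service Console, esxcfg-vswitch can set the VLAN ID directly. The port group label, VLAN ID, and virtual switch name below are placeholders for this example, so adjust them to match your own configuration:

esxcfg-vswitch -p "iSCSI 1" -v 100 vSwitch1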
Figure 6 shows the wizard’s final screen, on which you'll need to configure your IP settings. Enter the IP address, subnet mask, and (optionally) the VMkernel Default Gateway for the storage connection. Completing this step configures the first NIC.
To add the second NIC, access the properties of the Virtual Switch and select the Network Adapters tab. Click Add to add each additional NIC, then choose Next and Finish.
You must create a VMkernel port for each subsequent NIC. To do so, click back on the Ports tab in the Virtual Switch Properties console and select Add. Choose VMkernel for the connection type and enter the appropriate network label and IP address information for each subsequent NIC.
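As a rough Service Console equivalent of these wizard steps, esxcfg-vswitch and esxcfg-vmknic can add the second port group and its VMkernel port. The switch name, port group label, IP address, and subnet mask shown here are assumptions carried over from the earlier example plan:

esxcfg-vswitch -A "iSCSI 2" vSwitch1
esxcfg-vmknic -a -i 192.168.100.12 -n 255.255.255.0 "iSCSI 2"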
You need to create a 1:1 mapping between NICs and VMkernel ports. By default, all network adapters will appear as active for each VMkernel port on the Virtual Switch. This doesn’t work with iSCSI; iSCSI multipathing requires that you override this default setup so that each port maps to only one corresponding NIC. View the properties of the Virtual Switch again, and select the Ports tab. Select one of the VMkernel ports you just created (labeled in the example as iSCSI 1 and iSCSI 2), click Edit, and access the NIC Teaming tab. There, select the Override vSwitch failover order check box and ensure that only one NIC is set as an Active Adapter. Figure 7 shows how vmnic2 is set as the only active adapter for the iSCSI 2 port. Repeat this step for each NIC, ensuring that each NIC maps to only one port. Figure 8 shows the Virtual Switch configuration for this example’s connection.
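If you want to double-check the layout from the Service Console, the following commands list the virtual switches with their port groups and uplinks, and the VMkernel ports with their IP addresses:

esxcfg-vswitch -l
esxcfg-vmknic -l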
Figure 6: Configuring IP connection settings
Figure 7: Overriding the vSwitch failover order
Figure 8: A fully configured virtual switch for redundant iSCSI traffic
Now, you need to connect the VMkernel ports you just created to the iSCSI initiator. Start by enabling the iSCSI initiator itself. Inside the vSphere Client’s Configuration tab, click Storage Adapters. Scroll through the list of Storage Adapters to find the iSCSI Software Adapter. Select this adapter, and click Properties, then Configure. Select the Enabled box, click OK, then click Close to enable the iSCSI initiator. Back at the vSphere Client, if you access the adapter’s properties again, you’ll see that it's now populated with a Name and Target discovery methods.
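If you prefer to enable the initiator from the command line, the Service Console's esxcfg-swiscsi tool should accomplish the same thing on ESX 4.0; the -e switch enables the software iSCSI initiator, and -q queries its current state:

esxcfg-swiscsi -e
esxcfg-swiscsi -q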
You'll need to use the command line to bind the VMkernel ports you created in the earlier step to the iSCSI initiator. You can run the esxcli commands from the vSphere command-line interface (CLI) or by logging on to the Service Console directly as root. The command syntax is:
esxcli swiscsi nic add -n <vmk port> -d <vmhba adapter>
Take another look at Figure 7. You'll see that the two created ports were labeled vmk0 and vmk1. Now, take another look at the Storage Adapters screen in the vSphere Client. For example, if the iSCSI Software Adapter is listed as vmhba33, you would use the following commands. The first two lines make the connection, and the third line lists the results:
esxcli swiscsi nic add -n vmk0 -d vmhba33
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic list -d vmhba33
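Before moving on to discovery, it's worth confirming that the VMkernel network can actually reach the SAN. A quick sanity check is to vmkping each SAN server's iSCSI address from the Service Console; the addresses below are the hypothetical ones from the earlier example plan:

vmkping 192.168.100.21
vmkping 192.168.100.22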
Connecting NICs to SAN LUNs
Your NICs are now ready to connect to your iSCSI LUN. Recall that the StarWind Software SAN comprises two servers. Both servers will need to be addressed to create the highly available connection.
In the Properties console of the iSCSI initiator, click the Dynamic Discovery tab, then click Add. Enter the IP address for the StarWind Software SAN’s iSCSI connection. (This will be the address you set in Figure 3). Repeat this process for the partner SAN server.
Configuring a Send Target Server instructs the iSCSI initiator to send a Send Targets request to that server—essentially with the question “What LUNs do you have for me?” The server responds to that request by returning a list of available iSCSI targets that have been configured for the initiator. The initiator’s Static Discovery tab displays the two targets that were sent back for this example.
Figure 9: A configured LUN with four paths
Click Close. The vSphere Client should present a dialog box prompting you to rescan the adapter to complete the configuration change. Choose Yes to rescan the adapter. If you’ve done everything correctly, the Storage Adapters screen should show a single LUN now available to the VMware ESX server, as you see in Figure 9. Notice that four paths are available to the LUN. Those four paths correspond to the two storage NICs on the VMware ESX server, each of which is now connected to the two StarWind SAN servers.
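Incidentally, you can also trigger the rescan from the Service Console if you'd rather script this step; esxcfg-rescan takes the adapter name as its argument:

esxcfg-rescan vmhba33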
Figure 10: Changing the Path Selection for a LUN
You can right-click the LUN and select Manage Paths to perform additional configuration on the paths themselves, as Figure 10 shows. By default, an iSCSI connection will use the Fixed (VMware) path selection. This path selection instructs the host to always use the preferred path to the disk when that path is available, falling back to the other paths only when the preferred path goes down. The Fixed (VMware) path selection, as you can imagine, doesn't perform load balancing across your configured paths. To get load balancing, set the Path Selection to Round Robin (VMware).
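If you need to change the path selection policy on many LUNs, the esxcli nmp commands in the Service Console are faster than the GUI. The device identifier below is made up for this example; run the list command first to find the real naa identifier for your LUN:

esxcli nmp device list
esxcli nmp device setpolicy --device naa.6090a01234567890 --psp VMW_PSP_RR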
Figure 11: Selecting a disk for new storage
Now, you need to add the newly connected storage to the VMware ESX server and create a Datastore. To do so, access the vSphere Client's Configuration tab and click the Storage link. There, click Add Storage and select a Storage Type of Disk/LUN. If you've done everything correctly to this point, the next screen should appear similar to Figure 11, with the connected LUN displayed in the list.
Not all SANs have special configurations over and above what I discuss in this article. However, some do. If you're using StarWind Software's solution, there are two more settings you'll want to add in the vSphere Client, under the Configuration tab's Advanced Settings link (in the Software section). Navigate there and ensure that the following two settings are configured; a command-line sketch follows the list:
Disk.UseDeviceReset = 0
Disk.UseLunReset = 1
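A minimal Service Console sketch for making the same changes, assuming the esxcfg-advcfg tool on your build, looks like this:

esxcfg-advcfg -s 0 /Disk/UseDeviceReset
esxcfg-advcfg -s 1 /Disk/UseLunReset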
Select the disk and complete the steps in the Add Storage wizard to create a new Datastore on the LUN.
Enjoy!
You're now ready to install VMs and enjoy your brand-new VMware ESX server! At this point, you have created a multiply redundant connection that can survive the loss of any network connection, or even a complete SAN server failure. You’ll want to incorporate these same levels of redundancy into your production environment as well.