Bringing iSCSI SAN and Virtualization Together
Improve long-standing processes while streamlining your systems
June 25, 2008
Sometimes, it takes a few years for a technology to reach mass acceptance in the enterprise space. And to bring powerful tools within the reach of SMBs, you need to add another year or two of product advances and more aggressive pricing. Two technologies that are now reaching broad acceptance—iSCSI SANs and system virtualization—create an opportunity for forward-thinking IT organizations to improve or completely reinvent some long-standing processes involving system provisioning and data protection. Fortunately, both technologies are now within your reach. So, now is the time to learn the ins and outs of implementing iSCSI SANs and virtualization in your Windows environment, and to understand some of the key synergies between SAN and virtualization technologies so that you can implement them to their full advantage.
Why iSCSI?
In a nutshell, iSCSI is a simple, powerful, and effective storage solution for SMBs—without the price tag or learning curve of a Fibre Channel storage architecture. Because iSCSI arrays are connected through standard Ethernet, you can leverage your existing expertise and investment in that technology and take advantage of reasonably priced gigabit-over-copper Ethernet switching (thanks to a higher level of vendor competition than you'll find among Fibre Channel hardware vendors). As iSCSI vendors target the SMB space, they're developing tools to simplify the setup, configuration, provisioning, and ongoing management processes for their hardware.
iSCSI SANs offer a range of configurations and features that let IT organizations choose appropriately sized and equipped configurations, and most vendors permit relatively seamless expansion through the addition of modular hardware. In addition to traditional RAID configuration support, you can specify redundant, hot-swappable components (e.g., disks, control modules, fans, power supplies) for maximum data availability. Other availability and load-balancing features—such as snapshots, replication, and Microsoft Multipath I/O (MPIO)—are available as standard or upgradeable options from most iSCSI SAN vendors.
Most vendors offer solutions that use internal drives connected via Serial Attached SCSI (SAS), Serial ATA (SATA), or a combination of both technologies, giving IT organizations the latitude to tailor the storage environment to specific performance and reliability needs. By nature, SANs are shared storage, meaning that multiple systems can carve out their own piece of the overall capacity. This strategy yields a better utilization ratio than trying to right-size DAS on individual servers. Furthermore, thin provisioning—a storage-virtualization technique that most vendors use—lets you logically allocate more storage space to a volume without fully committing physical storage resources. As the data on the volume grows and more physical storage is actually needed, it’s automatically allocated. The result is more efficient use of your investment in storage.
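If you want to see the core idea of thin provisioning in action, a sparse file behaves the same way: the file reports its full logical size, but physical blocks are allocated only when data is actually written. Here's a minimal Python sketch of the concept (purely illustrative, not vendor code; the physical-size check assumes a POSIX filesystem that supports sparse files):

```python
import os

LOGICAL_SIZE = 1 * 1024 ** 3  # 1 GB of logical capacity

# Create a "thinly provisioned" volume as a sparse file: the logical
# size is set up front, but physical blocks are allocated only on write.
with open("thin_volume.img", "wb") as vol:
    vol.truncate(LOGICAL_SIZE)      # sparse: nothing written yet
    vol.seek(64 * 1024 ** 2)        # a client writes at the 64 MB mark
    vol.write(b"application data")  # only now is physical space consumed

stat = os.stat("thin_volume.img")
print(f"logical size:  {stat.st_size:,} bytes")
# st_blocks counts 512-byte units and exists only on POSIX systems; NTFS
# requires explicitly marking the file sparse, so treat this as conceptual.
if hasattr(stat, "st_blocks"):
    print(f"physical size: {stat.st_blocks * 512:,} bytes")
```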
Why Virtualize?
Virtualization is all about driving down costs and maximizing the utilization of hardware resources. The insanity of adding a server for a single application is only exacerbated by the faster processors and larger memory and disks that ship in today’s standard servers. Virtualization technologies let you run multiple isolated systems on one piece of hardware. Therefore, not only do you get to actually use the CPU cycles available to you, but you also need to buy fewer servers, resulting in less rack space consumed and less reliance on other datacenter resources such as cooling and power.
Virtualization also provides for more flexible and nimble systems management. Because virtual machines (VMs) aren’t tied to a specific piece of hardware, tasks related to provisioning, deployment, and configuration are much simpler and more quickly performed. Backup, maintenance, and migration operations are also simpler, thanks to the nature of a VM’s self-contained, portable system image and emulated hardware description.
Setting Up the Environment
Now, let's dig into some of the specifics of how to configure these technologies in your environment and see how they can work together. To give this article some hands-on perspective, I built an environment specifically to test some virtualization and disaster-recovery scenarios. For my iSCSI SAN, I used a Dell EqualLogic PS5000X storage array, and I installed both Microsoft Virtual Server 2005 R2 SP1 and VMware ESX Server 3.5 to create a combination of virtual server and client systems.
Installing and configuring the iSCSI array. Installation of the EqualLogic iSCSI array was pretty simple, thanks to the Host Integration Tools provided on an included CD-ROM. If you'll be using Microsoft Storage Manager for SANs (SMfS)—a simple storage-management tool available in Windows Server 2003 R2 and later—you'll want to ensure that your storage vendor provides a Virtual Disk Service (VDS) hardware provider, which is essentially an interface between the storage system and the Microsoft VDS. In my tests, the EqualLogic tools' installation process detected the SMfS installation and automatically installed its VDS hardware provider. I used the vendor-provided tools to initialize the storage array, configure a storage group, and set my server's iSCSI configuration to access the SAN. I used the Web-based SANTest Group Manager tool, which Figure 1 shows, to provision an initial volume and perform basic SAN monitoring and management tasks throughout my usage of the storage system. It took about an hour to get the array configured and ready to manage through SMfS.
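If you'd rather script the initiator side of this configuration than click through a GUI, the Microsoft iSCSI Initiator ships with a command-line tool, iscsicli.exe, that handles discovery and login. The Python sketch below wraps it with subprocess; the portal address and target IQN are placeholders for your environment, and you should verify the iscsicli subcommands against your initiator version:

```python
import subprocess

PORTAL = "192.168.10.50"  # hypothetical group IP address of the array
PORT = "3260"             # default iSCSI port

def iscsicli(*args):
    """Run the Microsoft iSCSI Initiator CLI and return its output."""
    result = subprocess.run(["iscsicli", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Point the initiator at the array's discovery portal, list the targets
# the array exposes, then log in to the volume we provisioned.
iscsicli("AddTargetPortal", PORTAL, PORT)
print(iscsicli("ListTargets"))

TARGET_IQN = "iqn.2001-05.com.equallogic:example-volume"  # placeholder
iscsicli("QLoginTarget", TARGET_IQN)
```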
Systems that will connect to an iSCSI resource need a dedicated NIC or an iSCSI host bus adapter (HBA) specifically for connecting to iSCSI storage. There are a few advantages to using an HBA instead of a standard NIC, including better performance and a simpler boot-from-SAN configuration, but for my testing I used standard gigabit-over-copper NICs from Intel and Broadcom. You also need to give some consideration to the network infrastructure over which your iSCSI traffic will travel. You should employ enterprise-class, nonblocking gigabit-over-copper Ethernet switches. If you don't want to (or can't afford to) maintain a completely separate network environment for your iSCSI devices, you should at least use a virtual LAN (VLAN) for the ports through which iSCSI traffic flows.
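Before troubleshooting at the iSCSI layer, it's worth confirming basic reachability of the array's portal (TCP port 3260) over the NIC you've dedicated to storage traffic. A quick Python check, with placeholder addresses, might look like this:

```python
import socket

ISCSI_PORTAL = ("192.168.10.50", 3260)  # array's portal (placeholder)
INITIATOR_NIC_IP = "192.168.10.21"      # IP bound to the dedicated iSCSI NIC

# Bind explicitly to the dedicated NIC so the test exercises the same
# path iSCSI traffic will use, not whatever the routing table prefers.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind((INITIATOR_NIC_IP, 0))
sock.settimeout(5)
try:
    sock.connect(ISCSI_PORTAL)
    print("iSCSI portal reachable over the dedicated NIC")
except (socket.timeout, OSError) as err:
    print(f"check cabling/VLAN configuration: {err}")
finally:
    sock.close()
```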
Setting up the virtualization platforms. When you’re considering a virtualization tool, you have a few vendor choices, and those vendors typically offer multiple platform choices and management tools. To keep things simple, I’ll stick to Microsoft and VMware’s popular virtualization products. Setting up the virtualization platforms for my tests was relatively easy. I downloaded Virtual Server 2005 R2 (see the Learning Path at InstantDoc ID 99229 for download details) and followed the simple installation instructions to set it up on two Windows Server 2003 R2 systems. After the installation, an information page outlined how to access the Virtual Server Administration Web site. The management interface is intuitive, and the process of creating and provisioning new systems is straightforward.
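Virtual Server 2005 R2 also exposes a COM automation interface, so provisioning can be scripted rather than performed through the Administration Web site. The sketch below uses the pywin32 package; the VM name and path are placeholders, and the exact method signatures should be checked against the Virtual Server 2005 SDK:

```python
# Requires the pywin32 package; run on the Virtual Server host with
# administrative rights. The VM name and path are placeholders, and the
# COM method signatures should be verified against the Virtual Server SDK.
import win32com.client

vs = win32com.client.Dispatch("VirtualServer.Application")

# Enumerate the VMs this host already knows about.
for vm in vs.VirtualMachines:
    print(vm.Name)

# Create and register a new VM in a placeholder configuration folder.
new_vm = vs.CreateVirtualMachine("TestVM01", r"D:\VMs\TestVM01")
print("created:", new_vm.Name)
```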
I downloaded an evaluation version of VMware Infrastructure 3, which contains ESX Server 3.5 (for download details, see the Learning Path) and created an installation CD-ROM from the ISO image. Unlike the Microsoft virtualization software, ESX Server doesn’t run on top of Windows Server. I booted from the installation media and had the ESX Server system running in about 30 minutes. To manage the ESX Server system, I could either install the VMware Infrastructure Client or use the Web-based client.
At this point, both virtualization platforms were ready to create and provision new VMs on local disk volumes. To leverage the iSCSI SAN, I needed to prepare and connect SAN volumes to the Virtual Server and ESX Server systems. For more information about configuring SAN storage on Windows and VMware platforms, see the Web-exclusive sidebars “Configuring SAN Volumes for Windows Virtual Server” (www.windowsitpro.com, InstantDoc ID 99231) and “Configuring SAN Volumes for VMware ESX Server” (InstantDoc ID 99232). And remember that an essential part of any VM strategy is backup. The Web-exclusive sidebar “Backing Up Virtual Systems” (InstantDoc ID 99254) discusses recommended practices.
Working with Virtual Server
I first configured a couple of VMs on one of the server’s local disks and made sure they were completely configured and operational before adding the SAN volumes to the server. Then, I migrated the existing VMs to the new SAN volume, using the method that follows.
First, to make the job of moving any VM easier, I recommend performing a clean shutdown of the VM. You can use the Virtual Server Administration Web site, which Figure 2 shows, to perform clean shutdowns of the systems you want to move. Now, assuming you’re moving all your VMs to another volume, you’ll want to change the MYVIRTUALSYSTEMS environment variable to the new path where your VM files will reside. (See the Microsoft article “The My Virtual Machines folder and virtual machine performance issues” in the Learning Path for further information.) VMs are essentially made up of two files—a VHD file (the virtual hard disk) and a VMC file (an XML description of the VM’s configuration parameters). If you configure multiple drives within your VM, or if you’re using undo disks or differencing disks, more than one VHD file will exist.
When you're moving VMs from one location to another on the same host and you want to keep the VMC and VHD files together, it's easiest to remove the VM from Virtual Server Manager, then re-add it by entering the path to the location to which you copied the VHD and VMC files. This approach applies whenever the drive letter, folder name, filename, or another element of the path changes. After you add the system, you need to configure the new path to the VHD files by choosing the Configure option and selecting your newly moved system. In the configuration window, select the Hard disks item and modify the Fully qualified path to file value to reflect your new VHD location. You might also want to add or remove search paths as appropriate from the Virtual Server Manager's Server Properties menu.
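Expressed as a script, the move boils down to three steps: unregister, copy, re-register. This Python sketch uses the same COM interface mentioned earlier (via pywin32); treat the method names as an approximation to verify against the Virtual Server SDK, and note that the paths are placeholders:

```python
import shutil
import win32com.client  # pywin32; run this on the Virtual Server host

VM_NAME = "TestVM01"          # placeholder VM name
OLD_DIR = r"D:\VMs\TestVM01"  # current local-disk location
NEW_DIR = r"S:\VMs\TestVM01"  # volume mounted from the SAN

vs = win32com.client.Dispatch("VirtualServer.Application")
vm = vs.FindVirtualMachine(VM_NAME)

# 1. Remove the VM from Virtual Server's inventory (files stay on disk).
vs.UnregisterVirtualMachine(vm)

# 2. Copy the .vmc and .vhd files to the SAN-backed volume.
shutil.copytree(OLD_DIR, NEW_DIR)

# 3. Re-add the VM by pointing Virtual Server at the moved .vmc file.
#    The hard disk paths inside the configuration may still need
#    correcting on the Configure page, as described above.
vs.RegisterVirtualMachine(VM_NAME, NEW_DIR + r"\TestVM01.vmc")
```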
Now that you've moved your VM's VHD and VMC files to a volume located on a SAN, you can use a similar process to move VMs to another host, without needing to copy the data. For example, suppose you're replacing an old server that hosts a number of VMs. You can provision the new hardware, install the necessary software (including Virtual Server), and prepare it to connect to the iSCSI SAN. When the new server is ready to go into production, cleanly shut down the VMs, dismount the SAN volume from the old server, and mount it on the new server as discussed in the "Configuring SAN Volumes for Windows Virtual Server" sidebar. With proper planning and preparation, your VMs shouldn't be offline for more than 10 or 15 minutes. You can use similar techniques for disaster recoverability, but more comprehensive approaches are available through third-party backup vendors. Also, Windows Server 2008's new Hyper-V technology promises advanced, centralized VM management capabilities.
Working with ESX Server
As I did with Virtual Server, I started by configuring a couple of VMs on a local disk on the ESX Server system. I performed these steps through the VMware Infrastructure Client, which Figure 3 shows. After configuring some SAN targets and formatting them with Virtual Machine File System (VMFS, as I discuss in the "Configuring SAN Volumes for VMware ESX Server" Web-exclusive sidebar), I manually moved a VM to the new volumes, following the procedure in VMware's Knowledge Base article "Manual Migration Procedure for Moving a Virtual Machine on ESX Server" (see the Learning Path).
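Condensed into a script, the manual migration amounts to stopping and unregistering the VM, cloning its disk onto the SAN-backed VMFS volume with vmkfstools, copying the .vmx file, and re-registering. The sketch below is written in Python for consistency with the other examples and uses placeholder paths; the KB article cited above remains the authoritative procedure:

```python
# Sketch of the manual-migration steps on the ESX 3.5 service console.
# All paths are placeholders for your environment.
import os
import subprocess

VMX_OLD = "/vmfs/volumes/local-storage/TestVM01/TestVM01.vmx"
VMDK_OLD = "/vmfs/volumes/local-storage/TestVM01/TestVM01.vmdk"
SAN_DIR = "/vmfs/volumes/san-vmfs01/TestVM01"

def run(*cmd):
    print(" ".join(cmd))
    subprocess.check_call(cmd)

# Power the VM off cleanly and drop it from the host's inventory.
run("vmware-cmd", VMX_OLD, "stop", "trysoft")
run("vmware-cmd", "-s", "unregister", VMX_OLD)

# Clone the virtual disk onto the SAN-backed VMFS volume; vmkfstools
# performs a VMFS-aware copy that a plain cp does not.
os.makedirs(SAN_DIR)
run("vmkfstools", "-i", VMDK_OLD, SAN_DIR + "/TestVM01.vmdk")
run("cp", VMX_OLD, SAN_DIR + "/TestVM01.vmx")

# Register the relocated VM so it reappears in the VI Client.
run("vmware-cmd", "-s", "register", SAN_DIR + "/TestVM01.vmx")
```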
I found a utility called FastSCP from Veeam Software that simplifies this manual process with a GUI. As in the Windows virtualization scenario, getting the virtual machine files onto a SAN volume gives you more portable and flexible management and recoverability, but to gain the best leverage of an iSCSI SAN in a VMware environment, you need to purchase VMware's VMotion add-on. VMotion lets you migrate an entire VM to a new host without needing to move the associated virtual disk files from their location on shared storage. VMotion automates this entire process and can perform it on hot or cold VMs. Whether or not you can take advantage of VMware's advanced add-on functionality, just getting your VMs onto SAN-based storage gives you the level of data protection afforded by the SAN hardware and the features that your SAN vendor offers.
SAN Data Protection
In addition to the shared-storage and portability advantages that a SAN brings to virtualized environments, the advanced availability and data-protection features that most vendors offer can yield numerous benefits. Vendors take varying approaches to licensing features on their platforms. Some offer a la carte options that you can pay for as you need them, whereas others, such as Dell, sell their products with every feature enabled.
You can use snapshot technology, which quickly creates a copy of a volume’s contents at a specific point in time, for instant or scheduled backups. Because snapshot operations happen quickly and because snapshots can be mounted as separate volumes, they can be useful in testing and migration operations. Some platforms also feature integration with Microsoft’s Volume Shadow Copy Service (VSS) framework, which enables snapshot backups that ultimately offload the backup process from application servers.
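Snapshots can be near-instantaneous because nothing is copied when the snapshot is taken; the array simply preserves original blocks as they're later overwritten. This toy copy-on-write model illustrates the principle (it's conceptual only and doesn't reflect any particular vendor's implementation):

```python
class ThinSnapshotVolume:
    """Toy copy-on-write volume: a snapshot is instant because it
    stores nothing until an original block is overwritten."""

    def __init__(self, blocks):
        self.blocks = blocks  # live data, block number -> bytes
        self.snapshot = None  # preserved originals, if any

    def take_snapshot(self):
        self.snapshot = {}    # O(1): no data is copied here

    def write(self, block_no, data):
        # Preserve the original block the first time it changes
        # after the snapshot was taken.
        if self.snapshot is not None and block_no not in self.snapshot:
            self.snapshot[block_no] = self.blocks.get(block_no)
        self.blocks[block_no] = data

    def read_snapshot(self, block_no):
        # Snapshot view = preserved original, else the unchanged block.
        if self.snapshot and block_no in self.snapshot:
            return self.snapshot[block_no]
        return self.blocks.get(block_no)

vol = ThinSnapshotVolume({0: b"boot", 1: b"data-v1"})
vol.take_snapshot()
vol.write(1, b"data-v2")
print(vol.read_snapshot(1))  # b'data-v1' -- point-in-time view
print(vol.blocks[1])         # b'data-v2' -- live volume
```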
Replication is another technology that offers simplified data protection in a SAN environment. You can use replication to create point-in-time copies of one SAN array or group and move them to another array or group in a physically separate location. Because iSCSI runs on Ethernet, the distance between these replica partners can be virtually unlimited, offering a strong measure of protection against natural disasters or other catastrophes. Depending on the situation, you can make either replica partner the primary storage entity, and you can synchronize any changes once both sites are back online. Some vendors have highly customized variations of this technology that perform real-time striping of data across physical units in geographically separate locations.
Finally, MPIO—which lets a server use more than one read/write path to an iSCSI storage device—provides fault tolerance against single points of failure in switch or NIC hardware or cabling. Multipathing can also provide load balancing of SAN traffic, resulting in performance improvements in high-utilization iSCSI implementations.
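Conceptually, the multipath layer picks a path per I/O from the set of healthy paths and fails over when one goes down. This toy round-robin selector illustrates both behaviors (illustrative only; the real logic lives in the MPIO driver stack and the vendor's device-specific module):

```python
import itertools

class MultiPath:
    """Toy MPIO-style selector: round-robin across healthy paths,
    with automatic failover when a path is marked down."""

    def __init__(self, paths):
        self.health = {p: True for p in paths}
        self._cycle = itertools.cycle(paths)

    def mark_down(self, path):
        self.health[path] = False

    def next_path(self):
        # Skip unhealthy paths; give up after one full rotation.
        for _ in range(len(self.health)):
            path = next(self._cycle)
            if self.health[path]:
                return path
        raise IOError("no healthy paths to the iSCSI target")

paths = MultiPath(["NIC1->switchA", "NIC2->switchB"])
print(paths.next_path())         # NIC1->switchA
print(paths.next_path())         # NIC2->switchB  (load balancing)
paths.mark_down("NIC2->switchB")
print(paths.next_path())         # NIC1->switchA  (failover)
```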
More iSCSI SAN/Virtualization Benefits
SANs and virtual environments complement each other in quite a few ways; in fact, I won’t be able to do them justice in one article. However, two notable capabilities to consider are booting from SAN and iSCSI VM clustering.
Booting from SAN. Booting servers directly from a SAN is an alternative to provisioning physical servers that have a local disk with an OS installed, offering numerous benefits related to reliability, disaster recoverability, simplified backup, and manageability. Booting from an iSCSI SAN is most easily accomplished with a dedicated HBA, but you can find solutions to configure boot from SAN for standard NICs.
Clustering. Virtual Server guest clustering is a technology in which VM nodes communicate with their shared storage via iSCSI to accommodate failover from one VM to another. This relatively low-cost clustering scenario provides high-availability implementations for VMs and offers a better means for applying patches and conducting other hardware or software maintenance.
One-Two Punch for the Future
Of course, not every SMB IT organization has the budget to deploy a new iSCSI SAN and virtualization infrastructure. The key is to recognize the potential of each technology and the advantages of having both. Then, plan your roadmap to get the most out of incremental investments in these technologies—with an eye toward the ultimate goal of full-scale deployment of both.