Boot Directly from an iSCSI SAN

Booting a Windows server from an iSCSI SAN can provide economical data protection, and you might not even need new hardware to do it. These steps show you how.

Ed Roth

March 18, 2009

In my article about the advantages of using iSCSI SANs as part of your virtualization infrastructure, "Bringing iSCSI SAN and Virtualization Together," I mentioned that it's possible to boot from an iSCSI target, but I only had room to scratch the surface of what's required to do so. In this follow-up, I'll discuss some of the reasons to boot directly from an iSCSI SAN and tell you the hardware, software, and other requirements for booting Windows servers from an iSCSI target.

Why Boot From a SAN?

You're probably familiar with the data protection capabilities that SANs provide for traditional data volumes, including RAID, snapshots, replication, and Microsoft Multipath I/O (MPIO) support. Most SAN vendors also provide robust hardware platforms that include redundant, hot-swappable components to minimize the potential for downtime. You may be thinking you can get these capabilities in a server already, but these features all come with costs, and adding them to many servers would make your expenses skyrocket. In some environments, such as blade computing, booting from SAN LUNs makes for great economies of scale. You invest in making the storage pool highly available and save on server hardware.

SANs also give you a level of portability for bootable volumes that's difficult, if not impossible, to achieve using direct access storage. Consider the effort it would take to transfer a hardware RAID system from one server to another in the event of a server motherboard failure. You can get tremendous efficiency improvements by using a SAN's snapshot technology to replicate a bootable server image created with Sysprep to a number of LUNs, supporting boot-from-iSCSI implementations across multiple servers. Additionally, SANs can support shared-boot scenarios, where multiple systems use a single OS image, but that's a topic for another article.

Of course, you should always consider a balance between the pros and cons of different storage architectures, and you must evaluate and mitigate the risks that come with putting all your eggs in one basket. If you make the move to boot a number of critical servers from a storage pool, it's your duty to ensure that an appropriate level of redundancy is built into the design of that pool and its supporting network infrastructure.

Choose a Solution

As with Fibre Channel SANs, you can use dedicated iSCSI host bus adapters (HBAs) to boot from an iSCSI SAN. This option is viable but relatively expensive. Two other options, iSCSI boot-enabled NICs and software-based iSCSI boot solutions, place more load on the host CPU but deliver comparable performance and simplicity of configuration at lower prices than HBAs, and both are ready for prime time. I'll discuss configuring a hardware iSCSI boot in this article.

Before we dig in, there are a few requirements to discuss, none of which should pose a problem for any organization that deploys and maintains relatively up-to-date technology. For acceptable performance, you need to use Gigabit Ethernet for your iSCSI connections. Your servers should have a PCI Express slot to accommodate the iSCSI boot-capable NIC; some PCI Extended (PCI-X) NICs can be found, but PCI Express is a newer, arguably better standard, and vendors won't be making new PCI-X cards. The system BIOS on your server must support booting from an iSCSI SAN. The final requirement, for both the hardware and software scenarios, is the iSCSI boot version of the Microsoft iSCSI Software Initiator 2.0.4 or later, available as a free download from Microsoft's site.

If these hardware requirements are too high, you can still pursue software-enabled iSCSI boot. In this case, your server can use just about any NIC that supports the Preboot Execution Environment (PXE) 2.x boot standard, and your system BIOS must support PXE booting. For more information about using a software-based iSCSI boot solution, see the sidebar "Software-Based iSCSI Booting."

IBM, Intel, and Broadcom manufacture NICs that support booting from iSCSI LUNs. As mentioned in the requirements, most of these are PCI Express devices, but the technology has also made its way into LAN-on-motherboard implementations from leading server manufacturers. I received a couple of demo NICs, then proceeded to see how easily I could put together a hardware boot from iSCSI installation.

The steps to configure boot from iSCSI are relatively simple, but you must be precise to ensure a reliable and stable implementation. You must configure a LUN on the iSCSI array that will be your boot drive, configure the NIC to boot from it, and either prepare a fresh Windows installation or migrate an existing one to the iSCSI LUN. The majority of time you spend on these tasks will be on OS preparation, but as with traditional OS deployments, you can use imaging technology and Sysprep to simplify subsequent deployments.

Step 1: Provision a SAN Volume for iSCSI Boot

Figure 1: Dell EqualLogic Group Manager applet.

There's nothing extraordinary about provisioning a SAN volume for boot from iSCSI; you just need to allocate adequate space and configure appropriate access via host IP address, Challenge Handshake Authentication Protocol (CHAP), or iSCSI initiator name. Make note of the full target iSCSI Qualified Name (IQN) and, if you use it to limit access, the initiator IQN; you'll need them during boot BIOS configuration because target aliases aren't supported for booting from iSCSI. I used the Dell EqualLogic Group Manager applet, shown in Figure 1, to create a target to serve as my boot volume, and I configured it to restrict access to my initiator IQN. The Dell applet came with my SAN array; your array should include an equivalent tool.

Step 2: Configure Your NICs

iSCSI boot-capable NICs use a firmware BIOS that allows them to be configured to establish a preboot connection to an iSCSI LUN. These NICs typically come from the factory with PXE boot firmware loaded. To enable the iSCSI boot BIOS, you must flash the firmware on the NIC, replacing the PXE boot code with the iSCSI boot version. I accomplished this easily in my tests by simply creating a boot diskette containing the firmware image software. The steps may vary depending on your NIC, so visit your vendor's site for complete instructions.

Step 3: Input iSCSI Parameters

After flashing the firmware, you have two choices. One option is to configure the boot BIOS with the iSCSI initiator and target parameters for your environment. If you have multiple boot-from-iSCSI–enabled NICs or a multiport NIC, you'll need to specify and enable the primary port for booting from iSCSI. For the initiator, you must specify the IP address, subnet, and gateway. For the target, you must enter the IQN, IP address, subnet, gateway, target port, and LUN number.

You can also use DHCP to provide configuration information to the NIC's boot BIOS by setting up a reservation for each host with unique option parameters. (For the specifics of how to set up DHCP for booting from iSCSI, see the Microsoft article "How to install and configure the DHCP service for iSCSI Network Boot configuration in Windows Server 2003.") Whichever route you take, the boot BIOS must get its initiator and target data from one place or the other: DHCP or manual entry.
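As a rough sketch of the DHCP approach, the commands below create a reservation and attach the iSCSI boot parameters to it. They assume the NIC's boot firmware reads its target from DHCP option 17 (Root Path) in the common iscsi:<target IP>:<protocol>:<port>:<LUN>:<target IQN> form; your vendor, or the Microsoft article, may call for different option numbers or formats, and the server name, scope, IP addresses, MAC address, and IQN here are placeholders.

REM Hypothetical example: deliver iSCSI boot parameters via a DHCP reservation.
REM Create a reservation for the booting server's NIC (placeholder MAC address).
netsh dhcp server \\dhcpsrv1 scope 192.168.10.0 add reservedip 192.168.10.51 001122334455AA server1-iscsiboot
REM Set the Root Path (option 17) on that reservation to point at the boot LUN.
REM Protocol 6 is TCP, port 3260 is the iSCSI default, LUN 0 is the boot LUN.
netsh dhcp server \\dhcpsrv1 scope 192.168.10.0 set reservedoptionvalue 192.168.10.51 017 STRING "iscsi:192.168.10.20:6:3260:0:iqn.2001-05.com.equallogic:boot-vol-server1"

Confirm the exact option layout your NIC expects against the vendor documentation before relying on this; some boot firmware uses vendor-specific options instead of the Root Path.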

After completing the previous steps, you should be able to reboot your server and see the NIC's boot BIOS connect to the target you specified. Getting to this point took me 30 or 40 minutes, including time spent manually configuring initiator and target parameters from a command-line utility. I later learned that the NIC's BIOS, accessible via a hotkey during the power-on self-test, includes a form that makes entering this data much easier. I also initially struggled with the boot BIOS being unable to establish TCP connectivity with the storage array. After some investigation, I turned on the "port fast" setting on the switch port to which the server was connected; PortFast lets the port start forwarding immediately instead of waiting out the spanning tree listening and learning states, so the boot BIOS's connection attempt no longer timed out.

Step 4: Configure Your OS

Once connectivity between the NIC and LUN is established, you can configure your OS. You have two choices for getting your OS onto the new iSCSI target: migrate an existing Windows installation or perform a clean OS installation. Migration requires a temporary or permanent local disk on the server: install the OS on the local disk, configure it to boot from iSCSI, prepare it with the Sysprep tool, then image the OS and copy the image to the iSCSI LUN. When performing a fresh install of Windows Server 2003 or Windows Server 2008, press F6 during the first part of Setup (or use Server 2008's Load Driver option) to load the NIC drivers you need to establish a connection to the LUN; the installation will then proceed as if the iSCSI LUN were a local drive.

The OS configuration steps are practically the same whether you're creating a new installation or migrating an old one. Other than where the installation takes place, the primary difference is how the NIC drivers are installed in the OS, so consult your NIC vendor's instructions for the proper driver installation method and order for each scenario. It's important that you configure the NICs to use DHCP within Windows so that their IP address is automatically assigned when the adapter's boot BIOS detects the iSCSI LUN.
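If you take the migration route, Sysprep is what generalizes the temporary installation before you capture it and copy it to the LUN. Here's a minimal sketch of the commands, assuming the default install locations; the exact switches vary by OS version and Sysprep build, so check the documentation for yours.

REM Windows Server 2003: Sysprep ships in deploy.cab on the installation media
REM and is typically extracted to C:\Sysprep before running.
C:\Sysprep\sysprep.exe -mini -reseal -quiet

REM Windows Server 2008: Sysprep is installed with the OS.
%SystemRoot%\system32\sysprep\sysprep.exe /generalize /oobe /shutdown

After Sysprep completes, capture the disk with your imaging tool of choice and apply the image to the iSCSI LUN.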

Figure 2: Selecting the NIC port(s) to enable for iSCSI boot.

There are a few configuration tasks that are critical to successfully booting from an iSCSI target, whether you specify settings in the NIC BIOS or via DHCP. First, you need to install the Microsoft iSCSI Software Initiator with integrated software boot support. This installation isn't difficult, but you need to know which NIC port the boot BIOS will use and whether you'll be using MPIO. During the installation, you must check the Configure iSCSI Network Boot Support option and select the NIC port or ports to enable for boot from iSCSI, as shown in Figure 2. Also within the installation wizard, specify whether you're using MPIO. Although you've just installed the iSCSI initiator, you shouldn't log on to the target LUN; the initiator will communicate with the boot BIOS to establish and maintain the connection to the target. (To see this process in action, see "How to Use the Microsoft iSCSI Initiator Command-Line Interface.")
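If you want to confirm that the boot firmware, not a manual logon, owns the connection, the initiator's command-line interface gives a quick read. A minimal sketch follows; output formats vary by initiator version.

REM List the targets the initiator has discovered; the boot volume's IQN
REM should appear here.
iscsicli ListTargets
REM List active sessions. The session to the boot LUN is established by the
REM NIC's boot BIOS and handed to the initiator, so it should show up here
REM even though you never clicked Log On.
iscsicli SessionList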

After installing the initiator, there are a couple of configuration tasks left to perform. You need to configure the pagefile on the Advanced tab under System Properties. Make sure the pagefile uses a local disk; if the server won't have a local hard drive, you need to turn off the OS's virtual memory capability. If you're running Windows 2003, you should install the hotfix described at support.microsoft.com/?kbid=939875, which corrects a problem with crash dumps on iSCSI boot volumes. Finally, you must add a shutdown script to protect the iSCSI boot sequence configuration from damage related to updates to the network stack. The script needs to run the Iscsibcg utility included with the Microsoft iSCSI Boot Software Initiator with its verify and fix options each time the server shuts down. (For more details about adding the script, see the Microsoft article "How to enable the iSCSI boot sequence on a network adapter after you install the Microsoft iSCSI Boot Software Initiator.")
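Here's a minimal sketch of such a shutdown script, assuming the switches described in the Microsoft article; assign it under Computer Configuration, Windows Settings, Scripts (Startup/Shutdown), Shutdown in Local Group Policy (gpedit.msc), and confirm the executable's name and location on your system before deploying it.

@echo off
REM Shutdown script: verify the iSCSI boot configuration and repair it if a
REM network stack update has disturbed it. Iscsibcg.exe is installed with the
REM Microsoft iSCSI Boot Software Initiator (typically in %SystemRoot%\system32).
iscsibcg.exe /verify /fix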

That's it for OS configuration. Now you're ready to either boot from your iSCSI LUN-based installation or use Sysprep to create an image of your temporary installation and move it to the iSCSI LUN. When you're ready to boot from the iSCSI LUN, modify the boot order or enabled boot devices in your system BIOS; this is where the requirement for system BIOS support for booting from iSCSI comes in. What if you have an older system that was produced before this type of support was mainstream? You can still boot from iSCSI targets with the help of some crafty software tools; see the sidebar "Software-based iSCSI Booting" for more information.

Options Abound

Regardless of which options you use, there are some distinct advantages to using your SAN for boot volumes. Using hardware to boot from iSCSI is easier to support, cleaner, and has fewer "moving parts" than a software solution. Then again, there's no reason you can't use both, if your needs dictate.
