Make the Most of Your SAN with iSCSI

Add your Windows server storage to your Fibre Channel network

Tom Clark

August 29, 2007



Modern data centers typically run their most mission-critical business applications on Fibre Channel SANs. Fibre Channel has a proven track record in enabling fast performance and high availability of application data, as well as established best practices for data backup and disaster recovery. Not all business applications, however, require the bandwidth of 4Gbps Fibre Channel, and large data centers might have hundreds of second-tier standalone rack-mounted servers still using direct-attached storage. Some find it hard to justify the cost of a $1,000 Fibre Channel host bus adapter (HBA) when the server itself cost less than $3,000. On the other hand, standalone servers incur more administrative overhead per server, particularly for backup operations.

Until the advent of iSCSI, there were few options for economically integrating all application, Web-hosting, and file servers into the data center SAN. iSCSI and iSCSI gateways, however, now provide the means to streamline the management and backup of second-tier servers and integrate these servers into the Fibre Channel SAN. This integration extends data center best practices to all server assets and can amortize the substantial investment in a data center SAN over a much larger population of attached devices.

Microsoft offers new iSCSI-enabling software, making it possible to cost-effectively bring Windows servers into the data center. Let's look at the steps required to make this happen and the factors you need to consider. First, a little background on iSCSI.

iSCSI Essentials
Like traditional parallel SCSI, the iSCSI protocol enables reads and writes of data in high-performance block format. However, by serializing SCSI commands, status, and data, iSCSI overcomes the distance limitations of parallel SCSI cabling and simplifies deployment and maintenance. Because iSCSI runs over TCP/IP, it can be transported over conventional Gigabit Ethernet networks and wide-area IP networks. Figure 1 illustrates how conventional SCSI is wrapped in TCP/IP for transport.
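
To make the layering concrete, here's a minimal Python sketch of a SCSI READ(10) command being wrapped in a simplified iSCSI PDU header of the kind that would then be handed to a TCP socket. It's illustrative only, not a working initiator: the login phase, sequence numbering, and digests are omitted, and the field layout is my abbreviation of the SCSI Command PDU defined in RFC 3720.

```python
import struct

def build_scsi_read10(lba: int, blocks: int) -> bytes:
    """Build a 10-byte SCSI READ(10) CDB: opcode 0x28, LBA, transfer length."""
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

def wrap_in_iscsi_pdu(cdb: bytes, task_tag: int, expected_len: int) -> bytes:
    """Wrap a CDB in a simplified 48-byte iSCSI SCSI Command PDU header."""
    bhs = struct.pack(">BBH", 0x01, 0xC0, 0)          # opcode 0x01, Final+Read flags
    bhs += b"\x00\x00\x00\x00"                        # TotalAHSLength + DataSegmentLength
    bhs += struct.pack(">Q", 0)                       # LUN 0
    bhs += struct.pack(">II", task_tag, expected_len) # task tag, expected transfer length
    bhs += struct.pack(">II", 1, 1)                   # CmdSN, ExpStatSN (placeholders)
    bhs += cdb.ljust(16, b"\x00")                     # CDB, zero-padded to 16 bytes
    return bhs

pdu = wrap_in_iscsi_pdu(build_scsi_read10(lba=2048, blocks=8),
                        task_tag=1, expected_len=8 * 512)
print(len(pdu), "byte PDU, ready to travel over any TCP/IP network")
```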

Using economical Gigabit Ethernet interface cards and Gigabit Ethernet switches keeps the iSCSI per-server attachment cost low and works fine in many situations. Some vendors do provide iSCSI HBAs that optimize iSCSI processing via TCP offload engines (TOEs) and onboard iSCSI processing logic. iSCSI HBAs are required for boot-from-SAN applications, and they're suitable for applications that require high bandwidth, but they increase per-server attachment costs. In this article, I assume standard Gigabit Ethernet NICs. With the faster 10 Gigabit Ethernet, you lose most of the cost advantage over Fibre Channel.

For Windows storage management, an iSCSI target appears as just another storage resource that can be assigned a drive letter, formatted, and used for applications and data. Instead of being housed inside the server or connected by parallel cabling, though, the iSCSI storage resource can be anywhere in an IP-routed network. Because iSCSI is a block storage protocol, the latency of long-distance connections over a WAN might have a serious negative effect on performance or cause timeouts. Typically, iSCSI is best deployed within a data center, campus, or metro environment.
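
A quick back-of-the-envelope calculation shows why round-trip time (RTT) matters so much to a block protocol. With one outstanding request, each I/O can't complete faster than one network round trip plus the array's service time, so per-stream IOPS is bounded by roughly 1/RTT. The latencies below are illustrative assumptions, not measurements:

```python
def max_sync_iops(rtt_ms: float, service_ms: float = 0.5) -> float:
    """Upper bound on IOPS for a single outstanding request: each I/O
    waits one network round trip plus the array's service time."""
    return 1000.0 / (rtt_ms + service_ms)

# Assumed round-trip times: LAN vs. metro vs. long-haul WAN
for label, rtt in [("data center", 0.2), ("metro", 2.0), ("WAN", 20.0)]:
    print(f"{label:12s} RTT {rtt:5.1f} ms -> at most {max_sync_iops(rtt):7.0f} IOPS per stream")
```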

Microsoft iSCSI Support
Microsoft's introduction of iSCSI initiator and Internet Storage Name Service (iSNS) software provides an economical means to bring even low-cost Windows servers and workstations into the data center SAN infrastructure. Microsoft iSCSI Software Initiator enables connection of a Windows host to an external iSCSI storage array. Microsoft iSNS Server discovers targets on an iSCSI network.

As of this writing, iSCSI Software Initiator 2.04 is available free on the Microsoft Download Center and requires Windows Server 2003 or later, Windows XP Professional SP1 or later, or Windows 2000 SP3 or later. Download it at http://www.microsoft.com/downloads/details.aspx?familyid=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en. Microsoft iSNS Server code is also available as a free download and requires Windows Server 2003 or Windows 2000 SP4. Download it at http://www.microsoft.com/downloads/details.aspx?familyid=0dbc4af5-9410-4080-a545-f90b45650e20&displaylang=en.

Microsoft has included some attractive features in iSCSI Software Initiator, including multipathing, security, and support for server clustering to iSCSI targets. Multipathing with the Microsoft Multipath I/O (MPIO) driver included in iSCSI Software Initiator provides for higher availability through failover and better performance through load balancing. Secure connections between iSCSI initiators and storage targets are supported with Challenge Handshake Authentication Protocol (CHAP) and IPsec for data-payload encryption. Authentication and encryption might be required when storage data traverses an untrusted network segment. Support for clustering enables iSCSI storage to be used for Microsoft Exchange Server or Microsoft SQL Server clusters. For the configurations discussed below, the Exchange or SQL Server data can be managed centrally and protected on the SAN, while clustering provides high availability of applications to end users.
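
The CHAP exchange itself is easy to illustrate. Per RFC 1994, the initiator proves knowledge of the shared secret by returning an MD5 hash over the challenge identifier, the secret, and the random challenge, so the secret itself never crosses the wire. A minimal sketch, with a hypothetical secret:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 CHAP: MD5 over identifier byte + shared secret + challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"example-chap-secret"   # hypothetical; configured on initiator and target
challenge, ident = os.urandom(16), 1

# The initiator answers the target's challenge; the target recomputes
# the hash with its own copy of the secret and compares.
answer = chap_response(ident, secret, challenge)
assert answer == chap_response(ident, secret, challenge)
print("CHAP response:", answer.hex())
```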

iSNS Server isn't mandatory, but it does simplify iSCSI deployment by enabling automatic discovery of iSCSI target resources. It can be run on a dedicated server or coexist with other server applications. Essentially, iSNS Server combines the capabilities of DNS with conventional discovery services provided by the Simple Name Server (SNS) of Fibre Channel fabrics. In Fibre Channel switches and directors, for example, the SNS contains information about all storage assets in the SAN. As a storage array or tape subsystem is attached to the SAN, it registers with the SNS. When Fibre Channel initiators connect to the fabric, they query the SNS for available storage resources. The resources that are reported to a specific initiator can be filtered by use of zoning and LUN masking. This prevents initiators from accessing unauthorized storage assets (e.g., stopping a Windows server from binding to a UNIX storage array).
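
Conceptually, then, iSNS is a phone book with an access filter: targets register as they come online, and an initiator's query returns only the targets its discovery domain permits it to see, much as zoning filters the SNS in a fabric. Here's a toy Python model of that flow; the IQNs and portal addresses are hypothetical:

```python
# Toy model of iSNS-style discovery: targets register, initiators query,
# and discovery domains filter visibility (analogous to FC zoning).
registry = {}   # target IQN -> portal "ip:port"
domains = {     # domain name -> member IQNs (initiators and targets)
    "tier2-zone": {
        "iqn.2007-08.com.example:storage.array1",
        "iqn.2007-08.com.example:host.web01",
    },
}

def register_target(iqn, portal):
    """A target registers with the name server as it comes online."""
    registry[iqn] = portal

def query_targets(initiator_iqn):
    """Return only targets that share a discovery domain with this initiator."""
    visible = set()
    for members in domains.values():
        if initiator_iqn in members:
            visible |= members & registry.keys()
    return {iqn: registry[iqn] for iqn in visible}

register_target("iqn.2007-08.com.example:storage.array1", "10.0.1.20:3260")
register_target("iqn.2007-08.com.example:storage.array2", "10.0.1.21:3260")

# web01 sees only array1; array2 is outside its discovery domain.
print(query_targets("iqn.2007-08.com.example:host.web01"))
```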

The iSCSI Gateway
An iSCSI gateway provides protocol conversion between iSCSI initiators and Fibre Channel–attached storage targets. An iSCSI gateway effectively proxies for each side, presenting a virtual Fibre Channel initiator to the real Fibre Channel target and a virtual iSCSI target to the real iSCSI initiator, as Figure 2 shows. Consequently, when setting up an iSCSI gateway, you must follow the respective rules of both protocols.
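
You can picture the gateway's dual-proxy role as two lookup tables: one maps each real iSCSI initiator to the virtual Fibre Channel initiator (a WWN) that the gateway presents to the fabric, and the other maps each real Fibre Channel target to the virtual iSCSI target (an IQN) it presents to the hosts. A schematic sketch, with hypothetical identifiers:

```python
# Schematic of a gateway's two proxy mappings (all identifiers hypothetical).
iscsi_to_virtual_fc = {
    # real iSCSI initiator IQN -> virtual FC initiator WWN shown to the fabric
    "iqn.2007-08.com.example:host.web01": "10:00:00:05:1e:aa:bb:01",
}
fc_to_virtual_iscsi = {
    # real FC target WWN -> virtual iSCSI target IQN shown to the hosts
    "50:06:01:60:41:e0:12:34": "iqn.2007-08.com.example:gw.array1-port0",
}

def resolve_proxies(initiator_iqn, virtual_target_iqn):
    """Translate one hop each way: the fabric sees only FC identities,
    and the host sees only iSCSI identities."""
    fc_initiator = iscsi_to_virtual_fc[initiator_iqn]
    fc_target = next(wwn for wwn, iqn in fc_to_virtual_iscsi.items()
                     if iqn == virtual_target_iqn)
    return fc_initiator, fc_target   # identity pair for the FC-side exchange

print(resolve_proxies("iqn.2007-08.com.example:host.web01",
                      "iqn.2007-08.com.example:gw.array1-port0"))
```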

Because Fibre Channel connections today are typically 2Gbps or 4Gbps and iSCSI is typically 1Gbps, you can aggregate more iSCSI servers per Fibre Channel storage port on an iSCSI gateway than you can Fibre Channel servers. In conventional business application environments running at 1Gbps end to end, a typical ratio of servers to storage ports (known as the fan-in ratio) might be 7:1. An iSCSI gateway that provides 1Gbps port connections for iSCSI initiators and 4Gbps connections for storage ports can enable a much higher fan-in ratio of 18:1 or greater. For iSCSI initiators, you implement the higher fan-in ratio by attaching multiple iSCSI servers to a Gigabit Ethernet switch, which in turn provides a 1Gbps connection to the iSCSI gateway for every fan-in group. An iSCSI gateway that offers four 1Gbps Ethernet ports and several 4Gbps Fibre Channel ports can support 70 or more iSCSI initiators concurrently.

The other factor to consider when scoping fan-in ratios is the maximum number of concurrent iSCSI sessions per gateway port that the storage vendor has certified. An iSCSI gateway might support up to 50 iSCSI sessions per Gigabit Ethernet port, whereas the storage vendor might certify only a more conservative 20 sessions per port. Each storage vendor does its own certification and testing of iSCSI gateway products and sets its own supported limit for each.
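
Both constraints reduce to taking a minimum: the fan-in that link bandwidth allows, capped by the session count the storage vendor has certified. A rough sizing helper, using the example figures above as assumptions rather than any vendor's specifications:

```python
def supported_initiators(gbe_ports, fc_ports, fc_gbps_per_port,
                         avg_server_load_gbps, sessions_per_gbe_port):
    """Initiators a gateway can carry: capped by aggregate FC bandwidth
    and by the vendor-certified session count on each GbE port."""
    by_fc_bandwidth = int(fc_ports * fc_gbps_per_port / avg_server_load_gbps)
    by_sessions = gbe_ports * sessions_per_gbe_port
    return min(by_fc_bandwidth, by_sessions)

# Four GbE ports, four 4Gbps FC ports, ~0.2Gbps average load per server,
# and a conservative 20-sessions-per-port certification: about 80 initiators,
# in line with the "70 or more" figure above.
print(supported_initiators(gbe_ports=4, fc_ports=4, fc_gbps_per_port=4.0,
                           avg_server_load_gbps=0.2, sessions_per_gbe_port=20))
```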

Bringing iSCSI Servers into the SAN
As you plan for integrating iSCSI-attached Windows servers into your SAN, identify the collective storage capacity required for all the newly attached iSCSI servers, the average storage traffic generated by the second-tier applications running on the servers, and the initial fan-in ratio that best suits the aggregate traffic load to help size both SAN and iSCSI gateway requirements. It might be fairly easy to identify the amount of storage capacity each second-tier server needs, but it's usually more difficult to identify storage traffic patterns and loads, particularly for "bursty" applications. It's best, then, to start with a fairly conservative fan-in ratio (e.g., 7:1 or lower) and gradually increase the number of iSCSI servers per iSCSI gateway port until you reach the optimum fan-in for your situation.
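
If per-server traffic is hard to characterize up front, you can at least bound the starting point numerically. This sketch begins at the conservative 7:1 ratio and raises fan-in only while estimated peak traffic (average load times a burst factor) still fits the storage port; every input is an assumption you'd replace with measured values:

```python
def tune_fan_in(port_gbps=4.0, avg_load_gbps=0.15, burst_factor=3.0,
                start=7, ceiling=20):
    """Grow fan-in from a conservative start while estimated peak traffic
    (servers x average load x burst factor) still fits the storage port."""
    fan_in = start
    while fan_in < ceiling and (fan_in + 1) * avg_load_gbps * burst_factor <= port_gbps:
        fan_in += 1
    return fan_in

# With bursty second-tier apps (3x peaks over a 0.15Gbps average),
# a 4Gbps storage port supports a fan-in of 8 under these assumptions.
print(tune_fan_in())
```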

Deploying second-tier iSCSI servers into an existing Fibre Channel SAN requires three basic steps: configuring the existing Fibre Channel storage array for additional hosts, setting up the iSCSI gateway for both virtual Fibre Channel initiator and virtual iSCSI target connections, and installing the Microsoft iSCSI initiator and iSNS (if desired) software for host connection. No one step is particularly difficult, but the process might require collaboration between server administrators and SAN administrators if those functions aren't combined in your environment.

Step 1: Configuring SAN storage for new iSCSI hosts. Because you're using an iSCSI gateway to integrate additional servers, no special process is required to configure additional storage capacity. From the SAN administrator's standpoint, the new LUNs are being configured for traditional Fibre Channel initiators, which in fact have a virtual existence within the iSCSI gateway. Consequently, you create additional LUNs with the desired capacity as usual by using the storage vendor's configuration utility, and the appropriate number of new storage ports (determined by the fan-in ratio) are connected to the SAN fabric.

Although an iSCSI gateway platform might allow direct connection between the gateway and SAN storage, data center administrators might prefer to drive all storage connections through Fibre Channel directors or switches. In this case, you connect both storage ports and iSCSI gateway Fibre Channel ports to the fabric and configure zoning or LUN masking at the fabric level. Each new storage port is represented by a unique World Wide Name (WWN), which you use to configure zoning and connectivity to the iSCSI gateway.
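
A WWN is a 64-bit identifier conventionally written as eight colon-separated hex pairs (e.g., 10:00:00:05:1e:aa:bb:01), and zoning problems often trace back to a mistyped one. A small hypothetical helper for normalizing WWNs before you enter them in zoning configurations:

```python
import re

WWN_RE = re.compile(r"^([0-9a-f]{2}:){7}[0-9a-f]{2}$")

def normalize_wwn(raw: str) -> str:
    """Accept a WWN with or without separators; return the canonical
    colon-separated lowercase form, or raise if it isn't 16 hex digits."""
    digits = re.sub(r"[:\-\s]", "", raw).lower()
    if len(digits) != 16 or not all(c in "0123456789abcdef" for c in digits):
        raise ValueError(f"not a valid WWN: {raw!r}")
    wwn = ":".join(digits[i:i + 2] for i in range(0, 16, 2))
    assert WWN_RE.match(wwn)
    return wwn

print(normalize_wwn("10:00:00:05:1E:AA:BB:01"))   # canonical form
print(normalize_wwn("100000051eaabb01"))          # same WWN, no separators
```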

Every storage vendor provides its own management utility for creating LUNs from the total capacity of the storage array. Typically, these utilities are GUI-based and fairly simple to configure. Likewise, individual fabric switch vendors provide utilities for configuring switch ports, zone groups, and LUN masking. It's important to remember that although you're configuring SAN resources to connect iSCSI initiators, the storage arrays and fabric see only Fibre Channel initiators proxied by the iSCSI gateway.

Step 2: Setting up the iSCSI gateway. The iSCSI gateway configuration has two basic components. You configure and bind the iSCSI initiators to their respective virtual iSCSI targets. And, likewise, you configure and bind the real Fibre Channel targets to their respective virtual Fibre Channel initiators. Typically, the configuration utility provided by the iSCSI gateway vendor streamlines this dual process so that when you configure an iSCSI initiator, the proxy Fibre Channel initiator is created automatically.

You register iSCSI initiators by iSCSI identifiers and register SAN resources by WWNs and Fibre Channel IDs (FCIDs) on the iSCSI gateway. You must determine these respective identifiers in advance to properly configure the iSCSI gateway. In Figure 3, the configuration utility for an iSCSI gateway (in this example, a Brocade M2640) shows an iSCSI initiator defined by iSCSI identifier and alias, IP address, and proxied WWNs.
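
iSCSI identifiers usually take the IQN form defined in RFC 3720: the literal prefix iqn., the year and month the naming authority registered its domain, the domain name reversed, and an optional colon-delimited suffix (the Microsoft initiator defaults to iqn.1991-05.com.microsoft:hostname). Here's a quick shape check you can run before typing identifiers into the gateway; the regex is a simplification of the full rules:

```python
import re

# Simplified IQN shape: iqn.YYYY-MM.reversed.domain[:suffix]
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9]+(\.[a-z0-9-]+)+(:\S+)?$")

def looks_like_iqn(name: str) -> bool:
    """Cheap sanity check that a string is IQN-shaped (not a full validator)."""
    return bool(IQN_RE.match(name))

print(looks_like_iqn("iqn.1991-05.com.microsoft:webserver01"))  # True
print(looks_like_iqn("webserver01"))                            # False
```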

The iSCSI gateway might include additional utilities for implementing CHAP or IPsec for security. As with general address information, you should determine any CHAP parameters or IPsec addressing in advance to simplify gateway installation.

Because each iSCSI gateway vendor provides its own unique utility for configuring iSCSI hosts and SAN targets, I can't provide a step-by-step example for gateway configuration. The common requirements, though, are to configure iSCSI initiator properties, configure proxied targets, and define LUN masking parameters for the target volumes.

Step 3: Configure the iSCSI hosts. Along with its free iSCSI Software Initiator, Microsoft provides detailed installation instructions in a downloadable users' guide. Once you've installed the software on a Windows server, the basic steps are to assign an iSCSI initiator node name for the server, configure any desired security features, discover (via iSNS) or define targets available for the server, and bind the iSCSI host to the appropriate targets.

After you've set the initiator parameters on the General tab of the iSCSI Initiator Properties dialog box, use the Discovery tab to either discover through iSNS or manually enter the IP address of intended targets. If you install iSNS on a LAN-attached server, it will periodically check for the existence of any additional iSCSI targets. In this example, those targets are represented by the iSCSI gateway. Alternatively, click Add in the Target Portals area of the Discovery tab to manually identify targets.

After you've defined targets, use the Targets tab to select and log on to the proxied iSCSI targets. As Figure 4 shows, the logon window also enables you to select whether a target is persistent and whether multipathing is used for this connection. Click Advanced in the logon window to configure cyclical redundancy check (CRC), CHAP, and IPsec settings for this connection.
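
You can also script the same add-portal, list, and log-on sequence that the GUI walks through, because the Microsoft initiator ships with a command-line tool, iscsicli. Here's a sketch that drives it from Python on a Windows host with the initiator installed; the portal address and target name are hypothetical, and the authoritative iscsicli syntax is in the initiator users' guide:

```python
import subprocess

def run(cmd):
    """Echo and run one iscsicli command, stopping on any error."""
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

portal = "10.0.1.10"                                 # hypothetical gateway GbE address
target = "iqn.2007-08.com.example:gw.array1-port0"   # hypothetical proxied target

run(["iscsicli", "QAddTargetPortal", portal])  # like Add on the Discovery tab
run(["iscsicli", "ListTargets"])               # like the Targets tab listing
run(["iscsicli", "QLoginTarget", target])      # like clicking Log On
```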

Once the logon session between the iSCSI initiator and proxied iSCSI target is active, you can configure the iSCSI storage volume via the Windows Disk Management utility, assign it a drive letter, and format it for use.

A Dedicated IP SAN
Compared with a messaging LAN (i.e., a LAN that carries application traffic as opposed to storage traffic), a Fibre Channel SAN is inherently a separate network, with its own cabling scheme, protocols, and fabric infrastructure. In a properly designed Fibre Channel SAN, congestion should be minimal, and high availability is enhanced through redundant pathing between initiators and targets.

One of the more marketed aspects of iSCSI is that it can be run over common LAN infrastructures by using relatively cheap Gigabit Ethernet switches. This means that storage and messaging traffic coexist on the same LAN. Certainly there are no significant technical barriers to prevent this. However, Microsoft and, in particular, storage vendors typically advise against combining storage and messaging traffic on the same network. Messaging traffic can withstand wide fluctuations in latency, congestion, and packet loss and recovery; storage traffic can't. Consequently, the Ethernet network between the iSCSI gateway and the complex of iSCSI initiators it serves should be a dedicated IP SAN, as Figure 5 shows.

Designing a dedicated IP SAN from the start lets you take advantage of low-cost per-server connections and commodity Gigabit Ethernet switches, and it allows you to scale the IP SAN over time to accommodate additional servers without significantly impacting (or being impacted by) the corporate LAN.

iSCSI is now a mature storage technology and is being deployed for small departmental operations as well as data center applications. Today, Fibre Channel is still the transport of choice for many data centers with high bandwidth and high availability requirements. Combining iSCSI and Fibre Channel SAN technologies helps administrators bring all server assets into a common storage infrastructure and apply best practices to the handling of all corporate data.
