The New InfiniBand Standard
InfiniBand is a channel-based, switched-fabric technology that offers somewhat faster data throughput, lower latency, new server/storage architectures, better plug-and-play functionality, and additional security and Quality of Service (QoS) capabilities.
May 1, 2000
Two industry standards groups, the Next Generation I/O (NGIO) forum and the Future I/O (FIO) group, led last year's bickering about the next-generation high-bandwidth server I/O standard. Today, these groups have merged into one umbrella organization, the InfiniBand Trade Association. The organization's proposed standard is InfiniBand, and version 0.9 of the specification has just been released to working group members. The InfiniBand Trade Association, whose members include Compaq, Dell, HP, IBM, Intel, Microsoft, and Sun Microsystems (essentially all of the major server vendors) as well as more than 120 other companies, will make the version 1.0 specification publicly available in late June or early July. InfiniBand is likely to play a significant role in the architecture of many vendors' storage systems in the next 3 to 5 years. InfiniBand is the server-side I/O pathway for server-to-server and server-to-storage subsystem communications.
InfiniBand is a channel-based, switched-fabric technology that promises somewhat faster data throughput, lower latency, new server/storage architectures, better plug-and-play functionality (i.e., autorecognition), and additional security and Quality of Service (QoS) capabilities. InfiniBand will be built into the next generation of servers and will require new I/O cards that replace today's shared-bus I/O standards, such as PCI. We should see InfiniBand added to servers that appear in the fourth quarter of 2000 or the first quarter of 2001, about the time that Intel's McKinley chip (IA-64) will appear. With InfiniBand slots appearing alongside PCI slots in servers, ISA slots will probably begin to disappear from most server vendors' offerings.
InfiniBand's channels are called links, and each link carries up to 16 virtual lanes per wire. Adding more cables or wires to a connection adds more links and lanes. Three specifications with different throughputs are in the works: a 1-wire version with 500MB/sec of bidirectional data throughput, a 4-wire version with 2GB/sec, and a 12-wire version with 6GB/sec. When you compare these figures to PCI-X, which runs at 1GB/sec, you can see that this standard doesn't provide a leap in throughput.
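As a rough back-of-the-envelope check, the following sketch tallies the three proposed link widths against PCI-X. The 500MB/sec-per-wire figure is simply inferred from the numbers above, and PCI-X is taken as roughly 1GB/sec; this is illustrative arithmetic, not part of the specification.

```python
# Back-of-the-envelope comparison of the three proposed InfiniBand link
# widths against PCI-X, using the bidirectional figures cited above.
MB_PER_WIRE = 500   # bidirectional MB/sec for the 1-wire link (inferred)
PCI_X_MB = 1000     # PCI-X throughput, roughly 1GB/sec

for wires in (1, 4, 12):
    total = wires * MB_PER_WIRE
    print(f"{wires:>2}-wire link: {total / 1000:.1f}GB/sec "
          f"({total / PCI_X_MB:.1f}x PCI-X)")
```

Run it and the point is clear: only the 4- and 12-wire links pull meaningfully ahead of PCI-X, and the 1-wire version actually trails it.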
Instead, InfiniBand hopes to differentiate itself by providing more reliability, easier connectivity, and better design options. Consider first the fabric nature of the standard. InfiniBand devices connect to an InfiniBand switch, which basically defines a subnet of up to 64,000 devices communicating point to point. So far, only Mellanox and Intel have disclosed that they are working on InfiniBand switches, but other companies are as well. Each switch can connect to other switches and to routers that define other subnets, essentially providing unlimited high-speed point-to-point connections. Subnets can be connected to routers on the Internet to provide high-speed WAN connections.
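To picture how the fabric scales, here's a toy model of switches defining subnets and a router joining them. It's purely illustrative; the class names and the simple capacity check are mine, not part of the InfiniBand addressing scheme.

```python
# Toy model of the fabric described above: each switch defines a subnet of
# up to 64,000 point-to-point devices, and routers join subnets together.
SUBNET_LIMIT = 64_000

class Subnet:
    """A switch-defined subnet of point-to-point devices (simplified)."""
    def __init__(self, name):
        self.name = name
        self.devices = set()

    def attach(self, device):
        if len(self.devices) >= SUBNET_LIMIT:
            raise RuntimeError(f"{self.name} is full; add another subnet via a router")
        self.devices.add(device)

class Router:
    """Joins subnets so the fabric can grow beyond a single subnet."""
    def __init__(self, *subnets):
        self.subnets = subnets

# Example: a server subnet and a storage subnet joined by one router.
servers = Subnet("servers")
storage = Subnet("storage")
for n in range(4):
    servers.attach(f"server-{n}")
storage.attach("disk-array-0")
backbone = Router(servers, storage)
```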
Many server and storage vendors share a vision of making servers a commodity, reducing server footprints, and providing high-speed storage interconnects. Because the major server vendors are likely to adopt InfiniBand, the new standard will probably play a strong role in future storage systems.
IBM is one of the key players in InfiniBand's development. I recently spoke with Tom Bradicich, Director of Netfinity Architecture and Design and co-chair of the InfiniBand development group. (Netfinity is IBM's branding for its Windows 2000/NT server line.) Bradicich notes that InfiniBand's goal is "to make enterprise-standard reliability available in an industry-standard platform." As for cost compared with PCI, Bradicich sees probable "cost parity at the lower standard (1-wire)."
IBM is making a major push to integrate InfiniBand in its next generation of Netfinity servers [http://www.developer.ibm.com/welcome/netfinity/edu/x-arch/da_intro.html], touting InfiniBand's advantages in building high-performance Storage Area Networks (SANs) and its potential as a high-speed interprocessor communications pathway between servers, in what IBM calls "parallel clusters." InfiniBand permeates IBM's thinking. IBM's vision is to move its Netfinity line from monolithic pedestal boxes, such as duals, quads, and 8-ways, to rack-mounted server blades in cabinets that contain InfiniBand I/O backplanes. (IBM currently makes some of the thinnest rack-mounted servers, with form factors ranging from 1U to 8U; each "U" represents 1.75" of thickness in a rack mount.) InfiniBand may enable even smaller blade sizes.
When I asked Bradicich whether InfiniBand would result in new server form factors, he replied that "InfiniBand lets you outboard the server I/O and make server boxes smaller and less dense." The standards committee is working on connectivity lengths in the range of "dozens of meters over fibre or copper." Bradicich envisions a situation in which "you can have an I/O drawer connecting servers in a remote location, with hot-plug capability. This would allow for easier scaling and upgrade."
IBM's storage vision is defined in its "open" Netfinity SAN initiative, and it includes clustered servers that contain alternative paths to centrally managed scalable storage pools. IBM will offer storage consolidation services, as well as disaster protection through remote clustering and disk mirroring. InfiniBand is the high-speed, server-based I/O pathway that's supposed to make all this possible. Expect IBM to make a major server-based storage announcement at PC Expo in June.
You can find more background on standards development issues in some of my previous articles ("I/O: The Next Generation," News, August 13, 1999, ID 7123; and "Standard I/O: Next-Generation Server Bus Standards Emerge," NT News Analysis, Windows NT Magazine, November 1999, ID 7292).
To learn more about InfiniBand and about participating in its development, check out the InfiniBand developers' conference, June 20 through 22 in Newport Beach, California. (See the InfiniBand Web site for conference details.)
So, history buffs, which came first: the server or the disk? Till next time.