InfiniBand
The new switched-fabric I/O standard is set to debut this year.
August 1, 2000
PCI's potential replacement is in the works
Last fall, supporters of the Next Generation I/O (NGIO) and the Future I/O (FIO) specifications agreed that a single I/O standard would serve the industry better (and be more profitable) than competing standards. The InfiniBand Trade Association (IBTA) is developing InfiniBand, a switched-fabric I/O standard that in a few years might make PCI as obsolete as ISA. The IBTA plans to release the standard's initial version this year, with InfiniBand-based products appearing in 2001. (For more information about the IBTA, visit http://www.infinibandta.com/.)
InfiniBand is to PC I/O slots what a Gigabit Ethernet switch is to a 10Base-T hub. Bus architectures are inherently limited, suffering from contention, poor fault isolation, and design trade-offs between speed and distance. Whereas PCI-X will let a single-connector bus operate at a maximum speed of 1066MBps, InfiniBand will handle multiple switched connections at speeds as high as 30Gbps full duplex. (For more information about PCI-X, see The Lab Guys, "The Chase Is On," August 2000.) And InfiniBand has more than raw speed going for it.
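To put those figures in perspective, here's a rough back-of-the-envelope comparison in C. The bandwidth numbers come from the paragraph above; the four-device scenario and the even split of the shared bus are illustrative assumptions, not part of either specification.

```c
/* Rough comparison of a shared PCI-X bus and switched InfiniBand links,
 * using the figures cited above. The four-device scenario is illustrative. */
#include <stdio.h>

int main(void)
{
    const double pcix_mbps = 1066.0;                 /* PCI-X bus peak, MBps      */
    const double ib_gbps   = 30.0;                   /* top InfiniBand link, Gbps */
    const double ib_mbps   = ib_gbps * 1000.0 / 8.0; /* one direction, in MBps    */
    const int    devices   = 4;                      /* hypothetical device count */

    /* On a shared bus, every device contends for the same peak bandwidth;
     * on a switched fabric, each link carries its full rate independently. */
    printf("PCI-X share per device: about %.0f MBps\n", pcix_mbps / devices);
    printf("InfiniBand per link:    %.0f MBps each way\n", ib_mbps);
    return 0;
}
```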
To get an idea of what InfiniBand will do, combine the flexibility of Fibre Channel's switched fabric, the convenience of PCI slots, and the device-addressing standard of IP version 6 (IPv6). The result will be one I/O interconnect technology for local device attachments, ranging from plug-in modules (à la PCI) to extended fiber-optic links as long as several kilometers (à la Fibre Channel).
Take a look at the standard's basic specifications. InfiniBand devices will plug into an InfiniBand bay (similar to a PCI card plugging into a PCI slot) or connect by cable to the host computer (a Host Channel Adapter, or HCA, in the server will connect to a Target Channel Adapter, or TCA, in the device). A 4-wire copper-cable InfiniBand link will operate at distances as long as 17 meters at a speed of 2.5Gbps full duplex, for a combined send/receive bandwidth of 5Gbps. The standard also defines higher-speed 16-wire and 48-wire duplex links with one-way transfer rates of 10Gbps and 30Gbps, respectively. InfiniBand will also support fiber-optic cable: a single fiber pair will operate at distances as long as 100 meters at speeds of up to 30Gbps full duplex, and single-mode fiber extenders might support distances of several kilometers.
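The wire counts and speeds above fall out of a simple pattern: each link direction appears to use 1, 4, or 12 lanes signaling at 2.5Gbps, with each lane needing a differential pair in each direction (hence 4, 16, and 48 wires). The short C sketch below works through that arithmetic; the lane breakdown is an interpretation of the figures in this article, not quoted from the specification.

```c
/* A minimal sketch of the link widths described above: 1, 4, or 12 lanes
 * per direction, each signaling at 2.5Gbps, each lane using 4 wires. */
#include <stdio.h>

int main(void)
{
    const double lane_gbps = 2.5;           /* signaling rate per lane, Gbps    */
    const int    lanes[]   = { 1, 4, 12 };  /* 4-, 16-, and 48-wire link widths */

    for (int i = 0; i < 3; i++) {
        double one_way = lanes[i] * lane_gbps;   /* one direction */
        printf("%2d-wire link: %4.1f Gbps each way, %4.1f Gbps send plus receive\n",
               lanes[i] * 4, one_way, one_way * 2.0);
    }
    return 0;
}
```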
The IBTA likes to call InfiniBand an intelligent channel, comparing it to the intelligent channels of the IBM System/390 (S/390) I/O subsystem. Whereas PCI uses a memory-mapped, load/store data model, InfiniBand will use a networked, send/receive model. The HCA and TCA, rather than the CPU, will monitor I/O. Each HCA and TCA will be addressable, and InfiniBand's Global Route Header will include both the source (i.e., HCA) and destination (i.e., TCA) addresses, making InfiniBand I/O traffic inherently routable.
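To make the addressing idea concrete, here's a minimal C sketch of a Global Route Header-style structure carrying IPv6-sized (128-bit) source and destination identifiers. The field names and layout are illustrative assumptions based on the description above, not the specification's actual definitions.

```c
/* Illustrative layout only: an IPv6-style route header as the article
 * describes it, not the published specification's field definitions. */
#include <stdint.h>
#include <stdio.h>

struct ib_global_route_header {
    uint32_t version_class_flow;   /* version, traffic class, flow label      */
    uint16_t payload_length;       /* bytes of payload that follow the header */
    uint8_t  next_header;          /* identifies what follows this header     */
    uint8_t  hop_limit;            /* decremented at each hop through fabric  */
    uint8_t  source_gid[16];       /* 128-bit global ID of the sending HCA    */
    uint8_t  dest_gid[16];         /* 128-bit global ID of the target TCA     */
};

int main(void)
{
    /* Because every packet names its source and destination globally,
     * switches and routers can forward it without CPU involvement. */
    printf("header size: %zu bytes\n", sizeof(struct ib_global_route_header));
    return 0;
}
```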
InfiniBand will let you move all I/O devices off the server. The IBTA envisions that InfiniBand-equipped Internet routers will connect directly to the switched fabric of the local InfiniBand network, collapsing into one fat pipe the electronics that local data I/O and traditional IP traffic need.
InfiniBand implementations will likely debut in high-end server and storage networks, which can best absorb the initial costs. Cascading InfiniBand switches will be the heart of the I/O network. Because InfiniBand will provide one channel through which all server I/O can flow, InfiniBand will simplify server-farm cabling and help shrink rack space requirements to less than 0.5U (0.875") per CPU. Predefined InfiniBand switch zones will automatically give each server access to storage LUNs and other necessary devices, without cabling. As InfiniBand electronics-design expertise increases and chipmakers develop more sophisticated and specialized silicon, prices for InfiniBand-based equipment will drop and eventually make PCI-free computers an economical choice.