InfiniBand's First Coming

What will probably be the industrywide standard for the next server bus architecture has arrived. The InfiniBand standard will affect the design of many network chip sets, routers and switches, storage topologies, and more.

Barrie Sosinsky

January 21, 2001

You might have missed it, but at the October 2000 InfiniBand conference in Las Vegas, what will probably be the industrywide standard for the next server bus architecture moved out of the decimal numbers and was made whole. Version 1.0 has arrived. It might be late and "bloody," but you can now pick up the 1400-page tome that will affect the design of many network chip sets, routers and switches, storage topologies, and much more. You can download the document from the InfiniBand Web site for $20.

You might recall that in 1999, two industry groups, Future I/O and Next Generation I/O (NGIO), proposed competing bus standards. The eventual outcome was a decision to draft a new bus standard based on elements of both proposals. The result is a standard that more than 200 companies support. With most of the industry's important players signed on, it's hard to see how this standard could fail.

We won't see server systems that reflect the InfiniBand standard for about 6 months, not until compliant components are produced in quantity. About 45 companies have committed to manufacturing InfiniBand components. Full-scale volume production of InfiniBand systems is more likely to start in early 2002, and to appear first on low-end systems, or so some developers said in interviews.

InfiniBand replaces the 8-year-old PCI bus standard, which (like the Energizer Bunny) goes on and on and on. Even with extensions to PCI, the aging bus architecture has been running out of headroom and has become the gating factor in server performance and system scalability. (The industry will make commodity PCI boards for perhaps another decade.)
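
To put "running out of headroom" in rough numbers, here's a back-of-the-envelope sketch. The PCI peak rates are textbook figures (bus width times clock), not numbers from this article, and the comparison ignores protocol overhead:

```python
# Peak throughput of a shared parallel bus: width (bits) x clock (MHz) / 8.
# Every device on a PCI bus contends for this one figure; each InfiniBand
# link, by contrast, is a dedicated point-to-point channel into the fabric.

def parallel_bus_mbps(width_bits: int, clock_mhz: float) -> float:
    """Peak throughput of a parallel bus in megabytes per second."""
    return width_bits / 8 * clock_mhz

print(f"PCI 32-bit/33MHz: {parallel_bus_mbps(32, 33):.0f} MBps, shared by all slots")
print(f"PCI 64-bit/66MHz: {parallel_bus_mbps(64, 66):.0f} MBps, shared by all slots")
print("InfiniBand 1X link: 500 MBps bidirectional, per connection")
```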

The InfiniBand standard specifies either copper or fiber-optic connections, with links 1, 4, or 12 lanes wide (the 1X, 4X, and 12X link widths) operating bidirectionally at 500MBps, 2GBps, and 6GBps, respectively. Copper connections can run up to 17 meters (about 56'), and fiber-optic connections up to 100 meters (about 328'). In both cases, the speed and cable-length improvements over current bus standards let you build out-of-the-box solutions.
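
Those per-link figures follow directly from the link widths. Here's a minimal sketch of the arithmetic, assuming the spec's 2.5Gbps-per-lane signaling rate and 8b/10b line coding (details the article doesn't spell out):

```python
# Derive the quoted per-link figures from lane count, signaling rate, and
# line coding. 8b/10b coding carries 8 data bits in every 10 bits on the
# wire, so 2.5Gbps of signaling yields 2Gbps (250MBps) of data per lane,
# per direction.

SIGNALING_GBPS = 2.5        # raw rate per lane, each direction (assumed)
CODING_EFFICIENCY = 8 / 10  # 8b/10b line coding (assumed)

for lanes in (1, 4, 12):    # the 1X, 4X, and 12X link widths
    one_way_mbps = SIGNALING_GBPS * CODING_EFFICIENCY * lanes * 1000 / 8
    print(f"{lanes:>2}X link: {one_way_mbps:,.0f} MBps each way, "
          f"{2 * one_way_mbps:,.0f} MBps bidirectional")
```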

InfiniBand offers IT fast system I/O, many more connection possibilities, and a switched fabric topology that you couldn't build cost-effectively even a couple of years ago. Your captive storage will no longer need to be in or near the server; the bus is fast enough to let you "deconstruct" servers. InfiniBand supports very high-density rack-mounted servers comprising just the server engine: processor, bus, and memory. Everything else can be off-board because InfiniBand offloads bus-traffic processing from the CPU to specialized chip sets in low-cost 256-port Host Channel Adapters (HCAs). A couple of years from now, we'll see many more server-blade systems built.

Server and storage facilities will be essentially decoupled from one another because each connects to the fabric through its own channel adapter, an HCA on the server and a Target Channel Adapter (TCA) on the storage device, which makes system upgrading and planning easier. You'll be able to have up to 64,000 nodes per fabric, and essentially limitless fabrics that can hook together without reducing performance, or so say some of the standard's developers. Creating clusters will also be easier because a single connection to the fabric carries server I/O, storage I/O, and network I/O.
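
To make the decoupling concrete, here's a toy model of servers and storage joining a switched fabric as peers through their channel adapters. The class and attribute names are invented for illustration; this isn't any real InfiniBand API:

```python
# A purely illustrative model of the decoupling described above: compute
# and storage attach to the switched fabric as peers through channel
# adapters, instead of storage hanging off a particular server's bus.

from dataclasses import dataclass, field

MAX_NODES = 64_000  # per-fabric node count cited by the standard's developers

@dataclass
class ServerBlade:
    """Just the server engine: processor, bus, and memory."""
    name: str
    adapter: str = "HCA"  # Host Channel Adapter: offloads I/O from the CPU

@dataclass
class StorageArray:
    name: str
    adapter: str = "TCA"  # Target Channel Adapter: storage joins the fabric directly

@dataclass
class Fabric:
    nodes: list = field(default_factory=list)

    def attach(self, node) -> None:
        if len(self.nodes) >= MAX_NODES:
            raise RuntimeError("fabric is full")
        self.nodes.append(node)

fabric = Fabric()
fabric.attach(ServerBlade("blade-01"))
fabric.attach(StorageArray("raid-07"))
# Either node can now be upgraded or replaced without touching the other:
# server I/O, storage I/O, and network I/O all ride the same fabric link.
```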
