To InfiniBand and Beyond: Update from November 2010

Data Center Knowledge

November 11, 2010

The SC10 conference and November edition of the Top500 supercomputer list are just a few days away now. Let's take a look at the interconnect methods used most frequently by the supercomputers on the list.

Gigabit Ethernet vs. InfiniBand
In the June 2010 edition of the Top500 list, Ethernet (GigE and 10 GigE) took a 48.4 percent share, while InfiniBand (Numalink, SDR, DDR, DDR 4x, and QDR Sun M9 / Mellanox / ParTec) took over 41 percent. The argument was made in July of this year that InfiniBand is set to outpace Ethernet. The InfiniBand Trade Association has put together a helpful roadmap explaining the acronym soup surrounding InfiniBand technologies, where they are headed, and what each speed lane is designed to achieve. A paper from Chelsio Communications seeks to debunk what it calls myths about InfiniBand and claims that 10 Gigabit Ethernet is faster. Throw iWARP into the mix, along with further debunking of marketing hype, and you have this infoTECH feature by guest author Gilad Shainer of Mellanox Technologies.

Mellanox, QLogic and Voltaire
Late last month Oracle announced that it was making a strategic investment in Mellanox (MLNX), a supplier of end-to-end connectivity solutions. Last week Mellanox announced a cooperative agreement with the Beijing Computing Center to build a joint cloud computing laboratory. The 40 Gb/s InfiniBand-accelerated Beijing Public Cloud Computing Center will give China's academic and commercial researchers easy access to a high-performance computing cloud for their scientific applications. Mellanox also announced that its ConnectX-2 adapter card with Virtual Protocol Interconnect (VPI) technology is now available on Dell PowerEdge C6100 ultra-dense rack servers. ConnectX-2 VPI adapter cards can connect to either 40 Gb/s InfiniBand or 10 GigE networks and are suited to a variety of business and clustering applications.
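Because VPI lets the same adapter port come up as either InfiniBand or Ethernet, software sometimes needs to check which link layer a port is actually running. The sketch below is a minimal, illustrative way to do that with the standard libibverbs API; it assumes libibverbs and an RDMA-capable adapter are installed and is not Mellanox's own tooling.

```c
/* Minimal sketch: report whether each RDMA port is running as InfiniBand or
 * Ethernet, using libibverbs. Illustrative only.
 * Build (assumed): gcc vpi_ports.c -libverbs -o vpi_ports */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            /* Ports are numbered starting at 1. */
            for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr))
                    continue;
                const char *link =
                    port_attr.link_layer == IBV_LINK_LAYER_ETHERNET
                        ? "Ethernet" : "InfiniBand";
                printf("%s port %u: link layer %s\n",
                       ibv_get_device_name(devices[i]), (unsigned)port, link);
            }
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devices);
    return 0;
}
```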

QLogic (QLGC) announced last week that it has collaborated with Platform Computing to cross-integrate key components of their respective software management suites, making it easier to install, manage and operate high performance computing cluster environments. QLogic's InfiniBand Fabric Suite FastFabric tools can now be installed and executed directly from Platform Cluster Manager, greatly simplifying installation and management of HPC clusters. On Tuesday QLogic announced that its 7300 Series QDR InfiniBand host channel adapters (HCAs) have set new world records for non-coalesced message rate performance at both the host and cluster level. Tests were run using the Ohio State University message bandwidth benchmark.
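Message rate tests of this kind measure how many small, individually posted (non-coalesced) messages a host can push per second, rather than raw bandwidth. Below is a minimal MPI sketch in the spirit of the OSU benchmarks; the message size, window size, and iteration count are illustrative assumptions, not QLogic's actual test parameters.

```c
/* Sketch of a non-coalesced small-message rate test between two MPI ranks.
 * Build (assumed): mpicc -O2 msgrate.c -o msgrate ; run with 2 ranks. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define MSG_SIZE   8        /* bytes per message (small, non-coalesced) */
#define WINDOW     64       /* messages posted per iteration */
#define ITERATIONS 10000

int main(int argc, char **argv)
{
    int rank, size;
    char sbuf[MSG_SIZE], rbuf[WINDOW][MSG_SIZE];
    MPI_Request reqs[WINDOW];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    memset(sbuf, 'a', MSG_SIZE);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int it = 0; it < ITERATIONS; it++) {
        if (rank == 0) {
            /* Sender: post a window of individual small sends. */
            for (int w = 0; w < WINDOW; w++)
                MPI_Isend(sbuf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &reqs[w]);
        } else {
            /* Receiver: post matching receives. */
            for (int w = 0; w < WINDOW; w++)
                MPI_Irecv(rbuf[w], MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &reqs[w]);
        }
        MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0) {
        double msgs = (double)ITERATIONS * WINDOW;
        printf("%.0f messages in %.3f s -> %.2f million messages/s\n",
               msgs, elapsed, msgs / elapsed / 1e6);
    }
    MPI_Finalize();
    return 0;
}
```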

Israel-based Voltaire (VOLT) makes scale-out fabrics for both InfiniBand and Ethernet. Last week Voltaire announced its third quarter 2010 financial results, reporting positive cash flow of $3.6 million for the first time. Ronnie Kenneth, Chairman and CEO of Voltaire, commented: "Time and again, our software proves to be a major differentiator for both our Ethernet and InfiniBand products and enhances our competitive edge. Through effective execution of a well-defined strategy, we are experiencing strong channel development and a growing customer roster."

The screaming SCinet network
A most impressive network is being set up in New Orleans, where the SC10 supercomputing conference will be held next week. Eric Dube has begun documenting the SCinet network installation, which will be capable of delivering 260 gigabits per second of aggregate data bandwidth for conference attendees and exhibitors. The network will include an InfiniBand fabric built from Quad Data Rate (QDR) circuits running at 40, 80, and 120 gigabits per second.
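Those three speeds correspond to the standard InfiniBand link widths (4x, 8x, 12x) at quad data rate. Assuming the usual 10 Gb/s per-lane QDR signaling and 8b/10b encoding, the signaling and effective data rates work out roughly as:

$$
10\,\tfrac{\mathrm{Gb/s}}{\mathrm{lane}} \times \{4,\,8,\,12\}\ \mathrm{lanes} = \{40,\,80,\,120\}\ \mathrm{Gb/s\ signaling}
\;\Rightarrow\;
\tfrac{8}{10}\times\{40,\,80,\,120\} = \{32,\,64,\,96\}\ \mathrm{Gb/s\ of\ data}
$$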

SCinet will also be used by Avetec's DICE (Data Intensive Computing Environment) program to conduct a live Research Sandbox (System Area Network Demonstration) project over a geographically diverse, high-speed (10 Gb/s) wide-area network architecture. The project will demonstrate and test encapsulated and encrypted InfiniBand data movement between high performance computing (HPC) clusters. Each site is connected to the SC10 location by a 10 GbE link and uses Obsidian ES InfiniBand extenders to encapsulate InfiniBand traffic over those links.
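One reason dedicated range extenders are used for wide-area InfiniBand traffic is that the hardware must buffer roughly a full bandwidth-delay product of in-flight data to keep a long link saturated. As a purely illustrative calculation (the actual distances and round-trip times of the DICE sites are not given here), a 10 Gb/s link with a 20 ms round trip would need on the order of:

$$
10\ \mathrm{Gb/s} \times 0.020\ \mathrm{s} = 0.2\ \mathrm{Gb} = 25\ \mathrm{MB\ of\ buffered,\ in\text{-}flight\ data.}
$$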
