Comparing Virtual Interface Architecture Network Adapters

Lab tests show that Virtual Interface (VI) Architecture network adapters cLAN and ServerNet II can boost SQL Server processing performance.

John Green

November 26, 2000


New network adapters boost SQL Server 2000 performance

Does the performance of your company's SQL Server application mean the difference between a pink slip and a promotion for you? You don't have to spend much time with applications that rely on database systems to realize how important database server performance is to overall application performance. But buying bigger, faster database servers can dramatically increase the application's operational costs. Cluster configurations to support high application availability cost even more.

Using IP-based load balancing is an easy, scalable solution to meet increasing demands on Web servers. But the traditional solutions for increasing SQL Server capacity aren't so neat. Scaling up your SQL Server—using a bigger server with more and faster processors—is an expensive proposition. Microsoft's Windows 2000 Datacenter Server will be a solution for DBAs who choose the relative simplicity of scaling up their SQL Server systems. Scaling out a SQL Server infrastructure—adding more SQL Server systems that work in parallel—is a more complicated strategy, but scaling out is sometimes the only solution when a bigger server doesn't fit your budget or when the biggest server available still doesn't meet your needs.

To handle increasing Web traffic, Web-based businesses have been forced to design applications that spread their processing loads among multiple parallel SQL Server machines. The greater application complexity of such a solution translates directly into greater application costs. Scaling out by using federated databases on linked servers isn't simple either; such a step requires careful planning and vigilant, ongoing database administration.

SQL Server 2000 Enterprise Edition's native support for Virtual Interface (VI) Architecture network adapters might be the tonic you need to boost your database application's performance. VI Architecture—an industry standard developed by a group of vendors including Compaq, Intel, and Microsoft—implements a thinner, more efficient protocol stack. This design translates into fewer interrupts and fewer CPU cycles expended for network I/O, and SQL Server 2000 can put the freed CPU cycles to work satisfying additional application data requests. (See the sidebar "Virtual Interface Architecture" for a more detailed description of VI.)

In the SQL Server Magazine and Windows 2000 Magazine Lab, I compared the throughput capacity of a SQL Server 2000 computer using SysKonnect's 1000Base-SX Gigabit Ethernet with the same server using two implementations of the VI Architecture specification that SQL Server 2000 Enterprise Edition supports—Giganet's cLAN and Compaq's ServerNet II. My test application was an Internet Server API (ISAPI) implementation of Doculabs' @Bench e-commerce benchmark specification. The test goal was to drive the SQL Server system to nearly 100 percent utilization, observe peak application throughput (as measured in transactions per second), and record the throughput difference between the Gigabit Ethernet and VI Architecture network implementations.

To establish a baseline of application throughput with Gigabit Ethernet, I used SysKonnect fiber-optic Gigabit Ethernet cards and an NPI Cornerstone12g Gigabit Ethernet switch. Then I tested the two VI Architecture-based NICs that SQL Server 2000 supports (cLAN and ServerNet II). After switching from Gigabit Ethernet to a VI Architecture card, I measured an average overall throughput gain of 37.5 percent for the test application workload—a significant improvement by any standard. Graph 1 shows a summary of the test results. Table 1 shows a pricing summary for the products I tested.

The overall performance improvement from switching to a VI Architecture NIC will vary depending on your application's specific characteristics. The benefits of using VI Architecture come from saving CPU cycles for network I/O. So, applications that require SQL Server to work hard to generate a small result set will yield a smaller performance improvement than applications that use simple table lookups to generate a relatively larger result set.
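
To make the distinction concrete, here are two hypothetical query shapes (illustrative only, not drawn from the @Bench specification). The first makes SQL Server scan and aggregate but returns a single row, so there is little network work for a VI adapter to offload; the second is a cheap indexed lookup that returns a comparatively large result set, so more of the server's effort is network I/O that VI Architecture handles with fewer CPU cycles.

```python
# Illustrative only: two hypothetical query shapes, not from the @Bench spec.
# The schema (orders, products) is invented for this example.

# CPU-heavy, small result: lots of server work, one row over the wire.
CPU_HEAVY_SMALL_RESULT = """
    SELECT COUNT(*) AS order_count, AVG(order_total) AS avg_order
    FROM orders
    WHERE order_date >= '2000-01-01'
"""

# Cheap lookup, larger result: an index seek that returns many rows,
# shifting more of the cost to network I/O that a VI adapter offloads.
CHEAP_LOOKUP_LARGE_RESULT = """
    SELECT product_id, product_name, unit_price, description
    FROM products
    WHERE category_id = 7
"""
```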

I encountered several problems while testing the VI Architecture-based products, including application failures that were probably caused by problems in the VI Architecture NIC driver or OS protocol stack. In the Giganet cLAN test, an event log message reported an error at the cLAN switch, and I was unable to run the test successfully with more than 80 virtual users. In the ServerNet II test, the Web application caused Microsoft IIS to hang under a particular Network Load Balancing (NLB) configuration. Tests with both VI transports also returned Web page errors with no message specifying the cause. However, the Gigabit Ethernet tests ran well, without the errors I saw when I tested the VI products. (Although I expect that the vendors will successfully resolve the problems I encountered in my testing, I suggest that you stress-test your application and server configuration before you put a VI Architecture-based system in production.)

Testing Overview


Because the test goal was to determine the effect on SQL Server throughput capacity of various SQL Server-to-Web server network connectivity options, I needed a test application that would heavily use SQL Server services across a network connection. I chose Microsoft's high-performance Visual C++ (VC++) and COM+/ISAPI implementation of the Doculabs @Bench e-commerce benchmark specification. Quest Software's Benchmark Factory generated the load. The application ran on four load-balanced Web servers supported by one SQL Server machine; 47 computers running the Benchmark Factory Agent simulated application users.

I configured the benchmark tests to drive the SQL Server 2000 system's CPU utilization to nearly 100 percent of capacity and measured application throughput (transactions per second—tps) and transaction response time. I installed each network configuration in turn, and I ran a workload that varied the number of virtual users from 10 to 150 in increments of 10.
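
The ramp logic itself is straightforward. The sketch below is my own illustration in Python, not Benchmark Factory; the transaction is a placeholder standing in for a real @Bench request, and the 60-second step length is an assumption. For each load level from 10 to 150 virtual users, it runs the workload for a fixed interval and reports throughput and average response time.

```python
# A minimal sketch of the stepped-load pattern (hypothetical, not Benchmark
# Factory): ramp virtual users from 10 to 150 in steps of 10, run each step
# for a fixed interval, and record throughput and average response time.
import statistics
import threading
import time

STEP_SECONDS = 60          # how long each load level runs (assumption)

def run_transaction():
    """Placeholder for one @Bench-style transaction against the Web tier."""
    time.sleep(0.05)       # simulate request/response latency

def virtual_user(stop_event, latencies, lock):
    # Each virtual user issues transactions back to back until told to stop.
    while not stop_event.is_set():
        start = time.perf_counter()
        run_transaction()
        elapsed = time.perf_counter() - start
        with lock:
            latencies.append(elapsed)

def run_step(user_count):
    stop_event = threading.Event()
    latencies, lock = [], threading.Lock()
    users = [threading.Thread(target=virtual_user,
                              args=(stop_event, latencies, lock))
             for _ in range(user_count)]
    for user in users:
        user.start()
    time.sleep(STEP_SECONDS)
    stop_event.set()
    for user in users:
        user.join()
    tps = len(latencies) / STEP_SECONDS
    avg_ms = statistics.mean(latencies) * 1000 if latencies else 0.0
    print(f"{user_count:4d} users: {tps:7.1f} tps, {avg_ms:6.1f} ms avg response")

if __name__ == "__main__":
    for users in range(10, 151, 10):   # 10 to 150 virtual users, step 10
        run_step(users)
```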

In the Giganet cLAN product tests, I was unable to complete the tests successfully at user loads of more than 80. At user loads of 90 through 120, where test results hovered near maximum throughput, some transactions took more than a minute to complete. In my tests of other network configurations, no transaction took longer than 30 seconds to complete. I uncovered clues to the problem in the System Event Logs of two Web servers. A message classified as informational—which usually means that the reported event has no negative impact on system operations—reported that the Web-server-to-cLAN switch link failed and that the link reinitialized. When I reported these problems to Giganet, the company sent me a replacement switch. But retesting with the new switch didn't resolve the problem.

The successful test runs clearly showed that the transaction rates I observed at the highest load levels were at or near peak throughput, as measured by transactions per second. Benchmark Factory performance monitoring results showed that SQL Server's CPU utilization was near 100 percent during these successful test runs. The maximum throughput I've reported here for cLAN is an average of the top 4 scores for test iterations that ran normally. Giganet representatives told me that they suspect that a known problem, which the company is trying to solve, caused these errors.

During the ServerNet II test, when the application was running in the medium (pooled) IIS process isolation mode, the application terminated unexpectedly and reinitialized. Microsoft's IIS technical support group found that one of the Compaq VI support modules, vipl.dll, was always involved in the problem. I discovered that when I altered the NLB configuration from Affinity of None to Single Affinity, the test ran normally. (The Single Affinity setting causes one member of the NLB cluster to handle all work originating from a particular IP address rather than rotating work among cluster members, as with the Affinity of None setting.) At press time, Compaq was investigating the problem.

My testing with Gigabit Ethernet using SysKonnect fiber-optic cards and NPI's Cornerstone12g switch was flawless. "Testing VI Architecture," which you can find by entering InstantDoc ID 15913 at http://www.sqlmag.com, provides a detailed description of the test procedures, including a diagram of the test network configuration.

Giganet cLAN


cLAN implements a 1.25-gigabit, full-duplex connection through shielded copper cables up to 30 meters long. I used the cLAN1000 host adapters, which are 32/64-bit, 33MHz PCI adapters for 3.3-volt or 5-volt slots. My test network required only one 8-port cLAN5000 switch to connect the four Web servers and the SQL Server machine. A 30-port cLAN5300 switch is also available. You can cascade switches for larger networks. Giganet supports fault-tolerant configurations that use redundant switches and redundant adapters in each server. Giganet also supplies management console software for network monitoring.

Giganet's implementation of the VI Architecture specification is somewhat different from that of other VI product suppliers. Because Giganet implements the functions of the VI Architecture specification's Reliable Reception component (the most reliable communications mode defined by the standard) on the cLAN card as a fundamental part of cLAN's communications protocol, the cLAN1000 doesn't support the lesser modes—Unreliable Delivery and Reliable Delivery. In addition, Giganet doesn't implement Remote Direct Memory Access (RDMA) Read in cLAN. Instead, cLAN initiates RDMA Write from the other end of the connection.

Installing Giganet's cLAN card and drivers on Win2K for SQL Server 2000 requires several simple steps. After you insert the cLAN card in a PCI slot, cable the card to a Giganet cLAN switch, and reboot the server, Win2K recognizes the new card's presence and presents its Install new hardware wizard. The wizard gives you two choices for configuring drivers: Search for a suitable driver for my device or Display a list of known drivers for this device. I selected the first option. The next screen displays four check boxes, which let you tell the wizard where to search for the driver. I selected the Specify a location check box and browsed to the driver directory. The wizard then displayed the cLAN Host Adapter option, and the wizard finished installing the driver.

The installation process installs the driver as a network interface. You can configure the driver as you would any other LAN connection by right-clicking the My Network Places icon and opening the Properties page. With my configuration, the available properties for the cLAN Host Adapter included a cLAN VI Architecture driver option in addition to the standard options: Client for Microsoft Networks, File and Printer Sharing for Microsoft Networks, Internet Protocol (TCP/IP), and (on Win2K Advanced Server and above) NLB. To provide consistency among the tests, I retained the default setup with all options enabled except NLB.

Giganet provides two methods for testing the cLAN hardware—a self-test for the card and a network connectivity test. I ran the self-test by clicking Configure in the cLAN Host Adapter's Properties dialog box, choosing the Advanced tab, and clicking Diagnostics for the driver. You run the network connectivity test by using a command-line utility program called tcpperf.exe, which I found on the driver diskette. The tcpperf.exe program has several options that let you test either VI Architecture or TCP/IP protocols. After installing the cLAN cards on all five servers, I tested connectivity with each card on the cLAN by running tcpperf.exe on the Compaq ProLiant DL580 SQL Server machine, using options to place that machine's cLAN1000 card in receive mode. In response, tcpperf.exe displayed the media access control (MAC) address of the cLAN card in the SQL Server machine. From each of the Web servers, I ran tcpperf.exe in send mode, specifying the MAC address of the cLAN card in the SQL Server machine. This test verified that the cLAN network worked and that the four Web servers could communicate with the SQL Server machine.

Next, you need to configure SQL Server 2000 and the SQL Server 2000 client on each of the servers to enable the Giganet VI Architecture Network Library (Net-Library). On the SQL Server 2000 system, I used the SQL Server 2000 Server Network Utility to enable support for the VI Architecture protocol and to tell the SQL Server machine to use its cLAN support. For all the server and client systems, I used the Client Network Utility to enable the VI Architecture protocol, configure the protocol to use the Giganet cLAN driver, and establish the driver as the highest priority driver for SQL Server by moving it to the top of the protocol list. To ensure that the Web servers would use the cLAN network to communicate with the SQL Server machine, I used the Client Network Utility to create a SQL Server alias configured to use the VI Architecture Net-Library on each of the Web servers. I used this alias when I created an ODBC data source on each Web server that told the Web application how to connect to the SQL Server machine.
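
From the application's point of view, nothing about the data-access code changes; the alias and the ODBC data source do all the steering. The following sketch illustrates the idea in Python with the pyodbc module rather than the VC++/ISAPI code the tests used; the data source name AtBench and its credentials are hypothetical. Because the DSN references the alias created with the Client Network Utility, the connection rides the VI Architecture Net-Library without any change to the query code.

```python
# Minimal sketch (assumptions: the pyodbc module, a DSN named "AtBench" that
# points at the SQL Server alias created with the Client Network Utility, and
# hypothetical credentials).  The original test application was VC++/ISAPI.
import pyodbc

conn = pyodbc.connect("DSN=AtBench;UID=bench;PWD=bench")  # hypothetical DSN/login
cursor = conn.cursor()
cursor.execute("SELECT @@SERVERNAME")   # any query now travels over the alias
print(cursor.fetchone()[0])
conn.close()
```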

Compaq ServerNet II


ServerNet II implements a 1.25-gigabit connection across industry-standard 1000Base-CX copper cables as long as 25 meters. The Compaq ServerNet II PCI Adapter is a 64-bit, 66MHz card that's also capable of operating at 33MHz in 64-bit or 32-bit slots at either 3.3 volts or 5 volts. The ServerNet II switch has 12 ports, and you can cascade switches for larger networks. Each ServerNet II PCI Adapter has two ports, and you can configure the ports with redundant switches for fault tolerance. ServerNet II boasts the most complete implementation of the VI standard, supporting the Reliable Reception mode of data transfer as well as RDMA Read and RDMA Write.

The ServerNet II connections are more complicated to install than the cLAN equipment, requiring you to configure the switch and each node (computer) connected to the switch. The order in which you install the elements is important, too. If you try to install the components out of order, you'll need to uninstall all drivers from the node, remove the port configuration on the switch, and start over.

When you install ServerNet II, you first select one of the ServerNet II-attached computers to be the Primary Administrative Node—the system from which you configure the switch and the node that monitors ServerNet II operations. I selected the ProLiant DL580 SQL Server machine to be the Primary Administrative Node for ServerNet II testing, and I installed ServerNet II on this server first. All other computers connected to the ServerNet II switch are called end nodes by the Node Configuration program and PC nodes by the ServerNet II Administrative Utility. After placing the card in the same 64-bit PCI slot that I used for testing other transports and cabling the card to the switch, I powered up the SQL Server machine and followed the Device Manager prompts to search for a suitable driver for my device. I had previously copied the ServerNet II CD to a shared directory on the DL580, and I directed the Device Manager installation wizard to look in the ServerNet directory on this share. Device Manager added the ServerNet II PCI Adapter under Device Manager's SAN Adapter category.

Configuring the node is the next step, which you initiate by running the install.exe program in the root of the ServerNet II CD-ROM directory and selecting the Node Configuration option. The install.exe program let me choose to configure this computer as a Primary Administrative Node, an End Node, or a Switchless SAN Topology node. I selected Primary Administrative Node.

Next, you need to tell the installation routine which switch port this computer will be connected to, a requirement unique to ServerNet II among the products I tested. Because ServerNet II PCI Adapters have two ports, the ServerNet II driver needs to know which of the two adapter ports you plan to use. The two connectors on a ServerNet II PCI adapter are called the X-fabric port and the Y-fabric port. You can use either port, but all cards connected to a switch must use the same port. You can configure a switch to be either an X-fabric switch or a Y-fabric switch. Because I planned to configure the switch as an X-fabric switch, I selected Single X-fabric configuration.

The next step in the installation process is loading additional drivers, which you initiate by right-clicking the Compaq ServerNet II PCI Adapter Driver under the SAN Adapter heading in Device Manager. You first select Disable, then Enable. This action loads the Compaq ServerNet II Virtual Interface (VI) driver and the Compaq ServerNet II NDIS Miniport driver. After you complete this step, all the drivers are loaded and active. I completed the card's configuration by assigning it an IP address from the Properties page for the Local Area Connection for the ServerNet II adapter in Network and Dial-Up Connections.

You now must install the ServerNet II documentation and SAN Management software on all nodes. I started this process by running the setup.exe program in the SNETII directory on the ServerNet II CD-ROM, then I selected the Typical installation option, which installs ServerNet II documentation and support software on all nodes. On the Administrative Node, this option also installs the ServerNet II Administrative Utility.

In the last step of the ServerNet II installation process, you use the Administrative Utility to configure the switch. When you start the Administrative Utility, it first displays a representation of the switch and its 12 ports, numbered from 0 to 11. I right-clicked the switch to open a context menu, selected the Add option, and selected the X-fabric option. Next, I entered a switch name, in response to the utility's prompt, and told the switch which port of the ServerNet II PCI Adapters—the X-port or the Y-port—it would be connected to. In the last switch configuration step, I configured each port of the switch that I planned to use, telling the switch the name of the computer that I would attach to each port.

After I configured the ports, the ServerNet II installation on the SQL Server machine was complete, and the switch was configured for the five computers I planned to attach to it. I completed the hardware installation by installing ServerNet II PCI Adapters in the four Web servers, cabling each to its designated switch port, and installing the software as I described previously.

As with the cLAN card, you need to configure SQL Server 2000 and the SQL Server 2000 Client Connectivity components to use the ServerNet II VI Architecture connection. The SQL Server 2000 CD-ROM doesn't contain ServerNet II support software; you need to download the support software from Microsoft. For these tests, I obtained a prerelease version of the driver file, which Microsoft distributes as a self-installing ssnetset.exe program. The program installed the ssmssnet.dll file in the SQL Server BINN directory and the dbmssnet.dll file in the Winnt\System32 directory. Both .dll files were dated 8/16/2000 and were version 2000.80.194.4. After installing either SQL Server 2000 or the SQL Server 2000 Client Connectivity utilities on each computer, I simply ran the ssnetset.exe program, which installed the appropriate DLLs on each computer. On the SQL Server 2000 computer, I then used the Server Network Utility to enable the VI Architecture Net-Library and configure it to use the ServerNet II driver.

When I installed SQL Server 2000 Enterprise Edition from the release to manufacturing (RTM) CD-ROM, the VI Architecture protocol was available immediately from both the Client Network Utility and the Server Network Utility; I could select support for cLAN in both. However, support for ServerNet II is available only after you install the drivers with the ssnetset.exe program. On all five computers, I used the Client Network Utility to enable support for the VI Architecture protocol and configured each client to use the ServerNet II Net-Library for VI Architecture protocol support. On the four Web servers, I also used the Client Network Utility to create a server alias configured to use only the VI Architecture driver. I created my ODBC data source to use this server alias, forcing the Web application to use the VI Architecture Net-Library for all data access.
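
A quick way to confirm that a connection really is using the VI Net-Library, rather than silently falling back to TCP/IP, is to ask SQL Server 2000 itself: the net_library column in master..sysprocesses reports the network library that each connection arrived on. The sketch below reuses the hypothetical pyodbc setup from the cLAN example.

```python
# Sanity check (sketch): ask SQL Server 2000 which Net-Library the current
# connection arrived on.  "AtBench" is the hypothetical data source from the
# earlier example; pyodbc is an assumption, not part of the original harness.
import pyodbc

conn = pyodbc.connect("DSN=AtBench;UID=bench;PWD=bench")
cursor = conn.cursor()
cursor.execute(
    "SELECT net_library FROM master..sysprocesses WHERE spid = @@SPID")
print(cursor.fetchone()[0])   # expect the VI Architecture library, not TCP/IP
conn.close()
```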

Gigabit Ethernet


Installing and testing SysKonnect's 1000Base-SX fiber-optic Gigabit Ethernet equipment was much easier than installing and testing the other network adapters in this test. I didn't need to configure the NPI Cornerstone12g Ethernet switch that I installed for my testing, but I was able to telnet into the switch's default IP address to view the configuration options. I used SysKonnect's model SK-9843 network adapters, which are 64-bit PCI cards with duplex SC-style fiber connectors. I used card version 1.4 for this test. The cards were nearly self-installing: When I restarted the servers after installing the cards, Win2K found and installed the drivers. I had to configure the IP properties only because I chose not to accept the DHCP-assigned configuration. I increased the number of transmit and receive buffers that the driver used to the maximum number supported—200 transmit buffers and 500 receive buffers. I wanted to use a frame size larger than Ethernet's standard 1518 bytes, but the Cornerstone12g switch doesn't support Jumbo Frames. My tests ran to completion without incident on the first attempt. After the initial round of tests, I obtained and installed an updated driver from SysKonnect. Retesting with this driver produced a minor (0.13 tps) improvement in peak throughput.

Add a Card and Boost Performance


SQL Server 2000's native support for VI Architecture network adapters is exciting. I can't remember a time when simply adding a new network card so dramatically improved the processing capacity of any computer system. The efficient processing of native VI Architecture network adapters can free a significant amount of CPU processing power that SQL Server will put to work for your application.

The price of VI hardware is well within reach, too. As Table 1 shows, both of the higher-performing VI products cost less than the Gigabit Ethernet equipment I tested. To be fair, if you use copper-cable-based Gigabit Ethernet switches, such as the Intel 470T 6-port Gigabit Ethernet switch (priced at $3995), you'll have a Gigabit Ethernet network that costs less than the VI alternative.

If the throughput capacity of your SQL Server system is important to you, you'll want to add a VI network adapter to your SQL Server 2000 upgrade plan. When you plan your upgrade, be aware that Compaq supports ServerNet II only on Compaq servers. Giganet supports cLAN on most servers that run SQL Server 2000. Although both products implement the VI standard, they aren't interoperable.

Both Compaq and Giganet are investigating the problems that I encountered in my tests of ServerNet II and cLAN. The benefits of VI network cards to SQL Server 2000 systems that you use for transaction processing make the cards an automatic pick for your server configurations.
