Testing VI Architecture
SQL Server Magazine test results show that VI Architecture network adapters increase SQL Server processing performance.
October 17, 2000
To compare SQL Server 2000’s throughput capacity with different types of network connections between the SQL Server machine and the application Web servers, I devised a test to drive the SQL Server machine to near-100-percent CPU utilization. The test application was a COM+ Internet Server API (ISAPI) implementation of Doculabs’ @Bench e-commerce benchmark specification, which Microsoft wrote. This application simulates an online bookstore, with test transactions logging in to the site, browsing the catalog, and purchasing books. The application let me choose the database size I wanted SQL Server to work with; I used a database with 1 million books in 21 categories and 50,000 predefined customers. Quest Software’s Benchmark Factory generated the load. The application ran on four load-balanced Web servers supported by one SQL Server machine; 47 computers running the Benchmark Factory Agent simulated application users.
My SQL Server machine was a Compaq ProLiant DL580 system with four Pentium III 700MHz processors (each with a 1MB Level 2 cache), a 100MHz front-side bus, and 4GB of SDRAM. This setup turned out to provide too much SQL Server processing capacity, however. I ended up running the tests with only one processor active in the ProLiant DL580 so that all tests would drive the SQL Server machine to near-100-percent utilization.
On the Web server side, my testing configuration consisted of four Compaq ProLiant DL380 dual-processor servers, each configured with one Pentium III 867MHz processor with 256KB of Level 2 cache, a 133MHz front-side bus, and 128MB of SDRAM. Each server included an embedded Compaq NC3163 Fast Ethernet 10/100 NIC. To support full communications to each of the servers, which were configured for Network Load Balancing’s (NLB’s) Unicast mode, I installed a second NIC, a 3Com 3CR990-TX-97. I used the Compaq SmartStart 4.80 CD-ROM to upgrade various drivers, including the Ethernet NIC driver.
Server Setup
I used a ProLiant DL580 computer to host SQL Server 2000 in this test. The server contained an embedded RAID controller and a Compaq Smart Array (SA) 5300 RAID controller connected to an external storage cabinet. I created two arrays on the storage cabinet—one for SQL Server data and one for SQL Server logging. After installing Windows 2000 Advanced Server to a single-disk volume attached to the embedded RAID controller, I installed SQL Server 2000 from a release to manufacturing (RTM) version of the software. SQL Server required little tuning. In Enterprise Manager, I selected the Processor tab of the server’s Properties page, then selected Boost SQL Server Priority on Windows and Use Windows NT Fibers.
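Both of those check boxes correspond to sp_configure options, so you can make the same changes from Query Analyzer instead of Enterprise Manager. Here’s a minimal T-SQL sketch; both settings are advanced options, and both take effect only after you restart the SQL Server service:

-- Expose the advanced configuration options first.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Boost SQL Server Priority on Windows
EXEC sp_configure 'priority boost', 1
-- Use Windows NT Fibers
EXEC sp_configure 'lightweight pooling', 1
RECONFIGURE
GO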
I used four ProLiant DL380 load-balanced Web servers, each running Win2K AS with Microsoft IIS 5.0. I performed only the basic tuning common for Web servers. I enabled NLB on the embedded 10/100 Ethernet adapter that the Benchmark Factory Agent computers used to access the test application. I boosted this NIC’s driver buffers to their maximums by selecting Local Area Connection Properties, Configure, Advanced; on the Advanced tab, I set Coalesce Buffers to 32, Receive Buffers to 1024, and Transmit Control Blocks to 64. The 3Com NIC with driver version 1.0.19 supports several wake-on-LAN options but doesn’t have an option to configure NIC buffers. Finally, I double-clicked the My Computer icon, selected Properties, Advanced, Performance Options, and under Application Response selected Optimize Performance for Background Services.
Because the test goal was to let the Web application generate as much workload as possible on the SQL Server machine, I tuned IIS to free its resources for the Doculabs @Bench application. Using the Internet Services Manager, I disabled Logging and set Performance Tuning to More than 100,000 hits per day. I changed several options behind the Home Directory tab’s Configuration button. Because Microsoft wrote this version of the benchmark Web site in Visual C++ (VC++) as a COM application, none of the application mappings were relevant, so I removed those mappings but retained the Cache ISAPI Applications setting. Because Microsoft also wrote this version of the application to maintain user session state in SQL Server tables, relieving IIS of this burden, I cleared the Maintain Session State check box on the Application Options tab. I retained the Enabled setting for HTML output buffering. Also on the Home Directory tab, I configured the application to run in the IIS process by setting the Application Protection option to Low. Because the home page for this site was named login.htm, I added that name to the top of the list on the Documents tab.
Because Microsoft wrote the Doculabs @Bench application to use a SQL Server 2000 database, each Web server required the SQL Server 2000 client utilities, which I installed by choosing the Client Tools Only option during a standard SQL Server 2000 installation. This option also installs the Microsoft Data Access Components (MDAC) 2.6 components required to connect to a SQL Server 2000 instance. When testing either of the VI adapters, I used the Client Network Utility to add VI Architecture to the list of enabled protocols and configured each server to use the appropriate driver (Compaq’s ServerNet II or Giganet’s cLAN) for VI Architecture support within SQL Server 2000.
After configuring each Web server to use the proper VI Architecture driver, I created a server alias for the SQL Server machine, forcing any programs that accessed the server through the alias to use the more efficient VI Architecture protocol. Last, I configured the Web servers’ access to the SQL Server database by defining an ODBC Data Source Name (DSN) that referenced the server alias and set the options the Web site application required.
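A quick way to confirm that connections made through an alias really are using VI Architecture, rather than silently falling back to TCP/IP, is to open a connection through the alias (for example, osql -S <alias> -E, where <alias> is whatever name you defined) and query the net_library column of the sysprocesses system table. A sketch of the check:

-- Report which Net-Library the current connection is using.
SELECT spid, program_name, net_library
FROM master..sysprocesses
WHERE spid = @@SPID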
Next, I installed the Doculabs @Bench Web site on each of the Web servers, copying the Web application files to each server. I defined a COM+ library application on each Web server, creating an empty application by using Component Services and dragging two DLLs into the application’s Components folder. A quick test with Internet Explorer (IE) showed that the Web site was working properly on each server.
I configured each of the four Web servers to be a member of a Win2K NLB cluster. I selected the NC3163 10/100 Fast Ethernet NIC to support simulated user Web site traffic, and I configured NLB in unicast mode with no affinity. After I added the cluster’s name and IP address to the DNS tables, the Web servers were ready for testing.
Testing with Benchmark Factory
I used Quest Software’s Benchmark Factory with its implementation of the Doculabs @Bench e-commerce benchmark; the test comprises six transactions that simulate various usage scenarios against an e-commerce Web site. Each transaction performs multiple operations on the Web site, such as logging in, browsing inventory, and purchasing books. I configured each virtual user in the test to wait half a second (500ms) between transactions, and I ran test iterations simulating from 10 to 150 virtual users in increments of 10. I used 47 computer systems running the Benchmark Factory Agent to simulate these virtual users, so each Agent computer simulated at most four virtual users. Benchmark Factory collected Win2K Performance Monitor counters during each iteration, which let me monitor SQL Server and Web server performance characteristics. Before each iteration, I restored the SQL Server database to its initial configuration, rebooted the SQL Server machine and the four Web servers, and ran a SQL query to preload the test database into the SQL Server cache. These steps ensured that each test iteration ran under identical conditions.
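The preload query’s job is simply to touch every data page of the benchmark tables so that they’re resident in SQL Server’s buffer cache before the run starts. A minimal sketch of the idea follows; the table names are hypothetical placeholders rather than the actual @Bench schema names, and the INDEX(0) hint forces a full scan of each base table so that data pages, not just index pages, end up in cache:

-- Warm the SQL Server buffer cache by scanning each benchmark table.
-- Table names are hypothetical placeholders for the @Bench schema.
SELECT COUNT(*) FROM books WITH (INDEX(0))
SELECT COUNT(*) FROM categories WITH (INDEX(0))
SELECT COUNT(*) FROM customers WITH (INDEX(0))
GO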
I ran the benchmark tests for each network transport in turn, being careful to completely remove all support for one NIC before installing the next. For Gigabit Ethernet, I uninstalled the drivers from the Network and Dial-Up Connections page. For cLAN, I followed Giganet’s instructions for removing the driver and then used Device Manager to manually remove the remaining devices that the cLAN installation routine had installed. Compaq supplied a program to remove ServerNet II from the system.
I encountered problems while testing the VI-based products, and in one case I implemented a workaround so I could complete the testing. For ServerNet II, I needed to reconfigure the Web servers for all tests by changing the NLB affinity setting from None to Single. In the cLAN test, I needed to limit the load level: I was unable to successfully run the test for more than 90 virtual users. Both Compaq and Giganet are working on these problems. The iterations that did run successfully showed a significant performance benefit for the VI Architecture cards.
Network Traffic Profile
Because VI Architecture’s benefit to a SQL Server system depends heavily on the characteristics of the network traffic that the application workload generates, I used AG Group’s EtherPeek to monitor network traffic during a rerun of the peak-throughput workload when I tested the SysKonnect and NPI Gigabit Ethernet network installations. Graph A displays frame size versus number of frames. Because standard Ethernet frames top out at 1518 bytes (a 1500-byte Maximum Transmission Unit, or MTU, plus 18 bytes of framing), Ethernet connections send large SQL Server result sets as multiple, consecutive 1518-byte frames. Because network transmissions with the SQL Server 2000 VI Architecture implementation don’t go through the network protocol stack, I couldn’t directly monitor VI Architecture traffic. However, the VI Architecture Network Library (Net-Library) that SQL Server used combined what would have been many of Gigabit Ethernet’s 1518-byte frames into larger transfers on the VI network cards, as would also have been the case had I been able to test Gigabit Ethernet with the link configured for Jumbo Frames, which allow frame sizes in excess of 4KB. Given the high number of maximum-sized frames in the test, Gigabit Ethernet likely would have turned in better performance had I used a switch that supported Jumbo Frames. But using a different switch probably wouldn’t have changed the test’s basic conclusion: VI Architecture made the SQL Server 2000 system significantly more efficient for the workload I tested.
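To put those frame counts in perspective, consider the arithmetic for a standard Ethernet link: each 1518-byte frame carries at most 1460 bytes of TCP payload once you subtract 18 bytes of Ethernet framing and 20 bytes apiece for the IP and TCP headers. A 100KB result set (an illustrative figure, not one measured in this test) therefore arrives as roughly 70 back-to-back maximum-sized frames, and the receiving server’s protocol stack must process every one of them individually. Bypassing that per-frame protocol processing is precisely where VI Architecture earns its efficiency advantage.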