The Testbed
Dynameasure puts a controlled end-to-end stress on a SQL Server system.
December 31, 1996
I used Dynameasure 1.0 by Bluecurve to generate my test load and record the results for "Microsoft SQL Server 6.5 Scaleability," page 80. Dynameasure puts a controlled, end-to-end stress on a SQL Server system by directing the workload of a number of PC clients and recording how much work was performed within a predetermined period. Dynameasure uses Open Database Connectivity (ODBC) as the communications methodology between the PC clients and the test server. The test bed and test results are stored on a separate SQL Server system, and a central management console directs and monitors all tests. (See John Enck, "Dynameasure by Bluecurve: Born to Measure," November 1996, for more information on Dynameasure.)
Dynameasure lets you mix read, write, and mixed read/write SQL database transactions to tune the test environment to your user environment or to approximate various system behaviors and see how your client/server system will perform and scale under heavy loads. To measure CPU scalability, I used Dynameasure's single-read transaction mix (frequent, yet lightweight, transactions) to minimize disk I/O. A read/write transaction mix gave me a feel for system performance without isolating any subsystem. I used Dynameasure's 500MB test data set for each server.
I used default settings for transaction weights and test duration and changed only the think time for each user. Typically a user has a 10-second think time, but I reduced it to 5 seconds to increase the stress I could place on the server with a limited number of PC clients. Dynameasure lets you control the number of simulated users, or motors, on PC clients that execute the transactions. My tests went from 50 users to 300 users in 50-user increments. Each test run took about two hours. At the conclusion of each test run, Dynameasure's Analyzer module reports the transactions per second (TPS) and average response time (ART) rates measured from the client to the server and back. The graphs on pages 82 and 83 summarize my results. The following paragraphs describe the test equipment environment.
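To make the methodology concrete, the think-time-driven load model above can be sketched in a few lines of Python. This is a toy simulation, not Bluecurve's code: each simulated user (motor) alternates a think pause with a transaction, and TPS and ART are computed over the run. The transaction service time and duration are illustrative assumptions.

```python
def simulate_load(users, think_time_s=5.0, txn_time_s=0.05, duration_s=60.0):
    """Toy model of a Dynameasure-style run.

    Each of `users` simulated motors alternates `think_time_s` of idle
    time with one transaction taking `txn_time_s` (a fixed, assumed
    service time), for `duration_s` seconds of simulated clock.
    Returns (transactions per second, average response time).
    """
    completed = 0
    total_response = 0.0
    for _ in range(users):
        t = 0.0
        while t < duration_s:
            t += think_time_s       # user pauses between transactions
            t += txn_time_s         # transaction executes
            if t <= duration_s:     # count only transactions that finish in time
                completed += 1
                total_response += txn_time_s
    tps = completed / duration_s
    art = total_response / completed if completed else 0.0
    return tps, art

# Doubling the user count roughly doubles offered load in this model,
# which is why the real tests stepped from 50 to 300 users.
tps_50, art_50 = simulate_load(50)
tps_100, art_100 = simulate_load(100)
```

The model shows why a shorter think time matters: with a 5-second think time, each motor offers about twice the transaction rate it would at 10 seconds, so 15 physical PCs hosting 20 motors each can approximate a much larger user population.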
Network
I conducted all the tests on a 100Mbit-per-second (Mbps) Fast Ethernet network running TCP/IP. The core component in this network was a 100Mbps Compaq switching hub. Non-switching Compaq and Cogent hubs let me funnel the workstations into the main switching hub.
Workstations
I used 15 Pentium-class systems as the workhorses to generate my PC clientload. Each system hosted 20 motors. These systems included
Telos minitower systems with 120MHz Pentium CPUs, 32MB of RAM, 1GB IDE hard drives, 4X CD-ROMs, and 100 TX 3Com Fast Ethernet controllers
Compaq Deskpro XL 5100 minitower systems, all with 100MHz Pentium processors, 32MB of RAM, Netelligent 10/100 NICs, 1GB IDE drives, and 4X CD-ROMs
Two Dell Optiplex GMXT 5166s and one 5133 (166MHz and 133MHz Pentiums, respectively), each with 32MB of RAM, a 1GB drive, and a CD-ROM
An Innova Pro 5400ST from Canon with a 133MHz Pentium processor
Test Management Systems
A Compaq ProLiant 4500 with dual 166MHz Pentium CPUs with 2MB of independent Level 2 cache per CPU, with 196MB of RAM and two 4.3GB Fast and Wide SCSI-2 drives functioned as the SQL control server and housed the test results. A Micron Promagnum workstation with a 200MHz Pentium Pro CPU (256KB on-chip Level 2 cache), 64MB of RAM, a 2GB SCSI-2 hard drive, an 8X CD-ROM, and a Matrox Millennium video card was the management console. We used a Digital Prioris HX 5133DP with two 133MHz Pentium CPUs and 64MB of RAM as the domain controller.
Servers
I tested several different server systems. All these systems used the same NT settings (optimized for background network applications and a 500MB pagefile) and the same SQL settings. (For more information on optimizing settings, see "More Easy SQL Server Performance Tips," on page 88.) Also, I equipped all these systems with 384MB of system RAM, and I spread the disk I/O across multiple devices. I used
two Compaq systems (the ProLiant 4500 and the ProLiant 5000), each with 10 disk drives: The 4500 used 2.1GB Fast and Wide SCSI-2 disks, and the 5000 used 4.3GB Fast and Wide SCSI-2 disks (which are slightly faster, but this speed had no visible effect on disk I/O performance). Both systems used the Compaq SMART-2/P Array Controller, and the disks were evenly distributed across both available SCSI channels, with the data volume stripe sets spanning the channels.
a NEC ProServa with seven 2.1GB Fast and Wide disk drives and a Mylex 960 DAC P-2 RAID controller. Disk I/O was spread out on this system, too.