Configuration Settings in the Recent TPC-H Benchmark
Kevin describes the SP_CONFIGURE settings of the recent TPC-H benchmark published by HP and Microsoft.
June 20, 2007
One of the first places most people tend to look when considering how to boost SQL Server performance is the set of SP_CONFIGURE settings.
Yet when you examine the details of the recent TPC-H benchmark by HP and Microsoft, you'll see that surprisingly few SP_CONFIGURE settings were touched at all. The following parameters were changed from their default values:
name                     minimum      maximum     config_value  run_value
affinity mask            -2147483648  2147483647  -1            -1
affinity64 mask          -2147483648  2147483647  -1            -1
allow updates            0            1           1             1
lightweight pooling      0            1           1             1
max worker threads       128          32767       2048          2048
max server memory (MB)   16           2147483647  120000        120000
min server memory (MB)   0            2147483647  118000        118000
network packet size (B)  512          32767       32767         32767
recovery interval (min)  0            32767       32767         32767
show advanced options    0            1           1             1
query wait               -1           2147483647  2147483647    2147483647
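For reference, here's roughly how you'd replay these values on your own server. This is my own sketch based on the table above, not the actual script HP and Microsoft used, and note that some of these options (lightweight pooling in particular) only take effect after a service restart:

    -- Most of these are advanced options, so expose those first.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    EXEC sp_configure 'affinity mask', -1;               -- all CPUs
    EXEC sp_configure 'allow updates', 1;
    EXEC sp_configure 'lightweight pooling', 1;          -- needs a service restart
    EXEC sp_configure 'max worker threads', 2048;
    EXEC sp_configure 'min server memory (MB)', 118000;
    EXEC sp_configure 'max server memory (MB)', 120000;
    EXEC sp_configure 'network packet size (B)', 32767;
    EXEC sp_configure 'recovery interval (min)', 32767;
    EXEC sp_configure 'query wait (s)', 2147483647;

    -- WITH OVERRIDE is required for allow updates and for a
    -- recovery interval above 60 minutes.
    RECONFIGURE WITH OVERRIDE;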
Most of these settings seemed immediately reasonable to me. For example, it's well known that lightweight pooling should only be attempted under extremely stressful loads and that, even then, it probably won't yield a performance boost. But when lightweight pooling does help, it tends to help a lot. On the other hand, I'd experienced firsthand that tinkering with Max Worker Threads can degrade performance when you expected the change to help it. I can only imagine that it was used here because the workload of each worker thread was precisely gauged and understood before the test was even attempted. In the cases where I was doing high-end benchmarking, we didn't know the overall workload very well, although we did know the individual SQL transactions the end users would be running.
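If you're tempted to experiment with Max Worker Threads yourself, a sensible first step is simply to watch how many workers your schedulers actually consume under load. The query below against sys.dm_os_schedulers is my own illustration, not anything from the benchmark kit:

    -- How many workers are the schedulers actually using right now?
    -- scheduler_id < 255 filters out hidden/internal schedulers.
    SELECT scheduler_id,
           current_tasks_count,
           runnable_tasks_count,
           current_workers_count,
           active_workers_count,
           work_queue_count
    FROM   sys.dm_os_schedulers
    WHERE  scheduler_id < 255;

If active_workers_count never gets anywhere near the configured maximum, raising Max Worker Threads is unlikely to buy you anything.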
The 120GB of RAM doesn't surprise me, nor does specifying both the Min and Max server memory. You might not be aware of this, but under highly stressful workloads, defining the upper and lower bounds of SQL Server memory can yield a performance benefit. Evidently, SQL Server doesn't have to spend as much time figuring out how much memory it needs and whether or not to request more, and in turn paging is reduced. Paging, as you probably know, is a big performance drag, so any opportunity to reduce it is going to help in a big benchmark like this.
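One way to see this effect for yourself (my own suggestion, not part of the benchmark disclosure) is to watch the memory manager counters. When Min and Max server memory pin the buffer pool like this, Total Server Memory quickly climbs to Target Server Memory and stays there instead of fluctuating:

    -- Compare how much memory SQL Server wants vs. how much it holds.
    SELECT counter_name, cntr_value AS kb
    FROM   sys.dm_os_performance_counters
    WHERE  object_name LIKE '%Memory Manager%'
      AND  counter_name IN ('Target Server Memory (KB)',
                            'Total Server Memory (KB)');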
I’m not really sure how the network packet size adjustment would help. I’d like to hear your theories. At present, I’m guessing that by forcing a larger network packet size, SQL Server would gulp down SQL transactions in larger groups thereby maximizing the overall intake of SQL transactions to the system.
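If you want to check what packet size your own clients actually negotiated (again, my illustration, not anything from the benchmark), the connections DMV exposes it directly:

    -- Actual negotiated packet size per client connection.
    SELECT net_packet_size, COUNT(*) AS connections
    FROM   sys.dm_exec_connections
    GROUP BY net_packet_size;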
The two final settings worth discussing, Recovery Interval and Query Wait, are both set to the maximum configurable value. Why? Well, it's a well-known technique in large public benchmarks like this to defer as much IO as possible until the very end. In fact, in previous benchmarks by many database vendors, including Microsoft and Oracle, automatic checkpointing is disabled and a single checkpoint is explicitly issued as one of the final actions in the benchmark run. This keeps the disks free of checkpoint writes during the measured run and then flushes the entire cache to disk in one pass, providing the biggest possible IO gain. I believe that Recovery Interval, which directly governs how often SQL Server checkpoints, was set to its maximum allowable value for precisely this reason. Furthermore, with SQL Server taking in SQL transactions in bigger clumps (per the larger Network Packet Size setting discussed a moment ago), I theorize that the Query Wait value also had to be increased so that the huge batches of SQL transactions wouldn't time out.
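In practice, the defer-then-flush pattern boils down to something like the sketch below. The CHECKPOINT statement is standard T-SQL, though the exact sequence the benchmark sponsors used isn't something I can vouch for:

    -- During setup: push automatic checkpoints as far out as possible.
    EXEC sp_configure 'recovery interval (min)', 32767;
    RECONFIGURE WITH OVERRIDE;

    -- ... run the measured benchmark workload ...

    -- As one of the final actions: flush all dirty pages in one pass.
    CHECKPOINT;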
So what are your thoughts on the SP_CONFIGURE settings?
Cheers,
-Kevin