Creating Simple Performance Baselines with SQL Server Profiler
Use SQL Server Profiler to create a performance baseline and then measure performance metrics that you can compare against the baseline.
October 22, 2013
"A good plan violently executed now is better than a perfect plan executed next week."—George S. Patton
Baselines and subsequent comparison metrics make it easy to determine whether index modifications and code changes are truly improving performance on heavily used servers. However, many DBAs (especially reluctant DBAs) regularly fail to set up baselines before they undertake performance tuning efforts. In my experience, this oversight occurs because no perfect solution exists that can easily account for all the complex variables and differences in workload, utilization, and execution that might crop up from one collection run to the next.
Aggregate Values Are Good Enough
From a performance standpoint, aggregate values are good enough. In other words, if you trace the number of operations executed in a given amount of time and then determine the average reads, writes, CPU utilization, and duration per operation, you have a yardstick: after you make a change, you can collect the same metrics again and see whether the aspect of performance you're targeting has actually improved.
Fortunately, those metrics are trivial to obtain. Here's what I do:
Launch SQL Server Profiler. (From the Start menu, select Programs, Microsoft SQL Server, Performance Tools, SQL Server Profiler.)
Create a new trace. (Select File, New Trace, then connect to the server you'd like to use as a baseline.)
In the Trace Properties dialog box, which Figure 1 shows, give your trace a name (e.g., Baseline, Post index changes), make sure the Standard (default) trace template is selected, and select the Save to file option (any location will work).
Figure 1: Configuring Trace Properties
On the Events Selection tab, clear all the options except the RPC:Completed and SQL:BatchCompleted events, as Figure 2 shows. This means that you're watching only for completed T-SQL batches and RPCs (remote procedure calls, which are typically parameterized queries and stored procedures submitted by applications).
Figure 2: Selecting Trace Events
Click Run.
On busy servers, a flurry of queries/stored procedures will fly by in the trace window. On less busy servers you’ll be able to see every RPC/batch call that's executed on the server (i.e., on the instance you’re profiling).
Let the trace run for a while—however long you think makes sense. On heavily used servers, this can be as little as 5 to 10 minutes. Just remember that you'll need to run the same trace for roughly the same amount of time the next time you gather results. (Although, as we'll see shortly, the exact amount of time doesn't matter too much.) When you're done, stop the trace and save the captured data to a table (select File, Save As, Trace Table), as Figure 3 shows.
Figure 3: Storing Trace Data in a Table
A Brief Warning
SQL Server Profiler is an extremely powerful tool. It's gradually being replaced by Extended Events (which is good news), but it remains simple and easy to use. However, because the tool is so powerful, it's also easy to make simple mistakes with it.
Specifically, every time you enable a trace on a server, you're imposing some additional code/processing and overhead (commonly referred to as the watcher effect). Typically, you can assume that using SQL Server Profiler will add approximately 1 to 3 percent processor overhead. This should be fairly light on your disks when saving to file, because SQL Server Profiler efficiently buffers writes to disk. However, if you get carried away with the number of events you're watching/profiling, or if you grab an event that occurs multiple times (or tens or hundreds of times) per query, it's possible to create a situation in which SQL Server Profiler can quickly generate huge amounts of data.
Likewise, I recommend that you never use the Save to table option when you set up a trace. It's too easy to accidentally stream trace data, as it's captured, into a table on the very server you're profiling, which can create a very nasty feedback loop that will negatively affect performance. (Saving the completed trace to a table afterward, as described earlier, is fine.)
Crunching the Numbers
After you've collected some data, it's fairly easy to run queries against it, generate averages, and examine other metadata to either establish a baseline or compare the latest execution details against that baseline. I ran the query in Listing 1 against my trace data (stored in a table).
Listing 1: Query to Generate Averages from Trace Data

WITH Trace ([StartTime], [EndTime], [TotalMinutes], [TotalOperations], [TotalCPU], [TotalReads], [TotalWrites], [TotalDuration])
AS (
    SELECT
        CONVERT(varchar(10), MIN(StartTime), 8),
        CONVERT(varchar(10), MAX(EndTime), 8),
        DATEDIFF(n, MIN(StartTime), MAX(EndTime)),
        COUNT(RowNumber),
        SUM(CPU),
        SUM(Reads),
        SUM(Writes),
        SUM(Duration)
    FROM <TraceTableName, sysname, TraceTableName>
)
SELECT
    StartTime [Start],
    EndTime [End],
    TotalMinutes [Total Mins],
    TotalOperations [Total Ops.],
    --TotalCPU,
    --TotalReads,
    --TotalWrites,
    --TotalDuration,
    CAST((CAST(TotalOperations as decimal(18,4)) / CAST(TotalMinutes as decimal(18,4))) as decimal(18,4)) [Ops / Min.],
    CAST((CAST(ISNULL(TotalCPU,0) as decimal(18,4)) / CAST(TotalOperations as decimal(18,4))) as decimal(18,4)) [CPU / Op],
    CAST((CAST(ISNULL(TotalReads,0) as decimal(18,4)) / CAST(TotalOperations as decimal(18,4))) as decimal(18,4)) [Reads / Op],
    CAST((CAST(ISNULL(TotalWrites,0) as decimal(18,4)) / CAST(TotalOperations as decimal(18,4))) as decimal(18,4)) [Writes / Op],
    CAST((CAST(ISNULL(TotalDuration,0) as decimal(18,4)) / CAST(TotalOperations as decimal(18,4))) as decimal(18,4)) [Duration / Op]
FROM Trace;
The query in Listing 1 uses T-SQL template syntax to specify the name of the table you want to query; replace the table-name placeholder with the name of the table that holds your trace data. (In SQL Server Management Studio, pressing Ctrl+Shift+M opens the Specify Values for Template Parameters dialog box, which makes that substitution easy.)
You don't have to save SQL Server Profiler trace output to SQL Server in order to query it, because you can use fn_trace_gettable('FilePath_and_Name_here.trc') instead. However, I find that it's typically less work to save the results to a table within SQL Server. (Note that this function is deprecated—as is SQL Server Profiler.)
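If you do go the file route, here's a minimal sketch of how you could pull a saved trace file into a table for later analysis. The file path and the dbo.TraceBaseline table name are placeholders I've made up, so substitute your own:

-- Load a saved .trc file into a table so it can be queried with Listing 1.
-- The path and table name are hypothetical; adjust them for your environment.
SELECT *
INTO dbo.TraceBaseline
FROM sys.fn_trace_gettable(N'C:\Traces\Baseline.trc', DEFAULT);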
Once you’ve crunched the numbers, you’ll get output similar to that in Figure 4.
Figure 4: SQL Server Profiler Trace Output
Making Sense of the Output
The key to making your baseline work is the idea that averages are a decent enough indicator of overall performance: if your averages go up or down from one reading to the next, you can easily tell whether recent changes were good or bad from a performance standpoint. In other words, if you add indexes or make code tweaks, you should expect to see decreased averages; conversely, if new functionality or features for your app are released and the averages for reads, writes, or duration shoot up, you know that something troublesome was introduced.
The primary metric to compare from one trace to the next is the Operations/Minute value. It's a decent indicator of whether you're comparing apples to apples or apples to oranges from one collection to the next, given differences in workload characteristics, increases or decreases in activity, and so on. In other words, if the Operations/Minute values are roughly in the same ballpark, it's safe to assume the other metrics line up well enough that any serious spikes or jumps in reads, writes, CPU, or duration point to a problem, whereas any noticeable decreases indicate that your performance tuning efforts have paid off. All things being more or less equal, if you're seeing roughly the same number of executions per minute but decreased reads and duration (for example), then your operations have become more efficient.
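To put that comparison into practice, here's a rough sketch of a side-by-side query. It assumes you saved the baseline trace and a later trace to tables named dbo.TraceBaseline and dbo.TraceAfterChanges; both names are placeholders, so point the query at whatever tables you actually used:

-- Compare per-operation averages between a baseline trace and a later trace.
-- Table names are hypothetical; substitute the tables that hold your trace data.
SELECT
    'Baseline' [Trace],
    DATEDIFF(n, MIN(StartTime), MAX(EndTime)) [Total Mins],
    COUNT(RowNumber) [Total Ops.],
    CAST(COUNT(RowNumber) * 1.0 / NULLIF(DATEDIFF(n, MIN(StartTime), MAX(EndTime)), 0) as decimal(18,4)) [Ops / Min.],
    CAST(AVG(ISNULL(CPU, 0) * 1.0) as decimal(18,4)) [CPU / Op],
    CAST(AVG(ISNULL(Reads, 0) * 1.0) as decimal(18,4)) [Reads / Op],
    CAST(AVG(ISNULL(Writes, 0) * 1.0) as decimal(18,4)) [Writes / Op],
    CAST(AVG(ISNULL(Duration, 0) * 1.0) as decimal(18,4)) [Duration / Op]
FROM dbo.TraceBaseline
UNION ALL
SELECT
    'After Changes',
    DATEDIFF(n, MIN(StartTime), MAX(EndTime)),
    COUNT(RowNumber),
    CAST(COUNT(RowNumber) * 1.0 / NULLIF(DATEDIFF(n, MIN(StartTime), MAX(EndTime)), 0) as decimal(18,4)),
    CAST(AVG(ISNULL(CPU, 0) * 1.0) as decimal(18,4)),
    CAST(AVG(ISNULL(Reads, 0) * 1.0) as decimal(18,4)),
    CAST(AVG(ISNULL(Writes, 0) * 1.0) as decimal(18,4)),
    CAST(AVG(ISNULL(Duration, 0) * 1.0) as decimal(18,4))
FROM dbo.TraceAfterChanges;

If the Operations/Minute values in the two result rows are comparable, the remaining columns give you a quick read on whether your changes helped or hurt.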
The primary benefit of the approach I've outlined here is that data collection becomes quite trivial. Just remember that a baseline is useful only if you regularly compare it against subsequent measurements.