SQL Server IO Analysis Part 1
Microsoft SQL Server provides system objects called Dynamic Management Objects (DMOs) that offer metadata insights into the performance of your SQL Server instances. One in particular, sys.dm_io_virtual_file_stats, empowers the data professional to identify the IO impact of storage, file structure, poorly designed queries, and high-impact end-user pressure points in your databases. In this first article on the topic we look at identifying the highest read and write latency in data and log files respectively.
Never Upset a Nerd
In 2010 I wrote a book on Dynamic Management Objects with fellow SQL Server MVP Louis Davidson. Over the years I've grown even more passionate about what these constructs offer the DBA and Developer alike in identifying (or refuting) performance issues. This entire series sprang from a debate with an application analyst who attributed poor application performance to that old scapegoat: "slow storage".
When I asked what he based these assumptions on, the response involved both Performance Monitor and the Performance Analysis of Logs tool, known more commonly as PAL and available on CodePlex. PAL reads Performance Monitor logs and provides performance reporting based on those logs. Both are Microsoft tools, and both base their alerts on more stringent IO performance thresholds than what Microsoft itself concedes are the more lax "real world" standards. PAL will alert on read IO greater than 20 milliseconds (ms) and on write IO greater than 5 ms, while in the wild it's widely accepted that anything under 100 ms per read and 20 ms per write is acceptable. Who wouldn't want to blame "slow IO" after seeing how PAL presents IO that is barely over its reporting thresholds?
This instance was not new to me; it had been a thorn in the side of the entire DBA team for quite some time. We had already migrated it from VNX "tier 2" storage up to our "tier 1" storage platform (an HP 9500 SAN). I also knew that the application, CA Service Desk, had a considerable number of warts, and to add to its "complexion" problem, two individuals in the organization were running some extremely inefficient queries against the database and not taking the DBAs' performance tuning feedback into consideration. The environment was ripe for query performance tuning of both internally developed and vendor-developed queries, but the Analysts were only interested in having the infrastructure teams (server, storage, DBAs) throw hardware at the problem rather than practicing sound design and coding standards.
Slowness was blamed entirely on the PAL reports flagging read and write latencies that barely exceeded the alert thresholds. In other words, in real-world terms the IO latency was fine, but since PAL highlighted these items as concerns there was no talking sense to the Analysts. I turned to my favorite tools, the Dynamic Management Objects, to throw math, facts, metadata, and experience at the issue.
Getting There From Here
This series will eventually build up to identifying IO load across multiple instances in your SQL Server environments and correlating it to IO latency, which will allow you to draw some conclusions about latency versus load as contributors to IO performance issues. That will help you target poorly tuned queries rather than waste valuable time chasing a red herring in the storage tier. We're going to build our way there slowly, though, by first looking at what goes into measuring IO load and latency at the instance level and how to use some common reporting and analytical tools to present that data to DBAs and Application Analysts alike. By the time we're done with this series I hope you'll be comfortable with the various Dynamic Management Objects we employ, with Excel, Tableau, and the Microsoft Power Tools we'll touch upon, and with the statistical functions of mean, median, and mode.
The Core Dynamic Management Function In the Solution
The heart of this entire series is the sys.dm_io_virtual_file_stats Dynamic Management Function. This function accepts two parameters:
database_id
file_id
Since we usually don't have our database_ids memorized, it's common to derive that value from the database name using the DB_ID() system function. The syntax is fairly straightforward, as are the results. Take for example the code below, which returns information about all the files associated with the master database:
SELECT *
FROM sys.dm_io_virtual_file_stats(DB_ID('master'), NULL);
The columns returned provide information identifying the file and database (database_id and file_id), the length of the sample in milliseconds (sample_ms), and the amount of IO in terms of operations performed, bytes read and written, and IO stall in ms for both reads and writes. The DMF also provides the total IO stall across reads and writes, the current size of the file, and the file_handle, a unique identifier for the file on the SQL instance. Note that sample_ms can appear as a negative value; you can't use it as a measure of uptime because the value rolls over once it hits the limit of its data type.
For the sake of this discussion I'm interested in the file identifiers, database_id and file_id, as well as the IO load columns: particularly num_of_reads, io_stall_read_ms, num_of_writes, and io_stall_write_ms. Using those columns we can calculate average IO latency per read and per write for any file in any database.
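As a quick illustration of that calculation, here is a minimal sketch (the NULLIF guards are my addition so that a file with no reads or writes returns NULL rather than raising a divide-by-zero error):
SELECT DB_NAME(FS.database_id) AS database_name
    , FS.file_id
    -- cumulative stall divided by cumulative operations = average latency
    , FS.io_stall_read_ms / NULLIF(FS.num_of_reads, 0) AS avg_read_latency_ms
    , FS.io_stall_write_ms / NULLIF(FS.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS FS;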
One thing to remember is that this information is both transitory AND cumulative. DMO metadata is not persisted between service restarts, and in the case of this DMF the counters accumulate from the moment the service starts. If you wish to benchmark IO load between two points in time, or build trending information for baselines, you need to persist this information to disk. That is exactly what I'm going to show you next.
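Before we get to persistence, here's a minimal sketch of a one-off, point-in-time delta (the five-minute window is an arbitrary assumption) that shows why the cumulative counters must be differenced to be meaningful:
-- Take a first snapshot of the cumulative counters
SELECT database_id, file_id, num_of_reads, io_stall_read_ms
INTO #io_snapshot
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

-- Wait out the sampling window (five minutes chosen arbitrarily)
WAITFOR DELAY '00:05:00';

-- Difference the counters to see only the IO issued during the window
SELECT S2.database_id
    , S2.file_id
    , S2.num_of_reads - S1.num_of_reads AS reads_in_window
    , S2.io_stall_read_ms - S1.io_stall_read_ms AS read_stall_ms_in_window
FROM sys.dm_io_virtual_file_stats(NULL, NULL) S2
    INNER JOIN #io_snapshot S1
        ON S1.database_id = S2.database_id
        AND S1.file_id = S2.file_id;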
Persisting DMO Metadata to Disk
The persisting process is extremely simple: create a table with a schema similar to the data you're collecting, add a date_stamp column, and then set up routine collection of the information:
CREATE TABLE dbo.database_file_io
(
[server] sysname NOT NULL
, database_name sysname NOT NULL
, logical_name sysname NOT NULL
, physical_name NVARCHAR(260) NOT NULL
, file_size_mb BIGINT NOT NULL
, type_desc NVARCHAR(60) NOT NULL
, num_of_reads BIGINT NOT NULL
, num_kb_read BIGINT NOT NULL
, avg_daily_reads BIGINT NOT NULL
, avg_daily_kb_read BIGINT NOT NULL
, io_stall_read_ms BIGINT NOT NULL
, avg_io_stall_read_ms BIGINT NOT NULL
, num_of_writes BIGINT NOT NULL
, num_kb_written BIGINT NOT NULL
, avg_daily_writes BIGINT NOT NULL
, avg_daily_kb_written BIGINT NOT NULL
, io_stall_write_ms BIGINT NOT NULL
, avg_io_stall_write_ms BIGINT NOT NULL
, total_io_stall BIGINT NOT NULL
, avg_io_stall_ms BIGINT NOT NULL
, pct_read DECIMAL(4,1) NOT NULL
, pct_write DECIMAL(4,1) NOT NULL
, date_stamp DATETIME NOT NULL
);
ALTER TABLE dbo.database_file_io ADD CONSTRAINT
PK_database_file_io PRIMARY KEY CLUSTERED
(
logical_name,
server,
database_name,
date_stamp
)
WITH
(
STATISTICS_NORECOMPUTE = OFF
, IGNORE_DUP_KEY = OFF
, ALLOW_ROW_LOCKS = ON
, ALLOW_PAGE_LOCKS = ON
, FILLFACTOR = 70
) ON [PRIMARY];
GO
--==========================================================
CREATE TABLE dbo.database_file_io_history
(
[server] sysname NOT NULL
, database_name sysname NOT NULL
, logical_name sysname NOT NULL
, physical_name NVARCHAR(260) NOT NULL
, file_size_mb BIGINT NOT NULL
, type_desc NVARCHAR(60) NOT NULL
, num_of_reads BIGINT NOT NULL
, num_kb_read BIGINT NOT NULL
, avg_daily_reads BIGINT NOT NULL
, avg_daily_kb_read BIGINT NOT NULL
, io_stall_read_ms BIGINT NOT NULL
, avg_io_stall_read_ms BIGINT NOT NULL
, num_of_writes BIGINT NOT NULL
, num_kb_written BIGINT NOT NULL
, avg_daily_writes BIGINT NOT NULL
, avg_daily_kb_written BIGINT NOT NULL
, io_stall_write_ms BIGINT NOT NULL
, avg_io_stall_write_ms BIGINT NOT NULL
, total_io_stall BIGINT NOT NULL
, avg_io_stall_ms BIGINT NOT NULL
, pct_read DECIMAL(4,1) NOT NULL
, pct_write DECIMAL(4,1) NOT NULL
, date_stamp DATETIME NOT NULL
);
ALTER TABLE dbo.database_file_io_history ADD CONSTRAINT
PK_database_file_io_history PRIMARY KEY CLUSTERED
(
logical_name,
server,
database_name,
date_stamp
)
WITH
(
STATISTICS_NORECOMPUTE = OFF
, IGNORE_DUP_KEY = OFF
, ALLOW_ROW_LOCKS = ON
, ALLOW_PAGE_LOCKS = ON
, FILLFACTOR = 70
) ON [PRIMARY];
GO
I've created two tables because the intention is to provide a 90-day baseline. Each day I'll move data from the base table into the history table, truncate the daily table, and delete any data from the history table that is older than 90 days. More about that later in the series.
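As a preview of that daily maintenance, a minimal sketch of the three steps just described might look like this (purely illustrative; the actual scheduling and details come later in the series):
-- Move the day's collection into the history table
INSERT INTO dbo.database_file_io_history
SELECT * FROM dbo.database_file_io;

-- Clear the daily table for the next collection
TRUNCATE TABLE dbo.database_file_io;

-- Enforce the 90-day retention window
DELETE FROM dbo.database_file_io_history
WHERE date_stamp < DATEADD(DAY, -90, GETDATE());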
You'll see that I expanded my collection a bit to include calculated columns for daily IO load values, as well as scaling the units of measurement up from bytes. I added these columns because we'll be expanding our assessment of IO throughout this series, and since we're only storing 90 days of data across all 150+ of our SQL Servers I was not concerned about space.
Where did I create these tables? You could do it locally on each SQL Server, but I'm ultimately going to show you how to collect this information for all databases on your domain for comparison across your environment. If you want to find your noisy servers and databases - the squeaky wheels, as it were - this is how to do it! I have a central database called "lifeboat" (in keeping with SQL Cruise), and it truly is a lifeboat when it comes to having a safety net for answering a variety of questions quickly.
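One simple way to feed such a central database - a sketch only, assuming a linked server named [CentralServer] pointing at the instance hosting lifeboat (your environment might use SSIS, PowerShell, or another transport instead) - is to collect locally and then push the results across:
-- Push today's local collection to the central repository
-- ([CentralServer] is a hypothetical linked server name; adjust to suit)
INSERT INTO [CentralServer].lifeboat.dbo.database_file_io
SELECT * FROM dbo.database_file_io;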
Now that we have the target table(s) it's time to provide the collection script that will collect the data from sys.dm_io_virtual_file_stats (and a few other system objects) to populate that base daily collection table:
-- tempdb is rebuilt at every service restart, so its create_date
-- approximates instance uptime in days
DECLARE @days INT;

SELECT @days = DATEDIFF(d, create_date, GETDATE())
FROM sys.databases
WHERE name = 'tempdb';

-- Guard against divide-by-zero on an instance restarted today
IF @days = 0
BEGIN
    SELECT @days = 1;
END
INSERT INTO [dbo].[database_file_io]
(
server
, database_name
, logical_name
, physical_name
, file_size_mb
, type_desc
, num_of_reads
, num_kb_read
, avg_daily_reads
, avg_daily_kb_read
, io_stall_read_ms
, avg_io_stall_read_ms
, num_of_writes
, num_kb_written
, avg_daily_writes
, avg_daily_kb_written
, io_stall_write_ms
, avg_io_stall_write_ms
, total_io_stall
, avg_io_stall_ms
, pct_read
, pct_write
, date_stamp
)
SELECT @@SERVERNAME AS [server]
, DB_NAME(FS.database_id) AS database_name
, DB.[name] AS logical_name
, DB.physical_name
, FS.size_on_disk_bytes/1024/1024 AS file_size_mb
, DB.type_desc
, FS.num_of_reads
, FS.num_of_bytes_read / 1024 AS num_kb_read -- bytes to KB
, FS.num_of_reads / @days AS avg_daily_reads
, FS.num_of_bytes_read / 1024 / @days AS avg_daily_kb_read
, FS.io_stall_read_ms
, CASE
WHEN FS.num_of_reads = 0 THEN FS.io_stall_read_ms
ELSE FS.io_stall_read_ms / FS.num_of_reads
END AS avg_io_stall_read_ms
, FS.num_of_writes
, FS.num_of_bytes_written / 1024 AS num_kb_written -- bytes to KB
, FS.num_of_writes / @days AS avg_daily_writes
, FS.num_of_bytes_written / 1024 / @days AS avg_daily_kb_written
, FS.io_stall_write_ms
, CASE
WHEN FS.num_of_writes = 0 THEN FS.io_stall_write_ms
ELSE FS.io_stall_write_ms / FS.num_of_writes
END AS avg_io_stall_write_ms
, FS.io_stall AS total_io_stall
, CASE
WHEN (FS.num_of_reads + FS.num_of_writes) = 0 THEN FS.io_stall
ELSE FS.io_stall / (FS.num_of_reads + FS.num_of_writes)
END AS avg_io_stall_ms
-- NULLIF/ISNULL guard against files that have seen no IO at all
, ISNULL(CAST(100. * FS.num_of_reads / NULLIF(FS.num_of_reads + FS.num_of_writes, 0) AS DECIMAL(4,1)), 0) AS pct_read
, ISNULL(100.0 - CAST(100. * FS.num_of_reads / NULLIF(FS.num_of_reads + FS.num_of_writes, 0) AS DECIMAL(4,1)), 0) AS pct_write
, GETDATE() AS date_stamp
FROM sys.dm_io_virtual_file_stats(NULL, NULL) FS
INNER JOIN master.sys.master_files DB
ON FS.database_id = DB.database_id
AND FS.file_id = DB.file_id
ORDER BY 3
, 1
, 2;
You'll notice I fleshed out some of the descriptive columns for the database and file by joining to the sys.master_files system view. I also calculated the number of days' worth of data we're pulling out of the DMF by using the create_date of tempdb, since tempdb is rebuilt at every service restart. Because we're ultimately going to pool this data from this SQL instance together with all the others in our environment, each with a different uptime, the only way to keep comparisons on an even footing is to normalize everything to a daily timeframe.
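If you want to sanity-check that uptime approximation, one alternative (assuming SQL Server 2008 or later, where sys.dm_os_sys_info exposes sqlserver_start_time) is:
-- Cross-check the tempdb-based uptime estimate against the reported
-- service start time
SELECT sqlserver_start_time
    , DATEDIFF(DAY, sqlserver_start_time, GETDATE()) AS days_up
FROM sys.dm_os_sys_info;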
Simple Query to Close Out Part 1
We'll start digging into analytics in the next chapter of this series. For now, let's look at answering three pressing questions:
Which five data files have the highest average read IO latency?
Which five log files have the highest average write IO latency?
Where does the database that originally led to all this work fall in the spectrum of IO latency?
Five Data Files with Highest Average Read Latency:
SELECT TOP 5 FIO.server
    , FIO.database_name
    , FIO.physical_name
    , FIO.avg_daily_reads
    , FIO.avg_io_stall_read_ms
FROM dbo.database_file_io FIO
WHERE FIO.type_desc = 'rows'
    AND FIO.database_name NOT IN ('tempdb', 'model', 'msdb', 'master', 'iDBA')
ORDER BY FIO.avg_io_stall_read_ms DESC;
It's not surprising to me that the three worst performers in terms of latency are from our SharePoint implementation. Clocking in at five seconds each, though? That is a surprise and will lead to investigations. You'll notice I included the daily read load here as well: high latency on a file with low activity doesn't concern me, but the activity level here does put these files on my radar.
Five Log Files with Highest Average Write Latency:
This query will be identical in form, but pointed at the write side of the equation and focused on transaction logs rather than the data files:
SELECT TOP 5 FIO.server
    , FIO.database_name
    , FIO.physical_name
    , FIO.avg_daily_writes
    , FIO.avg_io_stall_write_ms
FROM dbo.database_file_io FIO
WHERE FIO.type_desc = 'log'
    AND FIO.database_name NOT IN ('tempdb', 'model', 'msdb', 'master', 'iDBA')
ORDER BY FIO.avg_io_stall_write_ms DESC;
In this case it's evident why I included the daily write activity: the worst performers here have very low write volume. Though I've masked the server and database names, I know this is our legacy SharePoint server (SharePoint again!) that is on permanent life support.
My focus instead falls on record 4, the first row where the daily write load is significant enough that I'd consider investigating - particularly since my threshold for write latency is half of what these files are clocking in at.
Finally, let's look at the database that kicked off this series. For my own protection I've parameterized the database name and server (which also lets you use this script as a template without any rework on your part). I'm ranking both IO load and IO latency from worst to best, so rank 1 marks the heaviest load or the worst latency.
WITH Results AS
(
SELECT ROW_NUMBER() OVER (ORDER BY FIO.avg_io_stall_read_ms DESC) AS latency_rank
    , ROW_NUMBER() OVER (ORDER BY FIO.avg_daily_reads DESC) AS io_load_rank
    , FIO.server
    , FIO.database_name
    , FIO.physical_name
    , FIO.avg_daily_reads
    , FIO.avg_io_stall_read_ms
FROM dbo.database_file_io FIO
WHERE FIO.type_desc = 'rows'
    AND FIO.database_name NOT IN ('tempdb', 'model', 'msdb', 'master', 'iDBA')
)
SELECT Results.latency_rank
    , Results.io_load_rank
    , Results.physical_name
    , Results.avg_io_stall_read_ms
    , Results.avg_daily_reads
FROM Results
WHERE Results.server = ''
    AND Results.database_name = '';
From the standpoint of data files and read IO, these files are all within what I consider good IO latency tolerance. What's interesting is that the main data file is in the top 10 most active for daily read IO load and still averages only 21 ms per read, which is excellent. Yet the Analyst thought otherwise and still does not accept my assertion that poor queries generating high, unnecessary read volumes are the root of their issues. The write side of the equation shows quite similar results.
WITH Results AS
(
SELECT ROW_NUMBER() OVER (ORDER BY FIO.avg_io_stall_write_ms DESC) AS latency_rank
    , ROW_NUMBER() OVER (ORDER BY FIO.avg_daily_writes DESC) AS io_load_rank
    , FIO.server
    , FIO.database_name
    , FIO.physical_name
    , FIO.avg_daily_writes
    , FIO.avg_io_stall_write_ms
FROM dbo.database_file_io FIO
WHERE FIO.type_desc = 'log'
    AND FIO.database_name NOT IN ('tempdb', 'model', 'msdb', 'master', 'iDBA')
)
SELECT Results.latency_rank
    , Results.io_load_rank
    , Results.physical_name
    , Results.avg_io_stall_write_ms
    , Results.avg_daily_writes
FROM Results
WHERE Results.server = ''
    AND Results.database_name = '';
Again, the daily IO load is quite high - the seventh-highest daily write volume - yet latency sits at an average of 1 ms per write, well within even the stricter Microsoft tolerance.
Summary from Part 1
At this point we've seen how to collect IO metrics from the sys.dm_io_virtual_file_stats DMF, and I've pointed you toward centralizing that data across all your instances, whether by appending code to the process from Rodney Landrum's earlier article or by using a method of your choice. We then wrapped up this chapter by identifying the top (worst) performers, as well as proving that pesky know-it-all Analyst wrong.
What we didn't do was look at the distribution of the data, use statistical analysis to toss out the outliers (like those five-second results and the low-IO-volume instances), or look at how to calculate the mean, median, and mode of the results. That's all yet to come.