Maximize Storage Performance

Increase performance without increasing costs

Michael K. Campbell

March 24, 2009


Once upon a time, a client engaged me to help justify the pending purchase of a $100,000 SAN. With 3,000–4,000 concurrent users accessing over 150GB of data, the client was starting to encounter performance problems. Hardware vendors had recommended a new SAN to tame the I/O bottleneck, and I was on hand to OK the proposed solution and help with the migration, should one be needed.

However, an evaluation of the client’s workload and current environment made it painfully obvious that although more than $35,000 worth of disks, RAID controllers, and other hardware was already in place, it simply wasn't being utilized efficiently. By using only the existing hardware, we increased performance and eliminated the “need” for an expensive SAN. (In hindsight, I should have charged a commission on the money saved, instead of merely charging by the hour.) By applying the following strategies, which I used to help my client, you can boost performance and possibly save yourself significant hardware costs.

Storage Is Not Created Equal

Disk drives keep getting less expensive, but the cost of swarms of high-end hard drives and RAID controllers is still a major concern for most organizations. However, not all hardware is a good choice for SQL Server. Table 1 lists common RAID types, Table 2 lists common I/O buses used to attach storage, and Table 3 lists common disk rotational speeds, along with how well each option suits SQL Server. The tables make it obvious that not all storage is created equal, which is fine, because performance requirements vary from deployment to deployment. Accordingly, I like to approach disk optimization by identifying what I call “premium disk,” the most performant storage available on a given server. The key to identifying premium disk is that almost all SQL Server deployments have access to at least two kinds of disk. For example, in the configurations that Figure 1 shows, the premium disk on Server 1 is obviously the 220GB volume, while on Server 2 it’s the 640GB volume.

Figure 1: Sample SQL Server Configurations
Server 1
8 cores; 64GB RAM
220GB RAID 1+0 with 15,000rpm Fibre Channel–attached drives
1TB SAN (RAID 5 on 10,000rpm iSCSI-attached drives)

Server 2
2 cores; 32GB RAM
640GB RAID 5 on 7,200rpm SATA disks
800GB RAID 1 accessed via a network file share

Premium disk capabilities vary widely, but you can usually count on two things. First, fast premium disks are also small, because high-performance drives generally sacrifice capacity for speed. Second, there’s less premium disk than non-premium disk, and if the system has been around for a while, there’s less premium capacity than you want. The key to striking a balance between performance and cost lies in optimizing the performance of premium disk while off-loading as many tasks as possible to non-premium disk.

TABLE 1: Commonly Used RAID Types

RAID 0 (striping)
Pros: Best performance. Reads and writes are split evenly over all disks in the array, making them very fast.
Cons: No redundancy. Loss of a single drive results in complete data loss.
Recommendation: Not recommended for use in SQL Server environments due to the risk of data loss.

RAID 1 (mirroring)
Pros: Reliable fault tolerance because data is written to every disk in the array. Reads can be split over all the disks in the array.
Cons: Writes are slowed to the speed of a single disk because every write goes to each disk.
Recommendation: Works well for low-demand data files. Excellent for log files.

RAID 5 (striping with parity)
Pros: Excellent redundancy: a single drive can fail without jeopardizing the entire array. Decent read performance. Very affordable.
Cons: To maintain redundancy, RAID 5 requires two reads and two writes for every logical write operation, making writes very expensive.
Recommendation: Affordable solution for high-performance workloads that don’t do lots of writes. Don’t use it in systems where writes represent more than 10 percent of overall activity.

RAID 1+0 (mirroring plus striping)
Pros: Combines the strengths of RAID 0 and RAID 1 to deliver fault protection and great performance.
Cons: Requires lots of disks and high-end controllers. One of the most expensive RAID solutions.
Recommendation: Excellent for all high-performance database needs. Slight overkill for log files in some cases.

TABLE 2: Common I/O Buses

SATA
Speed: Up to 300MBps (1)
Summary: Inexpensive successor to ATA. Easy to manage and configure. Currently there are no 15,000rpm SATA disks available and only a few 10,000rpm SATA disks.

SCSI
Speed: Up to 320MBps (2)
Summary: Historically high performers. Slightly difficult to configure. More expensive than SATA but has the best support for high-speed (10,000rpm and 15,000rpm) disks. Supports multiple devices per channel.

Serial Attached SCSI (SAS)
Speed: Up to 375MBps
Summary: Successor to SCSI (despite slightly lower speeds). Easier to configure than SCSI but still provides access to high-performance disk drives. Offers higher redundancy than SATA controllers and can use high-speed SCSI disks or cheaper SATA disks if desired. Marketed to enterprise servers and widely used today.

iSCSI
Speed: Up to 125MBps (3)
Summary: Commonly used in SANs, where SCSI commands are relayed over Ethernet. The low throughput is acceptable for backups and other low-priority needs, but iSCSI is not a good fit for heavily used SQL Server resources.

Fibre Channel
Speed: Up to 425MBps (4)
Summary: Lots of configuration options; can be used to connect to both internal and external drives. Commonly used in clustering scenarios against high-speed SCSI disks for high-performance, high-availability solutions. Extremely fast, but expensive.

For comparison purposes, USB is about 60MBps and FireWire 800 is almost 100MBps.
(1) SATA 300. SATA 150 runs at half the speed of a SATA 300 bus and doesn’t support multipath I/O.
(2) With Ultra320 SCSI; other SCSI bus types also exist.
(3) Over Gigabit Ethernet.
(4) With 4Gbps Fibre Channel.

TABLE 3: Disk Rotational Speeds

5,400rpm
Rough seek time: 5+ ms
Summary: Low speed, low heat, low performance. Common in laptops.

7,200rpm
Rough seek time: 4 ms
Summary: Suitable for servers where disk activity is minimal. Common in desktops.

10,000rpm
Rough seek time: 3 ms
Summary: Capable of 125MBps throughput.

15,000rpm
Rough seek time: 2 ms
Summary: Trades capacity for the quickest access times and highest throughput.

Multiple Data Files

A key aspect of optimizing premium-disk performance is to fully harness your controllers and disks. Contention can create hotspots where I/O isn’t spread evenly across all disks, so avoiding contention is one way to make sure your storage is properly utilized. For large, heavily used databases, Microsoft recommends using between 0.25 and 1 data file per filegroup for each CPU on the server. With this approach, SQL Server can spawn multiple threads to satisfy I/O requests simultaneously for queries that it can process in parallel, instead of forcing query execution to wait on a single thread to pull data from disk. Even with expensive RAID controllers and 15,000rpm disks, if a single spindle handles your entire workload, you could be missing out on the performance SQL Server can deliver by using multiple I/O threads to satisfy large and complex queries. And even if you have plenty of high-performance disk space, disk is much slower than RAM or CPU, so it has the greatest potential to destabilize your workloads if it isn’t used efficiently.

Merely adding multiple data files to large, heavily used databases can help SQL Server achieve greater parallelism and shorten backup times, but intelligently distributing data over those files leads to the best results. For example, if you have a large table that’s frequently targeted by range queries, spreading that table across several files can allow multiple sequential reads to satisfy query needs instead of forcing execution to pull everything from a single file. The effect is more pronounced when you’re able to place those files on different disk arrays. Likewise, it’s usually a good idea to split the indexes for large, heavily used tables into a dedicated filegroup, which allows index and table data to be processed in parallel in more complicated queries that involve lots of filters and bookmark lookups.
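
As a concrete illustration, here’s a minimal T-SQL sketch of both ideas. The database name (Sales), filegroup names, file paths, and sizes are hypothetical placeholders, not recommendations; adjust them to your own drives and arrays.

-- Add a filegroup and spread it across two files, ideally on separate arrays
ALTER DATABASE Sales ADD FILEGROUP UserData;
ALTER DATABASE Sales ADD FILE
    (NAME = Sales_Data1, FILENAME = 'D:\SQLData\Sales_Data1.ndf', SIZE = 20GB),
    (NAME = Sales_Data2, FILENAME = 'E:\SQLData\Sales_Data2.ndf', SIZE = 20GB)
TO FILEGROUP UserData;

-- Give indexes on hot tables their own filegroup
ALTER DATABASE Sales ADD FILEGROUP UserIndexes;
ALTER DATABASE Sales ADD FILE
    (NAME = Sales_Idx1, FILENAME = 'F:\SQLData\Sales_Idx1.ndf', SIZE = 10GB)
TO FILEGROUP UserIndexes;

-- Rebuild an existing nonclustered index into the dedicated index filegroup
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (DROP_EXISTING = ON)
    ON UserIndexes;

From there, new tables and indexes can be created directly on the new filegroups by specifying them in the ON clause at creation time.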

Log File Placement

Unlike data files, log files don’t benefit from being spread across multiple files because of the sequential way SQL Server writes log file data. But you should place log files on dedicated disk arrays when possible because they can generate large amounts of I/O when many updates are being made. Using dedicated disk arrays helps isolate log-file write activity from the I/O activity of queries and data modifications in the rest of the database. RAID 5 arrays require two reads and two physical writes for every logical write, so you should generally avoid using RAID 5 arrays for log files. RAID 1+0 arrays are the best option, but RAID 1 arrays with fast disks can handle very heavy workloads when used exclusively for log files.
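
If a log file already lives on shared premium disk, you can relocate it to a dedicated array. Here’s a minimal sketch, assuming a hypothetical Sales database and an L: volume dedicated to logs; the database must be offline while the physical .ldf file is copied to its new location.

ALTER DATABASE Sales SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Repoint the logical log file at its new home on the dedicated array
ALTER DATABASE Sales MODIFY FILE
    (NAME = Sales_log, FILENAME = 'L:\SQLLogs\Sales_log.ldf');

-- Copy Sales_log.ldf to L:\SQLLogs\ at the operating-system level, then:
ALTER DATABASE Sales SET ONLINE;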

Mind Fragmentation and Fill Factor

On large tables and indexes that see lots of updates and deletes, fragmentation can easily result in wasted space that can hurt I/O operations as SQL Server pages table and index data into memory. Often, fragmentation can get so bad that it results in a noticeable lag in response time for end users on heavily queried tables. Many experts, including some at Microsoft, recommend regularly rebuilding and defragmenting indexes and fine-tuning the fill-factor setting using the CREATE INDEX or ALTER INDEX statement to combat excessive fragmentation and lag. (See the Learning Path for more information about the fill-factor setting.)

Fine-tuning the fill-factor setting on heavily queried and heavily written tables can be difficult. Leave too much free space on each page and you waste read operations and memory; leave too little and inserts trigger page splits that slow writes and create new fragmentation. There’s no one-size-fits-all approach, nor even a reliable rule of thumb: the best setting depends on your environment and usage patterns. The best way to approach fill factor is to routinely gather and analyze performance metrics, make small changes, monitor their impact, and readjust when necessary. This takes some effort, but if you can establish optimal fill-factor settings for key tables and use off-peak times to rebuild those tables and indexes accordingly, you can dramatically boost performance on large tables. There’s no reason you can’t rebuild indexes nightly, as long as you have some downtime available.
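
As a sketch of that monitor-and-adjust loop, the following checks fragmentation with the sys.dm_db_index_physical_stats DMV and then rebuilds a table’s indexes with an explicit fill factor during an off-peak window. The database, table, and 90 percent fill factor are hypothetical starting points, not recommendations.

-- How fragmented are the indexes on this table?
SELECT index_id, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID('Sales'), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED');

-- Rebuild all of the table's indexes, leaving 10 percent free space per page
ALTER INDEX ALL ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 90);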

Available Free Space

Although maximizing every ounce of premium disk is a worthwhile goal, make sure to keep an eye on the amount of free space available. If your disks have to work overtime to find free space on the drive, you’ll see performance degrade dramatically. The point at which this saturation occurs varies from disk to disk and depends on a wide variety of factors, including the raw performance characteristics of the drives and their associated controllers, but a good rule of thumb is to maintain at least 15 to 25 percent of free space on your drives.
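
For a quick spot check from within SQL Server, the undocumented but widely used xp_fixeddrives procedure reports the free megabytes on each local drive; treat it as a supplement to your regular monitoring tools rather than a replacement for them.

-- Returns one row per local drive letter with free space in MB
EXEC master.dbo.xp_fixeddrives;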

One great way to free up premium disk is to off-load all but the most critical databases to less expensive storage. By putting such databases on a SAN or other inexpensive storage, you can free up premium disk for mission-critical databases and operations. This is especially true for small shops whose staging and development databases commonly reside on the same hardware as production databases.

Partitioning can be a powerful tool for off-loading data, and SQL Server 2005’s partitioning features make it straightforward. Create your filegroups and add your files, putting less frequently accessed data on cheaper disk; then define a partition function and partition scheme, and SQL Server does the rest.
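
Here’s a minimal sketch that partitions a hypothetical Orders table by order date so that older, rarely queried rows land on an ArchiveData filegroup built on cheaper disk while current rows stay on premium disk. The names, boundary dates, and filegroups are all placeholders.

-- Two boundary values yield three partitions: pre-2008, 2008, and 2009 onward
CREATE PARTITION FUNCTION pf_OrderDate (datetime)
    AS RANGE RIGHT FOR VALUES ('2008-01-01', '2009-01-01');

-- Map the two older partitions to cheap disk and the current one to premium disk
CREATE PARTITION SCHEME ps_OrderDate
    AS PARTITION pf_OrderDate TO (ArchiveData, ArchiveData, CurrentData);

-- Rebuilding the clustered index on the scheme moves the table onto it
CREATE CLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    WITH (DROP_EXISTING = ON)
    ON ps_OrderDate (OrderDate);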

Intelligently Manage Backups

Backups are a double-edged sword. Ideally, you want to back up large databases to premium disk to maximize throughput. The problem is that after backups finish, the only thing they need is lots of disk, and they can gobble up premium disk in a hurry. Because most full backups run during off-peak hours, I typically recommend writing them to non-premium disk when premium space is scarce: backups then consume only non-premium capacity, and the performance overhead of the slower target doesn’t matter much outside peak hours.
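
A minimal sketch, assuming a hypothetical Sales database and an E: volume on non-premium disk:

-- Full backup written straight to non-premium disk during the off-peak window
BACKUP DATABASE Sales
    TO DISK = 'E:\Backups\Sales_Full.bak'
    WITH INIT, CHECKSUM;

The CHECKSUM option verifies page checksums as the backup is written, which is cheap insurance when the target is slower, less expensive storage.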

Another option for backups is a third-party compression backup agent. These tools are typically much cheaper than additional premium disk and provide the added benefit of decreasing backup and recovery times. Off-box backups to an iSCSI SAN or a network file share can also work well, but a good rule of thumb is to keep one or two days of complete backups, including full backups and logs, in easily accessible storage, such as locally attached disk, in case of disaster. The last thing you want in an emergency is to wait while your backups crawl over a 100Mb or even gigabit network.

Efficient Log File Management

Many SQL Server environments have only one RAID controller or one RAID array, which data and log files must share. Consequently, log files end up on premium disk and can devour lots of high-performance capacity that they don’t need. For example, if you have a 100GB database that sees about 12GB of logged activity each day, you likely want a log file of about 25GB, enough to handle the daily load and account for spikes in activity. That consumes 25GB of premium disk, even though log data is perfectly safe on lesser storage, such as a RAID 1 array. The net result is that you can waste lots of premium disk if you back up your log files only once a day, and backing up only once a day is a bad move from a recoverability standpoint anyhow.

Because log file backups and the associated truncation operation are extremely performant, using regular and frequent log-file backups can help maximize the amount of premium disk capacity available. For example, consider the typical workday schedule in Figure 2, where most updates and modifications happen during peak hours. In the case of the 100GB database mentioned earlier, implementing hourly backups of the log allows the log file to be only 4GB, a comfortable size, assuming roughly 1GB–2GB of modifications per hour during peak hours plus ample padding. Hourly backups allow completed transactions to be safely logged and backed up to non-premium disk, freeing more than 20GB of premium disk without sacrificing reliability or performance. On larger systems that are slowly growing over time, this approach, properly used, can reduce the need to purchase additional premium storage.
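
A minimal sketch of the hourly log backup, again with a hypothetical Sales database and an E: volume on non-premium disk; in practice you’d wrap this in a SQL Server Agent job scheduled every hour.

-- Backs up the log and truncates its inactive portion
BACKUP LOG Sales
    TO DISK = 'E:\Backups\Sales_Log.trn'
    WITH CHECKSUM;

-- Quick check of log size and percent used for every database on the instance
DBCC SQLPERF(LOGSPACE);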

More Performance, Less Money

Ultimately, managing disk performance can require effort, tuning, tweaking, and even educated guesses. It’s worth the effort, however: By using your existing resources creatively, you can maximize the efficiency of your expensive I/O subsystems, which in turn can boost SQL Server’s overall performance and save the money you'd otherwise spend to upgrade to a more expensive storage solution or purchase new storage.
