Sizing Your Transaction Log

Paul Randal

March 29, 2010

Q: I’ve read a lot of conflicting advice on the Internet regarding how large a database’s transaction log should be, ranging from very small to half the size of the data. It seems that whichever size I pick, it’s wrong and the transaction log grows. How can I more accurately determine the correct size to use?

A: This is a common question I’m asked, and the simple answer is that there’s no right answer! However, there are plenty of wrong answers—including that a transaction log should be a certain percentage of the size of the data in a database. There’s no justification for such advice, and I urge you not to follow it.

There are quite a few factors to consider when deciding how large a transaction log should be, but sizing it is really a balancing act: how quickly log records are generated versus how quickly they can be discarded once all the various operations that might need them have finished with them. The trick is to determine what factors could affect your database and cause transaction log growth, no matter what size you initially make it.

It’s relatively easy to figure out the volume of transaction log records generated by an average daily workload. To avoid log growth, you then need to make sure that nothing is causing SQL Server to still require those log records, because records that are still required can’t be discarded and their space can’t be reused. Here are some common reasons the records could still be required (with a query after the list that shows how to check):
•   The database is in the FULL or BULK_LOGGED recovery model but transaction log backups aren’t being performed, so the transaction log records can’t be discarded until they have been backed up.
•   Database mirroring is enabled and there’s a queue of transaction log records that haven’t yet been sent to the mirror database.
•   Transactional replication is enabled and the Log Reader Agent job hasn’t processed all the transaction log records for the database.
•   There’s a long-running transaction that’s preventing transaction log records from being discarded.
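
A quick way to see which of these reasons is holding up log truncation for a database is to check the log_reuse_wait_desc column in sys.databases:

-- Show why each database’s transaction log can’t currently be truncated.
-- Values such as LOG_BACKUP, DATABASE_MIRRORING, REPLICATION, and
-- ACTIVE_TRANSACTION map directly to the reasons in the list above.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;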

The transaction log needs to be managed correctly to ensure that it doesn’t grow out of control, even if you’ve sized it appropriately. I discuss this in much more depth in my blog post “Importance of proper transaction log size management” at www.sqlskills.com/BLOGS/PAUL/post/Importance-of-proper-transaction-log-size-management.aspx.
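
Part of that management is simply watching how full the log actually gets between backups. A quick way to check is DBCC SQLPERF(LOGSPACE):

-- Report each database’s log file size (in MB) and the percentage
-- of the log that’s currently in use.
DBCC SQLPERF (LOGSPACE);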

You also need to consider unusual transactions that make very large changes to the database and generate far more transaction log records than the regular day-to-day workload. The most common culprit is an index rebuild, which runs as a single transaction, so the transaction log will potentially grow to accommodate all the log records the rebuild generates before any of that space can be reused.
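
If you want to measure how much log a particular rebuild actually generates, one approach, sketched below for a test system, is to run an offline rebuild inside an explicit transaction and check the log bytes it has used before committing. The table and index names here are hypothetical:

-- dbo.BigTable and IX_BigTable are hypothetical names; run on a test system.
BEGIN TRANSACTION;

ALTER INDEX IX_BigTable ON dbo.BigTable REBUILD;

-- Log bytes written so far by this session’s transaction in this database
SELECT dt.database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions AS dt
JOIN sys.dm_tran_session_transactions AS st
    ON dt.transaction_id = st.transaction_id
WHERE st.session_id = @@SPID
    AND dt.database_id = DB_ID();

COMMIT TRANSACTION;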

Here’s an example scenario. Say the regular workload on the PaulsDB database generates 12GB of transaction log records every day. I might choose to perform a transaction log backup at hourly intervals and assume that I can therefore safely set the transaction log size to 0.5GB (12GB spread evenly over 24 hourly backups). However, I need to consider whether that 12GB really is generated uniformly over 24 hours, or whether there are “hot spots.” In my fictional example, I discover that from 9 a.m. to 10 a.m., 4GB of transaction log records are generated by a bulk load into PaulsDB, with the rest of the log being generated uniformly. In that case, my hourly log backup won’t be enough to contain the transaction log at 0.5GB. I can either size the transaction log at 4GB to avoid autogrowth, or take more frequent log backups during that hour and keep the transaction log smaller.
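
One way to find hot spots like this, assuming log backups are already being taken, is to look at how large each log backup is by hour of day, using the backup history SQL Server keeps in msdb:

-- Average log backup size per hour of day for PaulsDB over the last week;
-- a spike (such as the 9 a.m. bulk load) reveals a log-generation hot spot.
SELECT DATEPART(HOUR, backup_start_date) AS backup_hour,
       AVG(backup_size) / 1048576.0 AS avg_backup_size_mb
FROM msdb.dbo.backupset
WHERE database_name = N'PaulsDB'
      AND type = 'L' -- log backups only
      AND backup_start_date > DATEADD(DAY, -7, GETDATE())
GROUP BY DATEPART(HOUR, backup_start_date)
ORDER BY backup_hour;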

In addition, I need to consider other unusual transactions. It turns out there’s a 7GB clustered index in PaulsDB that’s rebuilt once a week as part of regular index maintenance. The BULK_LOGGED recovery model can’t be used to reduce transaction log generation, because user transactions occur 24 × 7 and switching to BULK_LOGGED risks data loss if a disaster occurs while in that recovery model (a tail-of-the-log backup wouldn’t be possible if a data file were damaged). So the transaction log has to be able to accommodate the single-transaction 7GB index rebuild; I either have to size the transaction log for PaulsDB at 7GB or alter the regular index maintenance that’s performed.
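
Because a fully logged rebuild generates roughly as much transaction log as the size of the index being rebuilt, you can estimate the requirement ahead of time by checking the index’s size. Again, dbo.BigTable is a hypothetical name:

-- Approximate size in GB of a table’s clustered index (index_id = 1).
SELECT SUM(used_page_count) * 8 / 1048576.0 AS clustered_index_gb
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID(N'dbo.BigTable')
      AND index_id = 1;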

As you can see, it’s not a simple process to determine the size of a transaction log, but it’s not an intractable problem either, once you understand the various factors involved in making the decision.
