Log-File Growth and DBCC DBREINDEX

Microsoft’s SQL Server development team explains why a log-file backup is huge after large tables are reindexed.


I have a 30GB database that uses the Full recovery model. Whenever I use Database Consistency Checker (DBCC) DBREINDEX to periodically reindex certain large tables, I change the recovery model to Bulk_Logged, then return it to Full after the reindexing is complete. I expected this change to help me avoid huge transaction-log file growth, but the subsequent log-file backup is always enormous (around 15GB). Logically, the data in the database is the same as before the reindexing, except that the indexes are reorganized, so why is the log backup so large? And how can I avoid such significant growth?
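For reference, here is a minimal sketch of the maintenance routine described above; the database name MyDatabase, the table name dbo.BigOrders, and the backup path are hypothetical placeholders:

ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED
GO
-- Rebuild every index on the table ('' = all indexes) with a 90 percent fill factor
DBCC DBREINDEX ('dbo.BigOrders', '', 90)
GO
ALTER DATABASE MyDatabase SET RECOVERY FULL
GO
-- This is the log backup that turns out to be enormous (about 15GB)
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.bak'
GO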

Yes, the data is the same, but the indexes now sit on completely new pages. When you run DBCC DBREINDEX under the Bulk_Logged recovery model, SQL Server logs only the extent allocations (8-page units) instead of every row or page that changes. That minimal logging still protects against physical-file corruption in the event of a system failure, and it avoids the throughput penalty that fully logging every change would impose. When you back up the log, however, SQL Server must also back up the pages allocated in those extents to keep the database-backup and log-backup chain consistent. If SQL Server didn't include those pages, you wouldn't be able to switch back to the Full recovery model until you performed a complete database backup, because you must always be able to restore the database from the last full backup, plus any differential backups, plus any later transaction-log backups.
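To see why those pages must travel with the log backup, consider the restore chain a recovery would replay; this is a hedged sketch, and the file names are hypothetical:

-- Restore the last full backup, leaving the database ready for further restores
RESTORE DATABASE MyDatabase FROM DISK = 'D:\Backups\MyDatabase_full.bak' WITH NORECOVERY
GO
-- Apply the most recent differential backup, if one exists
RESTORE DATABASE MyDatabase FROM DISK = 'D:\Backups\MyDatabase_diff.bak' WITH NORECOVERY
GO
-- Apply the log backup taken after the reindex; because DBCC DBREINDEX was
-- minimally logged, this backup must carry the actual pages of the rebuilt
-- indexes, which is why it is so large
RESTORE LOG MyDatabase FROM DISK = 'D:\Backups\MyDatabase_log.bak' WITH RECOVERY
GO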

