Identifying Transaction Log Backup Gaps

Transaction log backups are critical for point-in-time recovery. If your databases use the full recovery model, you must also back up their transaction logs; it's the only way to keep your log files healthy and lean.

Tim Ford, Owner

April 17, 2015


Transaction log backups are critical for point-in-time recovery.  If your databases use the full recovery model, you must also back up their transaction logs, because log backups are the only way to keep your log files healthy and lean through regular log truncation.  It's sad to say, but many environments I've been called in to support over the years miss that critical step toward good recovery.

Metadata is Metacool

I like writing metadata queries - I really like writing metadata queries that answer multiple questions at the same time!  When I first take over custody of an environment, or start working with a client to assess their SQL Servers, I use the following script to discern a few things:

  • Are there any databases (in either Bulk-Logged or Full recovery) which are missing transaction log backups of any kind?

  • Are there databases whose log backup cadence may be too long for the amount of transaction log activity?

  • Are there opportunities to tune the auto-grow settings for the database log files?

  • Is there any ability to reclaim space on the transaction log volume(s)?

Let's look at the query and some sample output so I can explain how these questions are answered.

The Code

The script makes use of just a few system catalog views from the master and msdb databases:

  • master.sys.databases (for database columns pertaining to recovery model and database name)

  • msdb.dbo.backupset (for backup history metadata)

  • master.sys.master_files (for all file characteristics such as size, growth settings and file type)

 

--=================================
/* IDENTIFY LOG BACKUP GAPS
   by Tim Ford, SQL Cruise
   Free to use with attribution
   (Use at your own risk.) */
--=================================
WITH log_backups AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY database_name
               ORDER BY database_name ASC, backup_finish_date DESC) AS [Row Number]
        , database_name
        , backup_set_id
        , backup_finish_date
    FROM msdb.dbo.[backupset]
    WHERE [type] = 'L'
)
SELECT @@SERVERNAME AS server_name
    , SD.name AS database_name
    , SD.recovery_model_desc
    , LB.backup_finish_date AS last_log_backup
    , MFR.data_size_mb
    , MFL.log_size_mb
    , MFL.is_percent_growth
    , MFL.growth
    /* Multiply before dividing so the integer MB values
       don't truncate the ratio to 0 or 1: */
    , (MFL.log_size_mb * 1.0 / MFR.data_size_mb) AS [log::data]
FROM sys.databases SD
INNER JOIN
(
    SELECT database_id
        , SUM(size) * 8 / 1024 AS data_size_mb
    FROM sys.master_files
    WHERE type_desc = 'ROWS'
    GROUP BY database_id
) AS MFR ON SD.database_id = MFR.database_id
INNER JOIN
(
    SELECT database_id
        , SUM(size) * 8 / 1024 AS log_size_mb
        , is_percent_growth
        , CASE is_percent_growth
              WHEN 0 THEN CAST(growth * 8 / 1024 AS VARCHAR(15)) + 'mb'
              ELSE CAST(growth AS VARCHAR(15)) + '%'
          END AS growth
    FROM sys.master_files
    WHERE type_desc = 'LOG'
    GROUP BY database_id
        , is_percent_growth
        , growth
) AS MFL ON SD.database_id = MFL.database_id
LEFT JOIN log_backups LB ON SD.name = LB.database_name
WHERE SD.name NOT IN ('tempdb', 'ReportServerTempDB')
    AND SD.recovery_model_desc != 'SIMPLE'
    AND (LB.[Row Number] = 1 OR LB.[Row Number] IS NULL)
ORDER BY SD.name;

 

Figure 1 shows you what the output looks like against a freshly-inherited SQL instance (names changed to protect the innocent DBA):

 

What Story Does the Code Tell?

Let's go back and address the questions I said get answered by this single query:

Are there any databases (in either Bulk-Logged or Full recovery) which are missing transaction log backups of any kind?

Looking at the last_log_backup column answers that question. (Unless you're doing some ridiculous purging of your backup history.  Personally, I keep my backup history and job history pruned to 5 weeks so I can do comparisons, if need be, on monthly-run jobs without accruing so many records in msdb that that poorly-indexed system database performs even worse than it already does.)
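That five-week retention can be enforced with the stock msdb procedures; here's a minimal sketch (the cutoff and scheduling are up to you - I'd run this from a weekly Agent job):

-- Prune backup history and job history older than 5 weeks.
DECLARE @cutoff DATETIME = DATEADD(WEEK, -5, GETDATE());

-- Removes rows from backupset, backupmediaset, etc. older than the cutoff.
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;

-- Agent job history is pruned separately; omitting @job_name covers all jobs.
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;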

Are there databases whose log backup cadence may be too long for the amount of transaction log activity?

This is what the data_size_mb, log_size_mb and log::data columns are for.  Use these to get an idea as to whether you need to increase the frequency of log backups to compensate for high levels of transaction log activity (or look into the possibility that at some point in the past an event "blew out" the log file).

Are there opportunities to tune the auto-grow settings for the database log files?

The keys for this assessment are the is_percent_growth and growth columns.  Fundamentally, you want to set your log files to grow at a consistent size that yields an optimal number of VLFs (virtual log files) inside the log file, ensuring the best-possible backup behavior and health for the transaction log.  I point you to an excellent post by Kimberly Tripp on the subject of VLFs.  Anything I'd write here would just reiterate what she's written so well.  You also want to minimize the instances of transaction log auto-growth.  It should be an insurance policy, not a crutch.  Pre-size those transaction logs using a fixed size, not a percentage value.  What I tend to do when I see situations like the ORMReporting database example above is to:

  1. Cut a transaction log backup

  2. Shrink the file down to the smallest size possible.  I shoot for the rarely-attainable 1MB.

  3. Grow the log file out in increments that meet the target for VLF size and count that best fits by backup frequency and transactional load.
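In T-SQL those three steps look roughly like this.  The backup path and the logical file name ORMReporting_log are placeholders (pull the real logical name from sys.master_files), and the final sizes should match whatever VLF target fits your backup frequency and transactional load:

-- 1. Cut a transaction log backup so the log can be truncated and shrunk.
BACKUP LOG ORMReporting TO DISK = N'X:\Backup\ORMReporting_log.trn';

USE ORMReporting;

-- 2. Shrink the log file as small as it will go (1MB is the ideal target).
DBCC SHRINKFILE (ORMReporting_log, 1);

-- 3. Regrow in fixed increments to control VLF size and count,
--    e.g. stepping out to 8GB rather than one giant growth.
ALTER DATABASE ORMReporting MODIFY FILE (NAME = ORMReporting_log, SIZE = 4096MB);
ALTER DATABASE ORMReporting MODIFY FILE (NAME = ORMReporting_log, SIZE = 8192MB);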

Is there any ability to reclaim space on the transaction log volume(s)?

When I see the log::data column heavily weighted towards the log side of that ratio I see room to reclaim space AND make the transaction log healthier.  Again, I follow the same 1-2-3 process as indicated above.
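To gauge whether a given log actually needs that treatment, count its VLFs first.  On SQL Server 2016 SP2 and later the sys.dm_db_log_info function makes that a one-liner (on older versions, DBCC LOGINFO returns one row per VLF instead):

-- One row is returned per VLF; a count in the hundreds or more
-- usually means the log grew out in many small increments.
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID(N'ORMReporting'));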

 

About the Author

Tim Ford

Owner, SQL Cruise

http://thesqlagentman.com/

