Data Tales #2: The Case of the Exploding Table
August 13, 2015
I was recently at a customer site where the developers were very concerned about the impact of adding columns to a table. They told me that when they added a new column, their deployment code was timing out and the database was massively increasing in size. The database had grown from a relatively small size to well over 50GB during this single operation. The deployment operation involved adding the column and writing one row to a deployment history table. Because they were only writing a single new row, they were blaming SQL Server for bloating the database size when a column was added.
What puzzled me was that, as they were running SQL Server 2012 SP2, this should not have been happening. I'll show you what I mean.
Consider the following table definition:
USE master;
GO

CREATE DATABASE AlterDBTest;
GO

USE AlterDBTest;
GO

CREATE SCHEMA Membership AUTHORIZATION dbo;
GO

CREATE TABLE Membership.Companies
(
    CompanyID bigint IDENTITY(1,1)
        CONSTRAINT PK_Membership_Companies PRIMARY KEY,
    TradingName nvarchar(50) NOT NULL
        CONSTRAINT UQ_Membership_Companies_Trading_Names_Must_Be_Unique UNIQUE
);
GO
Now let's populate the table with enough rows that we would notice a "whole of data" operation if one occurred:
WITH CompanyIDs
AS
(
    SELECT TOP(2000000) ROW_NUMBER() OVER (ORDER BY ac1.object_id) AS CompanyID
    FROM sys.all_columns AS ac1
    CROSS JOIN sys.all_columns AS ac2
)
INSERT Membership.Companies (TradingName)
SELECT N'Company_' + CAST(cid.CompanyID AS nvarchar(8))
FROM CompanyIDs AS cid;
GO
Prior to SQL Server 2012, adding a NOT NULL column with a DEFAULT value to a table would lead to a whole-of-data operation, i.e. each row in the table would be rewritten to add the extra column. We can easily verify that this no longer occurs.
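One simple way to verify it is to record the table's size before the change and compare it again after the column has been added (we'll re-run the same check later). As a quick sketch, not part of the original deployment script, sp_spaceused does the job:

-- Capture the table's size before adding the column, so we can
-- compare it with the size after the ALTER TABLE further below.
EXEC sys.sp_spaceused @objname = N'Membership.Companies';
GO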
Let's start by locating the pages that some of the company rows are located on:
SELECT TOP(10) sys.fn_PhysLocFormatter(%%physloc%%), *
FROM Membership.Companies;
GO
On my system, this returned the following results:
We can see that the first row is in file 1, page 291, and slot 0. Let's take a look at the contents of page 291:
DBCC TRACEON (3604);
GO

DBCC PAGE (0,1,291,3);
GO
From the DBCC PAGE formatted output, we can locate the first row's data by noting the contents of slot 0:
Slot 0 Offset 0x60 Length 37

Record Type = PRIMARY_RECORD        Record Attributes = NULL_BITMAP VARIABLE_COLUMNS
Record Size = 37

Memory Dump @0x000000002174A060

0000000000000000:   30000c00 01000000 00000000 02000001 00250043  0................%.C
0000000000000014:   006f006d 00700061 006e0079 005f0031 00        .o.m.p.a.n.y._.1.

Slot 0 Column 1 Offset 0x4 Length 8 Length (physical) 8
CompanyID = 1

Slot 0 Column 2 Offset 0x13 Length 18 Length (physical) 18
TradingName = Company_1

Slot 0 Offset 0x0 Length 0 Length (physical) 0
KeyHashValue = (1b7fe5b8af93)
Now let's see what happens when a NOT NULL column with a DEFAULT value is added:
ALTER TABLE Membership.Companies
ADD IsFoundationMember smallint NOT NULL
    CONSTRAINT DF_Membership_Companies_IsFoundationMember DEFAULT (1);
GO
Next, we check the contents of the page that contained the first row again:
DBCC PAGE (0,1,291,3);
GO
The output is interesting:
Record Type = PRIMARY_RECORD        Record Attributes = NULL_BITMAP VARIABLE_COLUMNS
Record Size = 37

Memory Dump @0x000000002174A060

0000000000000000:   30000c00 01000000 00000000 02000001 00250043  0................%.C
0000000000000014:   006f006d 00700061 006e0079 005f0031 00        .o.m.p.a.n.y._.1.

Slot 0 Column 1 Offset 0x4 Length 8 Length (physical) 8
CompanyID = 1

Slot 0 Column 2 Offset 0x13 Length 18 Length (physical) 18
TradingName = Company_1

Slot 0 Column 3 Offset 0x0 Length 2 Length (physical) 0
IsFoundationMember = 1

Slot 0 Offset 0x0 Length 0 Length (physical) 0
KeyHashValue = (1b7fe5b8af93)
At first glance, it might appear that the row has changed, because there is now an entry for the IsFoundationMember column, but note that its physical length is zero. Also note that the overall record size is still 37 and that the KeyHashValue has not changed. SQL Server is essentially pretending that the column is there: the default value is served up from metadata when the row is read, rather than being stored in the row itself.
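If you want to see where that default actually lives, one option (not shown in the original demo) is to peek at the undocumented internals views. As a rough sketch, something like the following should show a runtime default recorded against the partition metadata; sys.system_internals_partitions and sys.system_internals_partition_columns are undocumented, so the exact column names (such as has_default and default_value) may vary between versions:

-- Undocumented internals views (sketch only): the stored default for the
-- new column should appear here rather than in the data rows themselves.
SELECT pc.partition_column_id,
       pc.has_default,
       pc.default_value
FROM sys.system_internals_partitions AS p
INNER JOIN sys.system_internals_partition_columns AS pc
    ON pc.partition_id = p.partition_id
WHERE p.object_id = OBJECT_ID(N'Membership.Companies');
GO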
In the end, though, the proof is that the table has not changed in size and the operation to add the column was near-instantaneous.
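Re-running the size check from earlier should confirm this:

-- Re-check the table's size; the reserved and data figures should be
-- essentially unchanged from before the column was added.
EXEC sys.sp_spaceused @objname = N'Membership.Companies';
GO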
That convinced me that the problem wasn't caused by adding the column. The problem had to be related to adding the single row to the deployment history table.
The next thing that I checked was the configuration of the database files:
SELECT * FROM sys.database_files;
GO
The interesting part of this output was this:
And there was the problem. When I first read this, I had to check that I really was seeing what I thought I was seeing. Somehow the team had managed to configure the database file with a filegrowth setting of 131072%:
Adding the single extra row to the deployment history table had triggered this abnormal filegrowth and caused the database size bloat, not adding the additional column to the large table.
I'd encourage you to check the filegrowth settings for your databases. While most people don't misconfigure them this badly, they are one of the most commonly misconfigured settings on SQL Server systems. Data files (mdf and ndf files) should grow by a sizeable chunk, as long as you have Instant File Initialization (IFI) enabled. Log file growth increments often need to be smaller, as log files still need to be zeroed out before use. You have to consider how much data writing you are prepared to wait for, keeping in mind that this growth will typically happen during a user transaction.
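As a starting point, a query along these lines (a sketch of mine, not part of the original scripts) lists the growth setting for every file on the instance. In sys.master_files, growth is a percentage when is_percent_growth is 1; otherwise it is a number of 8KB pages:

-- List filegrowth settings for every database file on the instance.
-- growth is a percentage when is_percent_growth = 1,
-- otherwise it is expressed in 8KB pages (converted to MB below).
SELECT DB_NAME(database_id) AS DatabaseName,
       name AS LogicalFileName,
       type_desc,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS nvarchar(20)) + N'%'
            ELSE CAST(growth * 8 / 1024 AS nvarchar(20)) + N' MB'
       END AS GrowthSetting
FROM sys.master_files
ORDER BY DatabaseName, type_desc;
GO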
In general, I consider that automatic filegrowth should be enabled but should never occur. It should be avoided by proactively managing the size of the database files.
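A hedged sketch of the kind of change involved: pre-size the data file and replace percentage growth with a fixed increment, so that autogrowth rarely needs to fire. The logical file name AlterDBTest comes from my test database above, and the sizes are purely illustrative, not values from the customer system or a general recommendation:

-- Illustrative only: pre-size the data file and set a fixed growth
-- increment instead of a percentage.
ALTER DATABASE AlterDBTest
    MODIFY FILE (NAME = N'AlterDBTest', SIZE = 10GB, FILEGROWTH = 512MB);
GO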