Can Container Virtualization Unlock the Promise of Stacked SQL Server Instances?
In spite of technical and operational challenges, stacking SQL Server instances is something we should be driving toward, because the economic benefits are significant.
November 13, 2014
In my last post, I talked about the huge economic benefits of stacking SQL Server instances: stacking is the secret ingredient to saving potentially millions of dollars on your SQL Server deployments. In this post I talk about three reasons why users seldom stack instances, and I wonder whether all the talk these days about containerization (such as the Docker approach) might eventually lead to a solution even in the Windows-based world of SQL Server.
Reason #1: Too Hard to Move
Moving an instance from one host to another is complex and takes a long time, roughly 8 hours. Here's a timeline that outlines the general steps to migrate an instance with 200GB of databases to a new server, starting at 21:00 (9:00 p.m.):
21:00 - Document SQL parameters and directories on Old Server
21:00 - Make a current backup of all databases
23:15 - Shut down the database on Old Server
23:20 - Copy the SQL Server folder to New Server
23:22 - Export the SQL Server registry keys
23:25 - For each user database: detach the database from the source instance, copy the database files to the destination instance's disk, and attach the database to the destination instance (see the sketch after this timeline)
00:25 - Move the msdb database to New Server
00:45 - Move the master database to New Server
01:05 - Move the model database to New Server
01:25 - Copy the registry keys to New Server
01:30 - Backup
01:45 - Install SQL Server on New Server (parameters and directory structures must be the same)
03:15 - Stop SQL Server on New Server
03:20 - Export the SQL Server registry keys (backup)
03:25 - Copy the SQL Server folder on New Server (backup)
03:35 - Copy the SQL data from Old Server to New Server over the network (including the entire instance folder and all databases)
04:15 - Shut down and power off Old Server
04:20 - Import the SQL Server registry keys on New Server
04:25 - Change New Server's IP address to Old Server's IP address
04:30 - Change New Server's name to Old Server's name
04:35 - Reboot New Server
04:45 - Start SQL Server on New Server
05:00 - Test SQL Server and connectivity to confirm a successful move
05:10 - Complete
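To make the 23:25 step concrete, here's a minimal sketch of what the per-database detach/copy/attach loop might look like if it were scripted in Python. Everything in it is hypothetical: the server names, database names, file paths, and the use of the pyodbc driver are assumptions for illustration, not the actual procedure.

```python
# Hypothetical sketch of the 23:25 step: detach each user database on the old
# server, copy its files across, and attach it on the new server.
# Assumes pyodbc, sysadmin rights on both instances, and admin shares for the copy.
import shutil
import pyodbc

DATABASES = {
    # database name -> (data file, log file) as seen from the machine running
    # this script; all names and paths are hypothetical
    "SalesDB": (r"\\OLDSRV\E$\SQLData\SalesDB.mdf", r"\\OLDSRV\F$\SQLLogs\SalesDB_log.ldf"),
}
NEW_DATA_SHARE = r"\\NEWSRV\E$\SQLData"
NEW_LOG_SHARE = r"\\NEWSRV\F$\SQLLogs"

old = pyodbc.connect("DRIVER={SQL Server};SERVER=OLDSRV;Trusted_Connection=yes", autocommit=True)
new = pyodbc.connect("DRIVER={SQL Server};SERVER=NEWSRV;Trusted_Connection=yes", autocommit=True)

for db, (mdf, ldf) in DATABASES.items():
    # 1. Detach the database from the source instance (drop open connections first).
    old.execute(f"ALTER DATABASE [{db}] SET SINGLE_USER WITH ROLLBACK IMMEDIATE")
    old.execute(f"EXEC sp_detach_db N'{db}'")

    # 2. Copy the data and log files to the destination server's disks.
    new_mdf = NEW_DATA_SHARE + "\\" + mdf.rsplit("\\", 1)[-1]
    new_ldf = NEW_LOG_SHARE + "\\" + ldf.rsplit("\\", 1)[-1]
    shutil.copy2(mdf, new_mdf)
    shutil.copy2(ldf, new_ldf)

    # 3. Attach the database on the destination instance, using paths local to NEWSRV.
    local_mdf = new_mdf.replace(NEW_DATA_SHARE, r"E:\SQLData")
    local_ldf = new_ldf.replace(NEW_LOG_SHARE, r"F:\SQLLogs")
    new.execute(
        f"CREATE DATABASE [{db}] ON (FILENAME = '{local_mdf}'), (FILENAME = '{local_ldf}') FOR ATTACH"
    )
```

Even scripted, this only covers one of the two dozen steps above, which is exactly the problem.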
Even if you're able to cut this time in half, this lack of mobility makes stacking instances impractical for production environments. Imagine if you've stacked five mission-critical instances and then, under a new workload condition, two of the instances start to negatively impact each other's performance. In the short term the admin team will need to bring one or both of the instances down to see if that solves the problem. All of this will kill customer productivity. Ideally, the admin team could quickly move the two troubled instances to another server in a way that would minimize the impact on performance, but they would need to be able to do a quick stop and restart of the instances on a different server while preserving the instances' application connection strings. That's no easy feat, but if the connection string could be held within a virtualized container, that would go a long way toward allowing this capability.
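To illustrate the connection-string point, here's a minimal sketch, assuming pyodbc and a hypothetical DNS alias (sql-sales.corp.local): the application connects through the alias rather than a physical host name, so repointing the alias (or a container's published endpoint) is all it would take to "move" the instance from the application's perspective.

```python
# Minimal sketch: the application resolves a stable alias instead of a physical
# host, so the instance can be restarted on a different server without touching
# the app. The alias and database name are hypothetical.
import pyodbc

CONNECTION_STRING = (
    "DRIVER={SQL Server};"
    "SERVER=sql-sales.corp.local;"   # stable alias, not OLDSRV or NEWSRV
    "DATABASE=SalesDB;"
    "Trusted_Connection=yes;"
)

def get_connection():
    # The app never learns which physical host is behind the alias; moving the
    # stacked instance only requires repointing the alias and restarting the
    # instance on the new host.
    return pyodbc.connect(CONNECTION_STRING, timeout=5)
```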
Reason #2: High Availability is Harder
Developing and implementing a high availability (HA) solution that provides consistent performance for a single mission-critical SQL Server instance is a challenge. Adding stacked instances to the solution introduces too much operational risk. If a single instance fails, only one system is impacted; an HA failure for a group of stacked instances could have a catastrophic impact on the operational integrity of the business. It's better to isolate instances in their own two-node clusters to minimize the cost of downtime. If you wanted to stack instances as part of an HA solution, you would need a way to set policies that "fan out" instances to target servers that can still meet service level agreement (SLA) objectives. Could a container hold those kinds of policies? Maybe.
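As a thought experiment, here's a minimal sketch of what such a "fan-out" placement policy could look like. The host inventory, headroom numbers, and SLA thresholds are all hypothetical; the point is only that a policy like this is small enough to travel with a container.

```python
# Hypothetical sketch of a "fan-out" placement policy: when a host fails, its
# stacked instances are spread across surviving hosts that still have enough
# CPU and memory headroom to keep each instance within its SLA target.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_free: float      # fraction of CPU still available (0.0 - 1.0)
    mem_free_gb: float

@dataclass
class Instance:
    name: str
    cpu_need: float
    mem_need_gb: float

def fan_out(instances: list[Instance], hosts: list[Host]) -> dict[str, str]:
    """Assign each instance to the host with the most headroom that can hold it."""
    placement = {}
    for inst in sorted(instances, key=lambda i: i.cpu_need, reverse=True):
        candidates = [h for h in hosts
                      if h.cpu_free >= inst.cpu_need and h.mem_free_gb >= inst.mem_need_gb]
        if not candidates:
            raise RuntimeError(f"No host can take {inst.name} within SLA headroom")
        target = max(candidates, key=lambda h: h.cpu_free)
        target.cpu_free -= inst.cpu_need
        target.mem_free_gb -= inst.mem_need_gb
        placement[inst.name] = target.name
    return placement

# Example: two instances from a failed host, spread across two survivors.
print(fan_out(
    [Instance("ERP", 0.30, 64), Instance("CRM", 0.20, 32)],
    [Host("SQLHOST2", 0.50, 128), Host("SQLHOST3", 0.40, 96)],
))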
Reason #3: Patch Management is a Nightmare
Patch management of separate instances isolated on separate hosts is administratively intensive but straightforward. Each instance's patch level can be managed to meet the specific requirements of the underlying system without regard to any other instance. In a stacked-instance environment, greater care must be taken to make sure that patching one instance does not have unintended consequences for any of the other stacked instances. In a single-instance environment, an unstable patch revision is isolated to a single system; in a stacked-instance environment, an unstable patch revision on a single instance could compromise the performance of multiple systems. To simplify patch management in a stacked-instance environment, you need a way to quickly move instances back to unpatched servers if the patched server has issues. Again, a lightweight container approach might allow that kind of mobility.
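Here's a minimal sketch of what a post-patch check might look like, assuming pyodbc; the instance names and the validated build number are hypothetical. It flags any stacked instance that isn't running the build the team validated, which today means another long migration and in a container world might mean a quick stop/start on the unpatched host.

```python
# Hypothetical sketch: after patching a stacked host, verify each instance's
# build and flag any instance that should be moved back to an unpatched server.
# Instance names and the expected build number are placeholders.
import pyodbc

EXPECTED_BUILD = "12.0.4100.1"   # build the team validated (hypothetical)
STACKED_INSTANCES = ["NEWSRV\\ERP", "NEWSRV\\CRM", "NEWSRV\\BI"]

def product_version(instance: str) -> str:
    conn = pyodbc.connect(
        "DRIVER={SQL Server};" f"SERVER={instance};Trusted_Connection=yes;", timeout=5)
    row = conn.execute(
        "SELECT CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(32))").fetchone()
    conn.close()
    return row[0]

for inst in STACKED_INSTANCES:
    build = product_version(inst)
    if build != EXPECTED_BUILD:
        # In a container-style world this would be a quick stop/start on the
        # unpatched host; today it means another long migration.
        print(f"{inst} is at {build}, expected {EXPECTED_BUILD}: move back to unpatched host")
```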
In spite of these technical and operational challenges, stacking SQL Server instances is something we should be driving toward, because the economic benefits are significant. With all the recent hype around Docker, I have to believe that container virtualization for big apps is a possible solution.
Until next time . . .