SQL Server Standard Edition Licensing Limits

SQL Server 2008 R2 and SQL Server 2012 Standard Edition are both limited to a maximum of 4 sockets/16 cores and a measly 64GB of RAM.

Michael K. Campbell

January 28, 2013


Glenn Berry (blog | twitter) recently posted about SQL Server Standard 2012 Edition Licensing Limits. In his post he did two things really well:

  • Called out that SQL Server 2008 R2 and SQL Server 2012 Standard Edition are both limited to a maximum of 4 sockets/16 cores and a measly 64GB of RAM.

  • Did a fantastic job of arguing AGAINST these limitations by pointing out that they’re a fairly petty set of limitations imposed by Microsoft to try and ‘force’ the adoption of Enterprise Edition SKUs – when Enterprise Edition SKUs really have enough value in and of themselves to drive adoption when and where needed.

Overall, I think his post was excellent. It came off as fair, balanced, and informative – while still making it clear that the limitations in place are superficial and just don’t track with modern hardware capacities.

In fact, I can’t help but call out that my WORKSTATION/DESKTOP can actually, natively, address 64GB of RAM – the maximum amount of RAM supported by SQL Server 2008 R2 Standard Edition and SQL Server 2012 Standard Edition.

Put simply, I think the limitation on RAM is too restrictive, too draconian, and too greedy. (The 16-core restriction is a bit oppressive too, but not quite as much.) And, like Glenn, I think that Enterprise Edition features (like Transparent Data Encryption, CDC, Snapshots, Online Index Rebuilds, etc.) do a great enough job of ‘selling’ Enterprise Edition on their own – to the point where artificially limiting Standard Edition (which costs roughly $6K per processor for 2008 R2, or roughly $3K per 2-core pack for 2012) just feels abusive.

Mitigation Techniques

If you’ve somehow found yourself in a situation where you have Standard Edition licenses and more hardware than Microsoft thinks you can ‘handle’, there are a couple of ways around the problem. They’re not exactly pretty – but they can still represent viable solutions under the right circumstances.

Named Instances
A key point to recognize is that SQL Server 2008 R2 Standard Edition and SQL Server 2012 Standard Edition do NOT put any sort of limit on your underlying hardware or host. Instead, they simply limit the amount of memory (and number of processor cores) that SQL Server can use – PER INSTANCE.

Microsoft, in fact, calls this out explicitly in the Capacity Limits section of Books Online for SQL Server 2012:

These limits apply to a single instance of SQL Server. They represent the maximum compute capacity that a single instance will use. They do not constrain the server upon which the instance may be deployed. In fact deploying multiple instances of SQL Server on the same physical server is an efficient way to use the compute capacity of a physical server with more sockets and/or cores than the capacity limits below.
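
If you want to sanity-check what a given instance is actually reporting and configured to use, something like the following (which should work against both 2008 R2 and 2012) will do the trick:

    -- Which edition is this instance, and what version is it running?
    SELECT  SERVERPROPERTY('Edition')        AS edition,
            SERVERPROPERTY('ProductVersion') AS product_version;

    -- Logical CPUs visible to this instance:
    SELECT  cpu_count
    FROM    sys.dm_os_sys_info;

    -- Current 'max server memory' cap (in MB) for this instance:
    SELECT  name, value_in_use
    FROM    sys.configurations
    WHERE   name = 'max server memory (MB)';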

Consequently, an obvious way around this problem is to use multiple, named, instances on the same server. In doing so, you’ll potentially need to constrain access to processors (i.e., set affinity – typically to distinct NUMA nodes for best results) and watch out for situations where you might run into IO contention. Otherwise, though, the use of multiple instances CAN be one way to get the most ‘bang for your buck’ out of SQL Server Standard Edition licenses – provided you’re willing to ‘babysit’ and tune your workloads as necessary.
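
As a rough sketch of what that looks like – assuming, purely for illustration, a host with two NUMA nodes and 128GB of RAM split across two Standard Edition named instances (instance names and node numbers below are hypothetical) – you’d run something like the following against EACH instance:

    -- Run against the first named instance (e.g., MYSERVER\SQL01 – hypothetical name):
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Cap the instance at its 64GB Standard Edition ceiling (65536 MB):
    EXEC sp_configure 'max server memory (MB)', 65536;
    RECONFIGURE;

    -- Pin the instance to a single NUMA node (repeat on the second
    -- instance with NUMANODE = 1 so the two don't fight over the same node):
    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY NUMANODE = 0;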

Consolidation Scenarios
SQL Server consolidation within larger organizations with hundreds or thousands of databases is a definite scenario where the limitations of SQL Server Standard Edition are going to come into play. On the one hand, consolidation is all about building out big, beefy servers with lots of RAM and CPU and trying to cram as many ‘workloads’ onto them as possible. On the other hand, SQL Server Standard Edition seems 100% designed to prevent such situations, thanks to the artificial licensing limits put in place on RAM (and, to a lesser extent, CPU).

That said, in many cases where databases are being consolidated, SQL Server Enterprise Edition is REALLY going to be the best and most logical choice – especially for any key databases or workloads that need high availability and optimal performance. Of course, not all databases are created equal. And, in any consolidation effort, there are going to be gobs of ‘lower priority’ and ‘ancillary’ databases (such as older legacy databases, dev/testing databases, departmental databases, etc.) that may require decent up-time and performance on a day-to-day basis, but which can also afford nightly maintenance windows for things like offline index rebuilds, and which don’t require Transparent Data Encryption, Change Data Capture, snapshots, and a host of other Enterprise ‘goodies’.
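
For those lower-priority databases, the nightly maintenance window is exactly where Standard Edition’s lack of online operations gets absorbed. As a minimal sketch (the database and table names here are hypothetical), an offline rebuild scheduled overnight looks something like this:

    -- Nightly, offline index rebuild for a 'tier 3' database.
    -- ONLINE = ON would require Enterprise Edition, so it's explicitly off here.
    USE LegacyDeptDB;  -- hypothetical lower-priority database
    GO
    ALTER INDEX ALL ON dbo.Orders
        REBUILD WITH (ONLINE = OFF, SORT_IN_TEMPDB = ON);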

Consequently, for organizations undertaking serious consolidation efforts, it helps to think of database workloads in a ‘tiered’ approach. Mission-critical and important workloads can and should be consolidated on hosts running SQL Server Enterprise Edition (even when and where virtualization is being used to enable additional consistency). Virtualization can then be used as another mitigation technique for ‘lower priority’ or ‘tier 3’ databases that really don’t need Enterprise Edition features – but which can still take up significant hardware resources simply because of the sheer size and number of lower-priority databases involved.

Or since a picture is worth a thousand words, consider the following:

In the example above, trying to consolidate non-priority SQL Server databases onto a single host running SQL Server 2012 Enterprise Edition in a ‘pig pile’ fashion results in substantially higher licensing costs. Furthermore, when using virtualization as a partitioning technique, organizations can both work around the 64GB memory limitation of SQL Server Standard Edition (by building out VMs with an ideal configuration of around 72-78GB of RAM – enough for the guest OS and SQL Server overhead plus a full 64GB buffer pool) and gain a bit of additional load-balancing and availability flexibility by means of virtualization techniques like vSphere vMotion to shunt VMs around from one host to another.

And, in terms of the numbers specified above, I’m using ROUGH figures of approximately $3K per 2-core pack for Standard Edition vs $18K per 2-core pack for Enterprise Edition. And, again, when undertaking consolidation efforts, MANY databases are going to simply NEED Enterprise Edition features. But in situations where large numbers of databases can be consolidated without needing those features, the diagram above showcases some ways that you can overcome Standard Edition licensing limitations if and where possible. Especially since the numbers above are per HOST – meaning that if you end up with 2, 3, 4 or more ‘low tier’ database hosts, savings could be substantial for non-priority, non-mission-critical workloads.
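
To put rough numbers on a single ‘low tier’ host: at Standard Edition’s 16-core maximum, that’s 8 x $3K ≈ $24K in licensing per host, versus 8 x $18K ≈ $144K for Enterprise Edition on the same hardware – and that delta repeats for every additional low-tier host you stand up. (Those are the same ROUGH per-2-core figures used above, not actual quotes.)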
