Maximizing Your Application Performance with SQL Server 2016 In-Memory Technologies and HPE Superdome X
A look at how these in-memory technologies can address your performance issues and how you can leverage the server hardware platform to take full advantage of them.
April 8, 2016
Mission-critical application performance is always one of the top concerns for DBAs and database professionals. Meeting demanding SLAs grows steadily more difficult because virtualization hosts and cloud platforms typically share resources across multiple concurrent workloads. The days of dedicating a server to a single workload are gone; now almost all workloads run in VMs on a shared host.
With CPU speeds hitting a plateau, Microsoft began a move toward increasing database performance through memory optimization. This trend began with the VertiPaq technology that Microsoft first released with PowerPivot as part of the SQL Server 2008 R2 release. PowerPivot used in-memory compression techniques to allow Excel to perform BI analysis over multi-million-row data sets. These in-memory technologies later evolved into the Columnstore index technology that first shipped with SQL Server 2012. Unlike the original VertiPaq technology, which was hosted in Excel, the SQL Server 2012 columnar index is backed by a separate processing engine hosted in the SQL Server instance itself. With SQL Server 2014, Microsoft introduced the In-Memory OLTP engine, which enables you to move select tables into memory for high-performance, low-latency data access.
Let’s take a closer look at how these in-memory technologies can address your performance issues and how you can leverage the server hardware platform to take full advantage of them.
Using In-Memory OLTP to Boost the Speed of OLTP Applications
SQL Server’s In-Memory OLTP technology works by moving select tables into memory and compiling select stored procedures into native code. The In-Memory OLTP feature is completely compatible with standard x86 server hardware, and it uses a new optimistic multi-version concurrency control mechanism that has been optimized for memory-resident data. Microsoft has stated that, depending on the workload, you can expect anywhere from a 2X to a 30X improvement in transaction performance. The In-Memory OLTP engine works best where the server is experiencing waits due to locks or latches; its optimistic concurrency control eliminates those locking delays. If you’re an experienced DBA who has worked hard at tuning your queries to squeeze out every last ounce of performance, you’ll instantly recognize that even a 2X performance improvement is very substantial. The built-in AMR (Analysis, Migrate and Report) tool can help you choose which tables should be moved into memory and select the stored procedures that will be compiled into machine code for high-performance execution. For SQL Server 2016, the recommended maximum size for memory-optimized tables has been raised from 256 GB to 2 TB. In addition, the capabilities of memory-optimized tables have been extended to support foreign keys, check constraints, unique constraints, and outer joins.
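To make this concrete, here is a minimal T-SQL sketch of a memory-optimized table paired with a natively compiled stored procedure. It assumes a database that already has a MEMORY_OPTIMIZED_DATA filegroup; the table, column, and procedure names are purely illustrative:

```sql
-- Memory-optimized table; requires a database with a MEMORY_OPTIMIZED_DATA
-- filegroup. SCHEMA_AND_DATA keeps the data durable across restarts.
CREATE TABLE dbo.SessionState (
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId    INT NOT NULL,
    Payload   NVARCHAR(1000) NULL,
    LastTouch DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled stored procedure: compiled to machine code when it is
-- created, so execution bypasses the interpreted query engine.
CREATE PROCEDURE dbo.usp_TouchSession
    @SessionId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
                   LANGUAGE = N'us_english')
    UPDATE dbo.SessionState
    SET LastTouch = SYSUTCDATETIME()
    WHERE SessionId = @SessionId;
END;
GO
```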
Using the Columnstore Index to Accelerate Data Warehousing Queries
Designed for data warehousing implementations, the Columnstore index can, according to Microsoft, provide anywhere from a 10X to a 100X performance improvement. The Columnstore index is different from a traditional SQL Server index because it organizes the index data by columns instead of rows. The Columnstore index compresses column data, moves it into memory, and then queries that compressed, memory-resident data. The columnar organization of the index makes it very efficient at handling ad-hoc queries. SQL Server’s clustered Columnstore index must include all of the columns in the table, and it cannot be combined with other indexes. In the original implementation of Columnstore indexes, the underlying table had to be read-only; SQL Server 2014 eliminated that restriction. SQL Server 2016 further extends the Columnstore index capabilities by providing the ability to place a Columnstore index on a memory-optimized table.
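As a concrete illustration, converting an existing rowstore table takes a single statement; the fact table and column names below are assumptions for the sketch:

```sql
-- Convert an existing rowstore fact table to a clustered Columnstore
-- index (updatable as of SQL Server 2014).
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales;
GO

-- The kind of ad-hoc aggregation that benefits from compressed,
-- column-oriented, batch-mode processing:
SELECT ProductKey,
       SUM(SalesAmount) AS TotalSales
FROM dbo.FactSales
GROUP BY ProductKey;
```

In SQL Server 2016 you can go a step further and declare the Columnstore index inline when you create a memory-optimized table, using an INDEX ... CLUSTERED COLUMNSTORE clause in the CREATE TABLE statement.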
Leveraging Maximum Server Memory Capacities
When you’re running in a virtualized environment, the capabilities of today’s enterprise servers can make a tremendous difference in your ability to leverage these in-memory technologies. While both In-Memory OLTP and the Columnstore index can provide significant performance improvements, both require additional memory beyond your standard system requirements.
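One way to gauge that extra footprint for In-Memory OLTP, assuming SQL Server 2014 or later, is to query the in-memory DMVs in the database that holds your memory-optimized tables:

```sql
-- Report the memory consumed by each memory-optimized table in the
-- current database (table and index allocations, in MB).
SELECT OBJECT_NAME(tms.object_id)                   AS TableName,
       tms.memory_allocated_for_table_kb / 1024.0   AS TableAllocatedMB,
       tms.memory_allocated_for_indexes_kb / 1024.0 AS IndexAllocatedMB
FROM sys.dm_db_xtp_table_memory_stats AS tms
ORDER BY tms.memory_allocated_for_table_kb DESC;
```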
Plus, today’s ERP applications are designed to take full advantage of scale-up environments and deliver their best performance on highly scalable platforms. To make the most of them, especially in a virtualized environment where your SQL Server relational and data warehousing instances may be running on the same host, you need high memory and CPU capacities like those provided by the HPE Superdome X. Based on an advanced x86 architecture, the Superdome X supports up to 288 cores and up to 24 TB of memory if you take advantage of partitioning, providing the headroom you need for memory-resident technologies. The Windows Server 2016 operating system can use up to 12 TB of RAM; Windows Server 2012 R2 was limited to 4 TB. Plus, the HPE hard partitioning (nPars) capability enables you to create electrically isolated hard partitions, each with its own CPU, memory, and I/O resources. Each nPar can run its own operating system and applications in isolation from the other partitions. This gives the HPE Superdome X the capability to support extreme memory capacities as well as the ability to simultaneously run OLTP and BI workloads out of the same box, handling the workloads in each partition completely independently.
Sponsored by HPE and Microsoft