SQL Server 2016 Virtualization Best Practices
June 20, 2016
The HPE Superdome X provides the extreme scalability and reliability you need in your mission-critical virtualization hosts. With 288 cores and up to 24 TB of RAM, the HPE Superdome X can run many demanding, high-performance workloads simultaneously. While the virtualization hardware platform is definitely the foundation for successfully virtualizing SQL Server enterprise databases, there are several other configuration factors you need to pay attention to. Here are the top SQL Server virtualization best practices.
Don’t overcommit the server – It’s important not to overcommit the CPU and networking resources on the server. Nothing stops you from creating and running VMs that use more virtual CPUs (vCPUs) than you have physical cores. However, if you do, those vCPUs may well be switched out from time to time, and resource-intensive production workloads like SQL Server need that processing power to be available. The same is true for networking resources. It’s easy to overcommit the physical NICs on a virtualization host when you’re running scores of VMs on that host. Track CPU and network utilization and configure the host appropriately; the sketch below shows one way to start.
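As a starting point, you can compare the total vCPUs assigned to your VMs against the host’s logical processors, and sample the hypervisor’s aggregate CPU load and NIC throughput. A minimal PowerShell sketch, run on the Hyper-V host (the sampling interval and sample counts are illustrative assumptions, not fixed rules):

```powershell
# Sum the vCPUs assigned to all VMs on this Hyper-V host.
$vCpus = (Get-VM | Measure-Object -Property ProcessorCount -Sum).Sum

# Logical processors available on the host.
$logicalProcs = (Get-VMHost).LogicalProcessorCount

"vCPU-to-logical-processor ratio: {0:N2}" -f ($vCpus / $logicalProcs)

# Sample the hypervisor's aggregate CPU load; sustained values near 100%
# suggest the host is overcommitted for its current workloads.
$cpuCounter = '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time'
(Get-Counter -Counter $cpuCounter -SampleInterval 5 -MaxSamples 3).CounterSamples |
    Select-Object -Property TimeStamp, CookedValue

# Physical NICs are just as easy to overcommit as CPU, so sample throughput too.
(Get-Counter -Counter '\Network Interface(*)\Bytes Total/sec').CounterSamples |
    Select-Object -Property InstanceName, CookedValue
```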
Be aware of your NUMA configuration – This can be important for VMs that have a lot of vCPUs. The NUMA system architecture is used in multiprocessor systems where processors and memory are grouped into nodes. When a VM is started, Hyper-V will try to allocate all the memory for that VM from a single physical NUMA node, which is what you want. If the VM’s vCPU count exceeds the number of cores in a NUMA node, or its memory allocation exceeds the memory in that node, Hyper-V is forced to span NUMA nodes, which reduces performance. The sketch below shows how to check the host topology.
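To size VMs so they fit within a single NUMA node, you can inspect the host topology and a VM’s virtual NUMA limits with PowerShell. A sketch, assuming it runs on the Hyper-V host; the VM name 'SQL16-VM01' is a hypothetical placeholder:

```powershell
# List the host's NUMA nodes with their processors and memory, so a VM's
# vCPU count and RAM can be sized to fit within a single node.
Get-VMHostNumaNode

# Check the vCPU count and the per-node limits of the VM's virtual NUMA topology.
Get-VMProcessor -VMName 'SQL16-VM01' |
    Select-Object -Property Count, MaximumCountPerNumaNode, MaximumCountPerNumaSocket

# Optionally prevent the host from spanning NUMA nodes at all; a VM that does
# not fit in one node will then fail to start rather than run degraded.
Set-VMHost -NumaSpanningEnabled $false
```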
Use Dynamic Memory for SQL Server – When a SQL Server workload causes the sqlservr.exe process to grow, Dynamic Memory can add memory to the VM. The SQL Server database engine detects the added memory and can grow its buffers to meet the increased workload demand. To take advantage of Dynamic Memory you need to use the Enterprise Edition of SQL Server 2016; the Standard edition doesn’t support hot-add for CPU or RAM. In addition, the VM guest OS needs to support hot-add RAM, which is supported by Windows Server 2012 and by the Enterprise and Datacenter editions of Windows Server 2008 R2 SP1 and Windows Server 2003 R2 SP2.
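Enabling Dynamic Memory is a per-VM setting. A minimal PowerShell sketch (the VM name and memory sizes are illustrative assumptions, and the VM must be shut down before Dynamic Memory can be enabled):

```powershell
# Enable Dynamic Memory on a SQL Server VM with illustrative floor and ceiling
# values; Hyper-V will then grow and shrink the VM's RAM between the minimum
# and maximum based on demand and the buffer percentage.
Set-VMMemory -VMName 'SQL16-VM01' `
    -DynamicMemoryEnabled $true `
    -StartupBytes 16GB `
    -MinimumBytes 8GB `
    -MaximumBytes 64GB `
    -Buffer 20
```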
Use fixed virtual hard disks – The fixed virtual hard disk (VHD/VHDX) is the best choice for virtualized SQL Server systems that run production workloads. Dynamic virtual hard disks are a good choice for labs, test environments, or noncritical production workloads because they use less disk space, but they do not provide the same level of performance. Workloads running on dynamic virtual hard disks can experience occasional pauses when the disk needs to be extended to accommodate storage growth. In the past, pass-through disks were preferred for workloads with the highest input/output requirements, but they lack the flexibility of virtual hard disks, and fixed virtual hard disks now offer essentially the same performance.
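Creating a fixed disk is a one-line operation. A sketch (the path and size are illustrative assumptions):

```powershell
# Create a fixed-size VHDX; all space is allocated up front, which avoids
# the expansion pauses a dynamic disk can incur under write-heavy load.
New-VHD -Path 'D:\VHDs\SQL16-Data.vhdx' -SizeBytes 200GB -Fixed
```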
Use separate VHDs/VHDXs for your OS, data and log files – The default configuration uses a single virtual hard disk for storage. Almost any production SQL Server workload would immediately run into disk contention problems with this configuration. For production SQL Server instances, you need to separate your operating system, data files and log files onto different VHDs/VHDXs, and those VHDs/VHDXs should be on different drives to avoid input/output contention. Typically, log files require drives with fast write capabilities, while the OS and data files require more read performance. The sketch below shows one way to set this up.
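Putting it together, you can create separate fixed disks for data and log files and attach them to the VM. A sketch (the VM name, paths, and sizes are illustrative assumptions; the paths should map to different underlying drives):

```powershell
# Create separate fixed VHDXs for data and log files on different drives.
New-VHD -Path 'E:\VHDs\SQL16-Data.vhdx' -SizeBytes 500GB -Fixed
New-VHD -Path 'F:\VHDs\SQL16-Log.vhdx' -SizeBytes 100GB -Fixed

# Attach both disks to the SQL Server VM on its virtual SCSI controller.
Add-VMHardDiskDrive -VMName 'SQL16-VM01' -ControllerType SCSI -Path 'E:\VHDs\SQL16-Data.vhdx'
Add-VMHardDiskDrive -VMName 'SQL16-VM01' -ControllerType SCSI -Path 'F:\VHDs\SQL16-Log.vhdx'
```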
HPE and Microsoft are the underwriters of this article.