Essential Techniques for SQL Server Memory Management

Michael K. Campbell

October 25, 2011

In my free webcast on Essential Techniques for SQL Server Memory Management, I made a number of references to additional resources that I would make available for anyone bumping into particular problems, or looking for additional help and guidance.

The webcast (sponsored by SQL Server Magazine and Red Gate Software) was delivered on October 25th and remains available on demand for roughly 90 days afterward. (And, as an FYI, it might take one to two days after the live delivery before it shows up as an ‘on demand’ event.)

Memory Basics

In my presentation, I covered a quick overview of SQL Server memory details – starting with the absurdly simple and moving up to how systems administrators (i.e., reluctant DBAs rather than full-time DBAs) can determine whether they're experiencing memory bottlenecks and where to start looking to clear them up.

In that regard, my presentation started with an overview of how to use Windows Task Manager as a quick way to gut-check whether or not memory on 32-bit systems has been configured correctly – because, as I point out in Enabling SQL Server AWE Memory, I've lost count of the number of times I've logged into a client's box only to find that SQL Server is stuck using 2GB of RAM on a host that has far more available – and NEEDS it.

Happily, as I pointed out in my presentation, this problem IS going away as more and more environments switch to 64-bit machines (which can natively address far more than 2GB of RAM without workarounds like AWE).
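
If you'd rather do that gut-check with a script instead of Task Manager, a minimal sketch like the following against sys.dm_os_process_memory (SQL Server 2008 and later; it requires VIEW SERVER STATE permission) shows how much physical memory the SQL Server process is actually using:

-- Quick, scriptable gut-check: how much physical memory is SQL Server actually using,
-- and is the process signaling low-memory conditions?
SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
       memory_utilization_percentage,
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;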

Otherwise, in terms of memory basics, one of the things I showcased in my presentation was how a query against 2.6GB of data took SQL Server roughly 52 seconds when it had to pull the data in from disk (15K SAS drives), but only 2-4 seconds when the data was already in memory – underscoring just how essential memory (RAM) is to SQL Server, since RAM is orders of magnitude faster than disk.
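
If you'd like to reproduce that kind of cold-versus-warm comparison yourself, a rough sketch follows – but run it against a TEST server only, since DBCC DROPCLEANBUFFERS flushes the buffer pool and requires sysadmin rights, and dbo.SomeLargeTable is just a made-up stand-in for any suitably large table of your own:

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

CHECKPOINT;               -- flush dirty pages to disk first
DBCC DROPCLEANBUFFERS;    -- then empty the buffer pool of clean pages

SELECT COUNT(*) FROM dbo.SomeLargeTable;  -- cold run: physical reads from disk

SELECT COUNT(*) FROM dbo.SomeLargeTable;  -- warm run: logical reads from RAM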

Consequently, one of the big things that I pointed out in the introduction to my presentation was that the age-old question that commonly surfaces on forums – “Why is SQL Server taking all of my RAM and how do I fix it?” – is really just a common sign that many sysadmins simply don't understand how or why SQL Server needs and benefits from more RAM.

For more information on SQL Server memory basics, I recommend the following resources:

- First, an overview of how to Investigate Windows Memory Usage with Windows 7 Resource Monitor. (Yes, that article IS for Windows 7, but the concepts and techniques apply equally well to Windows Server 2008 R2 in terms of how it uses memory.)

- Second, Brent Ozar has a great overview of what Sys Admins (i.e., reluctant DBAs or non-DBAs) need to know about how and why SQL Server uses RAM in A Sysadmin’s Guide to Microsoft SQL Server Memory.

- Taking a peek at Books Online will also be worthwhile – especially in terms of looking at ‘constraining’ how much RAM SQL Server can use IF you’ve got other apps or a greedy OS running on your SQL Server host (there's a minimal sp_configure sketch just after this list).

- Finally, I also barely touched upon the subject of nodes, clerks, and caches in my presentation. If you’d like a bit more in-depth information about how SQL Server uses memory internally (something you could spend weeks, months, or years aggressively learning about), then I recommend SQL Server Memory Explained by Craig Outcalt as a great intro that starts to scratch the surface of just how much complexity lies under the covers (which I never came close to addressing in my presentation).
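
As promised in the Books Online bullet above, here's a minimal sketch of constraining SQL Server's memory with sp_configure. The 8192 MB figure is purely an example value – the right number depends entirely on how much RAM the host has and what else is running on it:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 8192;  -- example value only
RECONFIGURE;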

Detecting Memory Pressure

In the second part of my presentation I took a look at how DBAs and sysadmins can use Performance Monitor and SQL Server DMVs as complementary ways to peek into how the underlying hardware is handling existing workloads and to see what is using memory within SQL Server itself.

To that end, we spent a few minutes on core performance counters to watch at the OS level – including the Paging File object's % Usage Peak (and % Usage) counters – which show how much of the page file a system is using at any given time. And while SOME servers or workloads CAN be tuned to actively use the page file without that representing memory pressure or problems, in my presentation I mentioned that if you're seeing peak usage above roughly 10-15%, you need to do some additional research – because, frankly, most OLTP SQL Server deployments should NOT be using the page file at all. (Use of the page file indicates, by definition, that you don't have enough RAM available and are therefore ‘dropping’ to disk to temporarily store pages of memory, only to retrieve them later through a hard page fault. And since disk is orders of magnitude slower than RAM, paging is an immediate and real performance problem.)
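
If you'd rather check this from inside SQL Server than from Performance Monitor, a query like the following against sys.dm_os_sys_memory (SQL Server 2008 and later) gives a rough, point-in-time look at physical memory and page file headroom as the OS reports it – a quick sketch, not a replacement for tracking the Paging File counters over time:

-- Point-in-time view of OS memory as SQL Server sees it.
SELECT total_physical_memory_kb / 1024     AS total_physical_mb,
       available_physical_memory_kb / 1024 AS available_physical_mb,
       total_page_file_kb / 1024           AS total_page_file_mb,
       available_page_file_kb / 1024       AS available_page_file_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;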

Similarly, I also covered how keeping tabs on the Memory object's Page Reads/sec counter will help you spot trends or spikes in hard page faults, which (if they're occurring regularly and in high numbers) can indicate memory problems or bottlenecks on the host. I also briefly mentioned that keeping tabs on the Physical Disk object's Avg. Disk sec/Read and Avg. Disk sec/Write counters is a good way to corroborate memory pressure – because memory pressure regularly manifests as an increase in disk activity, which in turn can add kernel overhead and processing requirements that also drive up CPU.
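
And if you want to line those disk counters up with what SQL Server itself is seeing, a rough sketch like the following against sys.dm_io_virtual_file_stats computes average read and write latency per database file. The figures are cumulative since the last restart, so treat them as trends rather than absolutes:

-- Average read/write latency per database file since the last restart.
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY avg_read_ms DESC;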

That said, in many ways, reading performance counters is a combination of science, skill, and dark art. But if you're interested in using them for performance tuning, they're a great set of tools to use from a bottom-up perspective (i.e., looking at underlying hardware performance and problems in order to ‘correlate up’ to what might be the root cause of problems). And, to that end, a good resource for getting started with performance counters (other than just using them and avoiding the temptation to jump to conclusions) is a great, free poster compiled by Quest Software and found here.

There’s also an article on MSSQLTips.com by Tibor Nagy, How to Identify Microsoft SQL Server Memory Bottlenecks, that provides additional information you'll want to check out if you think you may not have enough RAM.

DMVs and Dealing with Memory Pressure

Otherwise, as part of my presentation I also showcased how you can use DMVs not only to query where potential memory problems might be found, but also to easily track down which databases – and even which tables and indexes within a database – are taking up the most memory. To that end, I pinched some queries from an article that Aaron Bertrand wrote for MSSQLTips.com, where he outlined how to Determine SQL Server memory use by database and object.
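
As a rough sketch of the kind of query involved (Aaron's article goes into much more depth, including per-object breakdowns), something like the following against sys.dm_os_buffer_descriptors shows approximately how much of the buffer pool each database is consuming:

-- Approximate buffer pool usage per database (8KB pages, so COUNT(*) * 8 / 1024 = MB).
SELECT CASE WHEN database_id = 32767 THEN 'ResourceDb'
            ELSE DB_NAME(database_id) END AS database_name,
       COUNT(*) * 8 / 1024               AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;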

Similarly, I also used a query like the following:

SELECT p.size_in_bytes, usecounts, objtype, LEFT(sql.[text], 100) AS [text]
FROM sys.dm_exec_cached_plans p
OUTER APPLY sys.dm_exec_sql_text(p.plan_handle) sql
WHERE usecounts < 2
ORDER BY size_in_bytes DESC;

This shows how you can query your server for single-use plans that might be bloating the plan cache. And, as I mentioned in my presentation relative to tuning the plan cache, what you're looking at isn't REALLY a technique for removing single-use (or less commonly used) queries from the plan cache. Rather, the point of using DMVs to analyze the plan cache is to spot which queries are bloating it so that you can start looking into rewriting them to be better parameterized – either through SQL Server's native ability to use simple parameterization, or through more explicit approaches such as sp_executesql and stored procedures.
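
As a hypothetical example of the more explicit approach (dbo.Orders and its columns are made up purely for illustration), rewriting ad-hoc statements to use sp_executesql lets SQL Server reuse a single parameterized plan instead of caching a new single-use plan for every distinct literal value:

-- Hypothetical example: one cached, parameterized plan gets reused across values.
DECLARE @CustomerId int = 42;  -- example value only

EXEC sys.sp_executesql
    N'SELECT OrderId, OrderDate FROM dbo.Orders WHERE CustomerId = @CustomerId;',
    N'@CustomerId int',
    @CustomerId = @CustomerId;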

Additional resources that you might want to check in terms of dealing with the plan cache include the following:

- Plan Caching in SQL Server 2008. A Microsoft White Paper that provides great information and details about how SQL Server leverages memory for execution plans and caching – along with best practices for how to tune the cache.

- Execution Plan Caching and Reuse. A section of Books Online that reiterates that the use of fully qualified object names is essential for plan reuse.

- Managing Plan-Cache Size. An article by Kalen Delaney outlining advanced techniques for managing plan-cache size.

Otherwise, as I pointed out in my presentation, another obvious way to make better use of memory is proper indexing – simply because well-defined indexes and covering indexes can dramatically reduce the amount of data that SQL Server needs to pull into (and keep in) memory in order to satisfy common queries. So, while it might sound trite or simplistic, a proper indexing strategy is a great way to help avoid memory bottlenecks and problems for many SQL Server workloads.
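
As a purely hypothetical illustration (the table and column names are made up), a narrow covering index like the following lets SQL Server satisfy a common query from a few compact index pages instead of dragging wide table rows into the buffer pool:

-- A narrow covering index for a frequently run lookup.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
    ON dbo.Orders (CustomerId, OrderDate)
    INCLUDE (TotalDue);

-- A query like this can now be answered entirely from the covering index:
-- SELECT CustomerId, OrderDate, TotalDue FROM dbo.Orders WHERE CustomerId = 42;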
