A Brief History of Plan Caching in SQL Server

Changes in plan caching over successive versions of SQL Server affect its memory usage.

Kalen Delaney

November 19, 2007


In SQL Server’s early days, the ability to reuse compiled plans for stored procedures was one of the product’s most heavily marketed features. As I described back in September 1999 (see “Inside SQL Server: SQL Server 7.0 Plan Caching,” InstantDoc ID 5915), although reusing procedure plans was (and still is) a good thing, it wasn’t the only—or even the most important—reason for using stored procedures. Starting with SQL Server 7.0, the cost of optimization grew by orders of magnitude compared with earlier versions, because Microsoft rewrote the optimizer and added many new execution techniques that the optimizer needed to consider. The increased cost of optimization made it more important than ever for SQL Server to avoid recompiling queries whenever possible. As I further discussed in “Inside SQL Server: SQL Server 7.0 Plan Caching,” SQL Server 7.0 began caching and reusing other types of plans, in addition to stored procedure plans, to avoid recompilation. The other types of plans include plans for ad hoc queries and for autoparameterized or prepared queries.
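On current versions, you can see these different plan types side by side in the plan cache. Here is a quick sketch, assuming a SQL Server 2005 or later instance and VIEW SERVER STATE permission; the output will of course vary with your workload:

```sql
-- Count cached plans by type: Proc (stored procedures),
-- Adhoc (ad hoc queries), and Prepared (autoparameterized
-- or prepared queries), along with the memory they consume.
SELECT objtype,
       COUNT(*)           AS plan_count,
       SUM(size_in_bytes) AS total_bytes
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY total_bytes DESC;
```

On a busy OLTP system, it is common to see the Adhoc row dominate the plan count, which illustrates why plan caching is no longer only about procedures.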

Another change introduced in SQL Server 7.0 was how the size of the procedure cache is determined. (We now generally call the procedure cache the plan cache, because it caches more than just the plans for procedures.) Before SQL Server 7.0, a system administrator would configure a total amount of memory for SQL Server to use. Within this total, the administrator would then configure memory for specific purposes, such as user connections, open objects, locks, and devices. Whatever memory remained was used for cache and was split between data cache and procedure cache. The final configuration step was to set a percentage specifying how this remaining memory was split. The default of 30 meant that 30 percent of the remaining memory would be used for the procedure cache.
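On those earlier versions, this split was controlled through sp_configure. The following is a rough sketch of what such a configuration looked like on a 6.x-era server; the exact option names and units varied by release, and neither option exists in modern versions:

```sql
-- SQL Server 6.x-era static memory configuration (approximate).
-- 'memory' set the total server memory in 2KB units;
-- 'procedure cache' set the percentage of the remaining
-- cache memory reserved for procedure plans (default 30).
EXEC sp_configure 'memory', 8192            -- total memory, 2KB pages
EXEC sp_configure 'procedure cache', 20     -- 20 percent for procedure cache
RECONFIGURE
```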

Back before SQL Server 7.0, the default of 30 percent of the remaining memory for procedure plans often turned out to be far too large. Since only procedure plans were cached, and those plans were much less complex than in current versions, SQL Server rarely needed that much space for plans. Leaving most of the procedure-cache space unused meant there was less room for caching data pages, which could hurt query performance. A good administrator could set the procedure cache percentage lower, but the lowest possible value was one percent. On the largest systems SQL Server ran on at the time, with the most memory installed, even one percent was too much, and SQL Server still ended up wasting memory.

But as mentioned, SQL Server 7.0 changed things. Not only could SQL Server cache other kinds of plans besides procedure plans, but the methods of configuring and sizing memory changed completely. As of 7.0, the total amount of memory SQL Server uses can be determined dynamically based on demand, and within that total, SQL Server uses memory for purposes other than data buffers only when it actually needs the space. When SQL Server no longer needs space for other uses such as plans, user connections, and open objects, it returns that space to the buffer cache.
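Under this dynamic model you can watch plan-cache memory grow and shrink rather than being fixed at a percentage. Here is a sketch using the memory-clerk DMV available from SQL Server 2005 on; note that the column name differs across versions (older builds expose single_pages_kb and multi_pages_kb instead of pages_kb):

```sql
-- Memory currently taken by the main plan cache stores:
-- CACHESTORE_SQLCP holds ad hoc and prepared plans,
-- CACHESTORE_OBJCP holds stored procedure plans.
SELECT type, SUM(pages_kb) AS cache_kb
FROM sys.dm_os_memory_clerks
WHERE type IN ('CACHESTORE_SQLCP', 'CACHESTORE_OBJCP')
GROUP BY type;
```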
