Sidebar: Other Strategies for Report Performance
May 25, 2010
The time it takes for SQL Server Reporting Services (SSRS) to deliver a finished report can be broken down into several sequential operations, summarized and simplified here:
Launching SSRS: SSRS is an XML Web Service that, like any .NET assembly, must be compiled from intermediate language on first invocation and recompiled if it's paged out of memory. This means the first time any report is invoked or deployed to the server, you can expect a considerable delay while SSRS is launched and initialized. On my development system (an Intel Core 2 Quad with 8GB RAM), first invocation can take 30-45 seconds, pounding on all four processors.
Loading and interpreting the RDL: Once the report processor validates the user and identifies the report, the report must be fetched from the SSRS report catalog, decompressed and the RDL interpreted. Reports with embedded graphics and code are considerably larger than those with de-referenced graphics or DLL references and, of course, larger, more complex reports take longer to process and cache.
Capturing input parameters: Once the report processor begins interpreting the RDL, the specified database connection(s) are opened, but before any queries can be executed, any unset query parameters (like those used to feed the WHERE clause of a query) or report parameters (those that feed RDL expressions) must be captured from the user or the system. Parameter values can be entered or chosen by the user, or generated from system information such as the time, the date or the user’s credentials.
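As a minimal illustration, a dataset query whose WHERE clause is fed by an unset query parameter looks like this (the table and column names come from the AdventureWorks sample database):

```sql
-- A dataset query with an unset query parameter: SSRS prompts for
-- @Color (or applies a default) before the query can be executed.
-- Table and columns are from the AdventureWorks sample database.
SELECT ProductID, Name, ListPrice
FROM   Production.Product
WHERE  Color = @Color;
```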
Note that the Report Manager can also be used to specify default parameter values (and prompts) for specific reports. The report processor does not wait on the user if all parameters have default or computed values but when there are unset values, SSRS must create appropriate UI elements and capture values from the user before the report data region queries are executed. Yes, this might mean that specific queries must be executed beforehand to populate parameter pick lists or deal with cascading parameter schemes.
Generally, this step does not take much time, as the server simply waits (for a while) for the user to supply the values. However, if the user is required to re-enter values each time, their impatience grows, so it’s a good idea to figure out which defaults make the most sense. For best overall performance, it’s not a good idea to execute unfocused queries (those with extremely broad-scope results), but I’ll discuss this again later.
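A cascading parameter scheme like the one mentioned above might be fed by a pair of pick-list datasets along these lines (a hedged sketch; the tables follow the AdventureWorks sample database):

```sql
-- Dataset 1: populates the @Category parameter's available values.
SELECT ProductCategoryID, Name
FROM   Production.ProductCategory
ORDER BY Name;

-- Dataset 2: populates @Subcategory, filtered by the category the
-- user picked. SSRS re-runs this query whenever @Category changes,
-- which is the "executed beforehand" cost described above.
SELECT ProductSubcategoryID, Name
FROM   Production.ProductSubcategory
WHERE  ProductCategoryID = @Category
ORDER BY Name;
```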
Processing the query: Once the input parameters are captured, the data region query(ies) are executed using the supplied parameter values. This phase can also take a considerable amount of time depending on how many connections must be opened, how much query processing must be performed to generate the rowset (the efficiency of the queries), if the data cache can be leveraged, how many rows are returned and other factors. You have at least a modicum of control over all of these performance-gating constraints. This is a great place to focus your attention as you try to optimize report performance.
Generating the report: Along the way, the report processor must also evaluate each of the expressions embedded in the RDL. These expressions can be simple assignment statements, more complex conditional logic, invocations of RDL embedded code or execution of external CLR .NET DLLs. They can be data-driven or, as I’ll discuss later, parameter-driven. In any case, they’re often used to focus the report on a subset of the data and provide the user with a far more useful report. Again, the complexity of the expressions can impact processing time—especially if the code must be re-executed on each row in the report.
Rendering the report: One of the last phases handled by the report processor is to pass the formatted report to the report rendering extension, which transmogrifies the report to the specified format (such as MHTML, HTML, RTF, Excel or TIFF). Consider that the report rendering extension works somewhat like a printer’s rendering engine, so it can consume considerable CPU time and memory (that it robs from other processes running on the same system). If the rendering extension has to compete for resources with the SQL Server query engine or some other resource hog like PowerPoint, report performance can suffer. This means you should run the SSRS service on a separate, dedicated system.
Graphics rendering and processing can also consume additional resources, so strategies that move graphics (pictures) to subreports make a lot of sense. In the example reports I discuss here, I created a subreport that did all of the graphics heavy lifting so the base “query” report that located the product to examine could run faster.
Creating a snapshot of the generated report: The generated report (based on the query parameters provided and the rowsets it consumes) can be persisted in the form of a report “snapshot” to be recalled later from SSRS on demand. By using snapshots, you can dramatically reduce or eliminate the load on the DBMS engine caused by query processing, thus reducing the time SSRS has to wait for the results to be returned. This approach makes a lot of sense for relatively static data that only changes gradually over time, so it’s especially useful for reports based on rollup or summary data where information that’s a few minutes old won’t pose a problem.
Executing an Efficient Query
As you can tell from the list of operations above, there are several opportunities to help SSRS work more efficiently. One of the easiest is to use the SQL Server data and procedure cache, which holds recently used data pages and query plans in RAM. By using a stored procedure to return the rowsets used in a report, you can help the SQL Server query optimizer reuse a cached query plan, eliminating the need to recompile the procedure. Ad hoc SQL queries are also cached, but it takes a bit longer for SQL Server to determine that the cached plan is applicable. Remember that when executing a stored procedure, only the first rowset can be used by the report processor.
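Here is a minimal sketch of such a report stored procedure (the procedure name is hypothetical; the table and Class column come from the AdventureWorks sample database). Note that it returns exactly one rowset, since that is all the report processor will consume:

```sql
-- Hypothetical report procedure; the cached plan is reused on each call.
CREATE PROCEDURE dbo.rpt_GetProductsByClass
    @Class nchar(2)
AS
BEGIN
    SET NOCOUNT ON;  -- suppress rowcount messages

    -- One focused SELECT: SSRS uses only the first rowset returned.
    SELECT ProductID, Name, ListPrice, Class
    FROM   Production.Product
    WHERE  Class = @Class;
END;
```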
When I’m asked to help work out a performance problem, the development team often wants to focus on the code used to consume the results of the query. I usually counsel just the opposite: focus on the question, not on how the answer is consumed. By this I mean that many more performance problems have to do with how well the query exploits SQL Server and the database configuration to build a result set quickly. I expect you know that if the right indexes are in place and the query can use them efficiently, it will run far faster than it would with no indexes at all. The problem is, if the query returns too broad a subset of the table’s rows, the query optimizer can’t (and shouldn’t) use the indexes. This rule applies to report queries as well as application queries: smaller, more focused rowsets help the query optimizer use the indexes far more efficiently. Unfortunately, smaller rowsets are often not possible when building reports that must encompass a broad range of information, so OLTP-style queries aren’t going to help build efficient reports. When building reports against large datasets, it often makes sense to use SQL Server Analysis Services or another data engine that rolls up data into logical (more easily consumed) subsets, or to do brute-force summarizations before the SSRS report processor is handed the data.
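One such brute-force summarization might look like the following sketch, which rolls detail rows up by product ahead of time so the report consumes a small rollup instead of the raw OLTP table (Sales.SalesOrderDetail is an AdventureWorks table; the summary table name is hypothetical):

```sql
-- Persist a per-product rollup that the report queries instead of
-- the detail table. Re-run this on whatever schedule keeps the
-- summary "fresh enough" for the report's consumers.
SELECT ProductID,
       SUM(OrderQty)  AS TotalQty,
       SUM(LineTotal) AS TotalSales
INTO   dbo.ProductSalesSummary       -- hypothetical summary table
FROM   Sales.SalesOrderDetail
GROUP BY ProductID;
```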
The next question you should ask is how many rows you should return. The answer is that it depends. Ideally, the query should only return the rows and columns consumed by the report. However, in some scenarios (such as in the main article, \[LINK\]) you might be able to eliminate the need to re-execute the query if the rowset includes rows needed by a broader view of the data. This is especially true for queries that take more than a few seconds to complete. For example, if your query returns products from the AdventureWorks database in a narrowly focused class but the user wants to see other classes, you’ll have to re-execute the query to fetch the additional classes. On the other hand, if the query returns a broader scope of rows that includes more than one class of products, it might consume more time to generate and process the initial rowset. But if the report is persisted as a snapshot, the report processor can generate any number of alternate views on the fetched rowset without having to repeat the query.
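The trade-off can be sketched as two versions of the same query against the AdventureWorks Production.Product table (an illustrative sketch, not the article's actual report query):

```sql
-- Narrow query: returns one product class; must be re-executed
-- whenever the user wants to see a different class.
SELECT ProductID, Name, Class, ListPrice
FROM   Production.Product
WHERE  Class = @Class;

-- Broader query: fetches all classes once. Costlier up front, but if
-- the report is persisted as a snapshot, alternate per-class views can
-- be generated from the cached rowset without re-querying.
SELECT ProductID, Name, Class, ListPrice
FROM   Production.Product
WHERE  Class IS NOT NULL;
```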