Query-Processing-Order Quirks
Work around inconsistencies between ANSI SQL and SQL Server processing order
February 22, 2007
You write a query that looks perfectly valid: it contains no syntax errors, all referenced objects exist, and there's no apparent reason for a logical error. However, when you run the query against SQL Server, it fails with a runtime error. You might be looking at a situation in which the physical execution plan that SQL Server chooses differs from the logical interpretation of the query per ANSI-standard SQL. Certain differences between logical and physical query processing can lead to query failures. Let's walk through a couple of examples that demonstrate those differences and explore the reasoning behind them.
Logical vs. Physical Processing
According to ANSI SQL, query elements should logically be processed in the order that Figure 1 shows to yield a correct result set (the numerals indicate the ordering: FROM, then WHERE, GROUP BY, HAVING, SELECT, and finally ORDER BY). SQL Server (as well as other leading database platforms) can and often does take a different path in the physical processing of a query. SQL Server's optimizer creates multiple physical execution plans and chooses the plan that it estimates will run fastest; the optimization process is driven by cost estimations.
In the plan that SQL Server ultimately chooses to run, physical processing isn't necessarily in accord with logical query-processing phases. For example, if a query has a selective filter in the WHERE clause and there's an index on the filtered column, chances are good that the plan would first process the WHERE clause (index seek), then the FROM clause (joins and other table operators). Ultimately, the idea is that the query would still return the correct result per logical query processing. Or would it?
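Before getting to the problematic cases, here's a small hypothetical illustration of the benign kind of reordering (the tables, index, and filter value are mine, not the article's):

-- Hypothetical tables; the index on the filtered column is what makes
-- the optimizer likely to start from the WHERE clause
CREATE TABLE dbo.Customers
(
  custid   INT         NOT NULL PRIMARY KEY,
  custname VARCHAR(30) NOT NULL
);
CREATE TABLE dbo.Orders
(
  orderid INT NOT NULL PRIMARY KEY,
  custid  INT NOT NULL REFERENCES dbo.Customers(custid)
);
CREATE INDEX idx_custid ON dbo.Orders(custid);
GO
-- Logically the join (FROM) is processed before the filter (WHERE), but with
-- a selective filter and enough data, a typical plan seeks idx_custid for
-- custid = 42 first and joins only the qualifying Orders rows to Customers
SELECT O.orderid, C.custname
FROM dbo.Orders AS O
  JOIN dbo.Customers AS C
    ON O.custid = C.custid
WHERE O.custid = 42;

Either way the plan is shaped, this query returns the rows that logical query processing dictates; the next section looks at a case in which the reordering is not so harmless.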
Illogical Query Processing?
Apparently, in some cases a query that would otherwise return correct results according to logical query processing might actually fail for the sake of improved performance. You might or might not consider this a bug (I do). But the fact is that such behavior occurs in SQL Server and other major database platforms.
Here's an example that I tested in SQL Server 2005 Developer Edition, Service Pack 1 (SP1). Run the code in Listing 1 to create the tables T1 and T2 and populate them with sample data. Now consider the following query:
SELECT T1.keycol, T1.val
FROM dbo.T1
  JOIN dbo.T2
    ON T1.keycol = T2.keycol;
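Listing 1 isn't reproduced in this excerpt. Based on the queries and the error discussed below, it presumably looks something like the following sketch; the specific sample values are my own guess, and whether the error shown later reproduces exactly depends on the plan the optimizer picks for your data:

-- Plausible sketch of Listing 1: T1.val is a character column that holds
-- both digit-only strings and at least one nonnumeric string ('abc')
SET NOCOUNT ON;
IF OBJECT_ID('dbo.T1') IS NOT NULL DROP TABLE dbo.T1;
IF OBJECT_ID('dbo.T2') IS NOT NULL DROP TABLE dbo.T2;
GO
CREATE TABLE dbo.T1
(
  keycol INT         NOT NULL PRIMARY KEY,
  val    VARCHAR(10) NOT NULL
);
CREATE TABLE dbo.T2
(
  keycol INT NOT NULL PRIMARY KEY
);
GO
INSERT INTO dbo.T1(keycol, val) VALUES(1, '10');
INSERT INTO dbo.T1(keycol, val) VALUES(2, '20');
INSERT INTO dbo.T1(keycol, val) VALUES(3, 'abc');
-- T2 holds only the keys whose T1.val values are convertible to INT, so a
-- logical join would filter out the 'abc' row before any conversion happens
INSERT INTO dbo.T2(keycol) VALUES(1);
INSERT INTO dbo.T2(keycol) VALUES(2);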
Logically, the query first joins T1 and T2 based on the join condition: T1.keycol = T2.keycol. The query then selects the columns T1.keycol and T1.val from the result set returned by the join operation and returns the result set that Table 1 shows. Note that logically SQL Server is supposed to process the SELECT clause after the FROM clause.
Next, run the following query, which attempts to cast T1.val to an integer. (Some code in this article wraps to multiple lines because of space constraints.)
SELECT T1.keycol, CAST(T1.val AS INT) AS intval
FROM dbo.T1
  JOIN dbo.T2
    ON T1.keycol = T2.keycol;
When I ran this code in SQL Server 2005, I got the following error:

Msg 245, Level 16, State 1, Line 1
Conversion failed when converting the varchar value 'abc' to data type int.
Why did the query fail? Because SQL Server decided to scan all rows in T1, apply the computation (convert the VARCHAR column val to INT), and only then join to T2. Of course, a design that mixes convertible and nonconvertible strings in the same column is questionable, but I'm using this scenario just for demonstration purposes.
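If you want to confirm what the optimizer did, you can ask SQL Server for the estimated plan instead of executing the query; SET SHOWPLAN_TEXT is a standard way to do that (the article itself doesn't show this step):

-- Return the estimated plan without executing the query;
-- SET SHOWPLAN_TEXT must be the only statement in its batch
SET SHOWPLAN_TEXT ON;
GO
SELECT T1.keycol, CAST(T1.val AS INT) AS intval
FROM dbo.T1
  JOIN dbo.T2
    ON T1.keycol = T2.keycol;
GO
SET SHOWPLAN_TEXT OFF;
GO

If a Compute Scalar operator that performs the conversion shows up as an input to the join (indented beneath it in the showplan output), the conversion is applied to T1's rows before the join has a chance to filter them.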
In SQL Server 2000, the previous query runs successfully, but there are other cases in which a query might fail in SQL Server 2000 even though, logically, you'd expect it to succeed. Try to focus on the general idea and its implications rather than on this specific example.
Avoiding Query Failures
So, can you do anything to prevent query failures caused by the optimizer's behavior in such cases? Yes. One option is to redesign the tables when the situation calls for it and doing so is feasible; for example, you can separate different types of data (e.g., character strings and numbers) into different tables instead of mixing them in the same column. The other option is to use CASE expressions. The WHEN clauses of a CASE expression are guaranteed to be evaluated in order of appearance, and the CASE expression short-circuits (i.e., terminates processing when the answer is known) as soon as one of them yields true. Listing 2 shows a simplistic example of using a CASE expression to resolve the problem in our query. The query now runs successfully and returns the expected result set.
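Listing 2 isn't reproduced in this excerpt, but based on the predicate discussed in the next paragraph, it presumably looks something like this:

-- Plausible sketch of Listing 2: cast val only when it contains digits exclusively
SELECT T1.keycol,
  CASE
    WHEN T1.val NOT LIKE '%[^0-9]%' THEN CAST(T1.val AS INT)
  END AS intval
FROM dbo.T1
  JOIN dbo.T2
    ON T1.keycol = T2.keycol;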
Remember that when none of the WHEN clauses in the CASE expression yields true and there's no ELSE element, the CASE expression uses an implicit ELSE NULL. This particular CASE expression converts val to an integer only when val contains nothing but digits. The logical expression T1.val NOT LIKE '%[^0-9]%' is simplistic and doesn't cover all cases of valid or invalid integers, but again, this is just an example. You can enhance this CASE expression to make it more robust and more accurate in verifying that val is convertible to an integer.
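For example, one way to tighten the check (my own variation, not from the article) is to also reject empty strings and strings too long to fit in an INT:

SELECT T1.keycol,
  CASE
    WHEN T1.val NOT LIKE '%[^0-9]%'       -- digits only
         AND LEN(T1.val) BETWEEN 1 AND 9  -- nonempty and small enough for an INT
      THEN CAST(T1.val AS INT)
  END AS intval
FROM dbo.T1
  JOIN dbo.T2
    ON T1.keycol = T2.keycol;

Both conditions in the WHEN clause are safe to evaluate in any order, so it doesn't matter that SQL Server makes no promises about the order of conjuncts within a single expression.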
As I mentioned earlier, I consider the discrepancy between physical and logical query-processing order a bug, but in the same breath I should say that I consider it an acceptable bug. The alternative, compromising performance, is in this case less acceptable than having the query fail. At least when a failure occurs, you can take steps to avoid it by using one of the methods I've discussed.
Another situation is somewhat similar, but you shouldn't consider it a bug since it doesn't conflict with standard SQL. To demonstrate this condition, run the code in Listing 3 to create the table T1 and populate it with sample data. Look at the following query and before you run it, see if you can tell whether or not it should fail:
SELECT keycol, val
FROM dbo.T1
WHERE val * 0.5 >= 1
  AND 10 / val <= 5;
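Listing 3 isn't reproduced in this excerpt; a plausible sketch appears below. This time val is an integer column, and at least one row holds a zero (the specific sample values are my own, so the behavior you see can vary with the plan the optimizer chooses):

-- Plausible sketch of Listing 3: val is numeric, and one row holds a zero
IF OBJECT_ID('dbo.T1') IS NOT NULL DROP TABLE dbo.T1;
GO
CREATE TABLE dbo.T1
(
  keycol INT NOT NULL PRIMARY KEY,
  val    INT NOT NULL
);
GO
INSERT INTO dbo.T1(keycol, val) VALUES(1, 2);
INSERT INTO dbo.T1(keycol, val) VALUES(2, 10);
INSERT INTO dbo.T1(keycol, val) VALUES(3, 0);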
Now run the query. When I ran it in SQL Server 2005, I got the following divide-by-zero error:

Msg 8134, Level 16, State 1, Line 1
Divide by zero error encountered.
Many programming languages (such as C) physically evaluate logical expressions from left to right and short-circuit as soon as the result is known. For example, when val is zero, the expression val * 0.5 >= 1 is false, so there's no reason to evaluate the next expression. Had SQL worked the same way, this query wouldn't have failed. But SQL differs from other programming languages in many ways. Each phase in logical query processing is considered an all-at-once operation, so SQL Server logically processes all the logical expressions in the WHERE phase at the same time. In physical terms, SQL Server does support short-circuiting, but it's free to evaluate the expressions in a particular phase in any order it deems fit. The physical order of evaluation isn't necessarily left to right; rather, it's driven by cost estimations. This is why I don't think a failure in such a case can be considered a bug.
Can you do anything to resolve the issue? Yes, by using a technique similar to the one I showed in the previous scenario: a CASE expression like the one in Listing 4, in which you have full control over the order of evaluation. The query runs with no failure and returns the result set that Table 2 shows.
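Listing 4 isn't reproduced in this excerpt either; a CASE expression along these lines (the exact shape is my guess) gives you that control, because the zero check is guaranteed to be evaluated before the division:

-- Plausible sketch of Listing 4: the first WHEN clause handles zero,
-- so 10 / val is never evaluated for rows where val is 0
SELECT keycol, val
FROM dbo.T1
WHERE
  CASE
    WHEN val = 0 THEN 0
    WHEN val * 0.5 >= 1 AND 10 / val <= 5 THEN 1
    ELSE 0
  END = 1;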
Circumvent Conflicts
You should be aware that in SQL Server 2005, physical query processing isn't necessarily in agreement with logical query processing, for the sake of performance, even when the disparity causes a query failure. When you do get such a failure, first reexamine the design of your tables. If you can't change the design, you can use CASE expressions to control the order in which SQL Server evaluates the query's expressions. Also remember the all-at-once concept: logical expressions within a phase aren't necessarily evaluated physically from left to right, but when you need to control the order of evaluation, a CASE expression lets you do so.