Unwanted T-SQL Expression Failures
Examples of failures, and a possible solution
May 16, 2013
When an expression in a query fails, it causes the entire query to fail. However, there are cases in which you don't want an expression failure to result in a query failure—instead, you'd prefer a different outcome. For example, in the event of an expression failure, you might be interested in the expression returning a NULL, the row in question being filtered out, or any other outcome except a query failure.
In this article, I provide a few examples of expression failures that cause query failures. I also describe some existing workarounds and their shortcomings. In addition, I present an elegant solution, introduced in SQL Server 2012, for a specific class of errors. Finally, I describe a proposal I submitted to Microsoft for a new T-SQL feature called TRY_EXPRESSION that would provide a more general way to prevent undesired expression failures.
The examples in this article are just a small subset of all the possible examples. In addition, I used very simplified examples to demonstrate my point. In reality, there are many other possible expression failures that can be prevented with solutions that require varying levels of complexity. The point of this article is to express the desire and need for a simple, generalized solution for all cases in which you need to prevent undesired expression failures.
Query Failure: Invalid Floating Point Operation
As my first example for undesired query failures, I'll discuss function calls with input values that are outside the domain of the supported values. Suppose that you need to invoke the LOG function in a query. The first argument to the LOG function must be a positive float value. An attempt to pass a value that is less than or equal to 0 results in an error. For example, try running the following code:
SELECT LOG(-1759);
You get the following error:
Msg 3623, Level 16, State 1, Line 1
An invalid floating point operation occurred.
Suppose that you need to invoke the function as part of a query—for example, in the WHERE filter. I'll use a table called T1 to demonstrate an example. Run the code in Listing 1 to create the table T1 and populate it.
SET NOCOUNT ON;
USE tempdb;
IF OBJECT_ID(N'dbo.T1', N'U') IS NOT NULL DROP TABLE dbo.T1;
GO

CREATE TABLE dbo.T1
(
  keycol INT NOT NULL,
  val INT NOT NULL,
  CONSTRAINT PK_T1 PRIMARY KEY(keycol)
);

INSERT INTO dbo.T1 VALUES
  (2, 43112609),
  (3, 0),
  (5, -1),
  (7, 2026),
  (9, 1759);
Consider the following query (call it Query 1):
SELECT keycol, val
FROM dbo.T1
WHERE val > 0 AND LOG(val) <= 10;
When I ran this query on my system, it completed successfully and returned the output in Figure 1.
keycol      val
----------- -----------
7           2026
9           1759
But was the query actually guaranteed to complete successfully? For that to be the case, the predicates in the WHERE clause must be evaluated in written order, and SQL Server must short-circuit the evaluation as soon as the first predicate evaluates to false or unknown.
Examine the execution plan for Query 1 shown in Figure 2—specifically, the Predicate in the Clustered Index Scan operator.
Figure 2: Plan for Query 1
You can see that the predicates I wrote in the WHERE clause were evaluated in the same order in the query plan. However, you should realize that the optimizer might decide to evaluate the predicates in a different order if its cost estimates indicate that doing so would yield a cheaper plan.
For example, consider the following query (call it Query 2):
SELECT keycol, val
FROM dbo.T1
WHERE val*1/1 > 0 AND LOG(val) <= 10;
This query is almost identical to Query 1, with a slight revision that doesn't really change the meaning of the code but does affect the optimizer's choices. Try running this query. When I ran it on my system, it failed with the following error:
Msg 3623, Level 16, State 1, Line 1
An invalid floating point operation occurred.
Examining the plan for Query 2 shown in Figure 3 reveals why the query failed.
Figure 3: Plan for Query 2
The optimizer decided to evaluate the second predicate that I wrote first, and the first predicate that I wrote second.
One attempt to prevent this failure is to write a query (call it Query 3) that uses a table expression, like so:
WITH T1_Filtered AS
(
  SELECT keycol, val
  FROM dbo.T1
  WHERE val*1/1 > 0
)
SELECT *
FROM T1_Filtered
WHERE LOG(val) <= 10;
However, SQL Server unnests the inner query for optimization purposes, effectively rewriting it internally as Query 2. You can verify this by examining the plan for Query 3 shown in Figure 4.
Figure 4: Plan for Query 3
As a result, the code still fails. Some of these transformations are in line with standard SQL, whereas others might not be; either way, the fact is that SQL Server performs such optimizations to improve the performance of our queries. If you need to prevent such failures, you have to come up with a solution yourself. One typical workaround is to use a CASE expression, like so:
SELECT keycol, val
FROM dbo.T1
WHERE CASE WHEN val*1/1 > 0 THEN LOG(val) ELSE NULL END <= 10;
According to Microsoft's CASE documentation, a CASE expression is guaranteed to evaluate its conditions sequentially and to short-circuit as soon as a true condition is found. Either way, this workaround adds complexity to the code. Also keep in mind that the documentation notes exceptions to the short-circuit guarantee—for example, when aggregates are involved, as demonstrated below.
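To illustrate that aggregate exception, here's a rough sketch modeled on the pattern in Microsoft's documentation and adapted to the T1 table. Because aggregate expressions are computed during the aggregation phase, before the CASE logic is applied, the query can fail with the invalid floating point operation error even though the first WHEN condition is satisfied:

-- Sketch of the documented aggregate exception: MAX(LOG(val)) is
-- evaluated over all rows (including 0 and -1) while the aggregates are
-- computed, before the CASE short-circuit logic ever runs, so this query
-- can fail despite the guarding first WHEN clause.
SELECT CASE
         WHEN MIN(val) <= 0 THEN 0
         WHEN MAX(LOG(val)) <= 10 THEN 1
         ELSE 2
       END AS flag
FROM dbo.T1;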
Query Failure: Divide by Zero
Another typical error in mathematical computations is a divide-by-zero error. For example, the following query (call it Query 4) fails for the same reasons that caused Query 2 in the previous section to fail:
SELECT keycol, val
FROM dbo.T1
WHERE val*1/1 <> 0 AND 43112609/val < 100;
When I ran this query on my system, I got the following error:
Msg 8134, Level 16, State 1, Line 1
Divide by zero error encountered.
The query plan for this query shows that the optimizer transformed the predicates in the WHERE clause to the following (simplified): (43112609)/val<(100) AND val*(1)/(1)<>(0). This explains why the query failed. As in the previous section, you can use a CASE expression as a workaround, like so:
SELECT keycol, val
FROM dbo.T1
WHERE CASE WHEN val*1/1 <> 0 THEN 43112609/val ELSE NULL END < 100;
As before, the use of the CASE expression adds complexity to the solution. You could simplify it a bit by using the NULLIF or IIF functions, as shown below. But as I mentioned, these cases are just simple examples; things can get much trickier.
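Here are rough sketches of those two simpler alternatives for this particular filter. Both assume that simply filtering out the rows where val is 0 is the desired behavior, and the IIF version relies on the same short-circuit guarantee as CASE, because IIF is translated to a CASE expression internally:

-- NULLIF version: dividing by NULL yields NULL, so rows with val = 0
-- evaluate to unknown in the filter and are simply discarded.
SELECT keycol, val
FROM dbo.T1
WHERE 43112609 / NULLIF(val, 0) < 100;

-- IIF version: IIF is shorthand for CASE, so the division is guarded.
SELECT keycol, val
FROM dbo.T1
WHERE IIF(val <> 0, 43112609 / val, NULL) < 100;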
Query Failure: Arithmetic Overflow
Another typical cause of query failures is an arithmetic overflow error. For example, consider the following simple code:
DECLARE
  @i AS NUMERIC(38,0) = 99999999999999999999999999999999999999,
  @j AS NUMERIC(38,0) = 99999999999999999999999999999999999999;

SELECT @i + @j;
This code generates the following error:
Msg 8115, Level 16, State 2, Line 5
Arithmetic overflow error converting expression to data type numeric.
Of course, a more realistic situation is that you'll face such errors in queries with expressions that manipulate columns from some source tables.
As a challenge, add logic to the code so that it won't fail. Have the code return the result of the expression if the operation is successful and NULL if it overflows.
I'm not saying that it's impossible to prevent errors. I'm just trying to demonstrate that there are many possible cases, as well as show that preventing errors in the different cases can involve varying levels of complexity.
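For what it's worth, here's one possible (and only sketched) way to meet that challenge: pre-check the magnitude of the sum using FLOAT arithmetic before performing the exact NUMERIC addition. The FLOAT estimate is approximate, so the threshold below is deliberately conservative near the top of the NUMERIC(38,0) range, and the sketch relies on the CASE short-circuit guarantee discussed earlier:

DECLARE
  @i AS NUMERIC(38,0) = 99999999999999999999999999999999999999,
  @j AS NUMERIC(38,0) = 99999999999999999999999999999999999999;

-- Estimate the sum with FLOAT arithmetic first; compute the exact NUMERIC
-- sum only when the estimate is safely within the NUMERIC(38,0) range.
-- The 9.9E37 threshold is intentionally conservative, so sums very close
-- to the maximum are reported as NULL even though they would fit.
SELECT CASE
         WHEN ABS(CAST(@i AS FLOAT) + CAST(@j AS FLOAT)) < 9.9E37
           THEN @i + @j
       END AS sumresult;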
Query Failure: Type Conversion
Prior to SQL Server 2012, it wasn't always easy to prevent type conversion errors in expressions. For example, suppose that you were given a character string input, and you were supposed to convert it to an INT data type if it was convertible, and return NULL if it wasn't—all without causing an error. Here's an example of one way to achieve this:
DECLARE @str AS VARCHAR(100) = ' -1759 ';

SELECT
  CASE
    WHEN ISNUMERIC(@str) = 1
         AND @str NOT LIKE '%[^0-9!-+ ]%' ESCAPE '!'
         AND CAST(@str AS NUMERIC(38, 0)) BETWEEN -2147483648 AND 2147483647
      THEN CAST(@str AS INT)
  END;
This approach is just one example of testing whether a value is convertible. Depending on the source and target types, the test can be simpler or more complex.
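For instance, a rough sketch of a pre-2012 check for a DATE target might rely on ISDATE. Keep in mind that ISDATE is sensitive to the session's language and date format settings and validates against the DATETIME range, so it's only an approximation for the DATE type:

DECLARE @dstr AS VARCHAR(100) = '20110212';

-- Return the value as a DATE if ISDATE considers it valid; otherwise NULL.
-- ISDATE depends on SET LANGUAGE/DATEFORMAT and on the DATETIME range.
SELECT CASE WHEN ISDATE(@dstr) = 1 THEN CAST(@dstr AS DATE) END;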
As you can imagine (or might have already experienced), you can run into query failures similar to the ones demonstrated earlier by Query 2 and Query 4, simply because of type conversion errors. To demonstrate this problem, first create and populate the Properties table by running the code in Listing 2.
SET NOCOUNT ON;
USE tempdb;
IF OBJECT_ID(N'dbo.Properties', N'U') IS NOT NULL DROP TABLE dbo.Properties;
GO

CREATE TABLE dbo.Properties
(
  name VARCHAR(128) NOT NULL PRIMARY KEY,
  type VARCHAR(128) NOT NULL,
  val VARCHAR(500) NOT NULL
);

INSERT INTO dbo.Properties(name, type, val) VALUES
  ('property1', 'SMALLINT', '1759'    ),
  ('property2', 'VARCHAR',  'abc'     ),
  ('property3', 'INT',      '43112609'),
  ('property4', 'DATE',     '20110212');
The table holds object property values in a column called val as character strings. The column type indicates the logical data type of the value. The following query tries to filter only integer properties with a value greater than 10:
SELECT name, type, val
FROM dbo.Properties
WHERE type IN ('TINYINT', 'SMALLINT', 'INT', 'BIGINT')
  AND CAST(val AS BIGINT) > 10;
This query fails with the following error:
Msg 8114, Level 16, State 5, Line 1
Error converting data type varchar to bigint.
By now, you know that such failures are possible because of the optimizer's ability to rearrange the order of evaluation of the predicates. The plan for this query shows the following Predicate (simplified) in the Clustered Index Scan operator:
CONVERT(bigint,val,0)>(10) AND (type='BIGINT' OR type='INT' OR type='SMALLINT' OR type='TINYINT')
Of course, you could use complex expressions, as I showed earlier, to check whether the value is convertible before trying to convert it. But Microsoft recognized the problem and provided an elegant solution in SQL Server 2012, specifically for type conversions.
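Before looking at that solution, here's a rough sketch of what such a pre-2012 guard might look like for this query. It assumes that the integer-typed properties hold only optionally signed digit strings that fit in a BIGINT:

-- Guard the conversion with a CASE expression that first verifies the
-- string looks like an optionally signed integer; rows that fail the
-- check yield NULL and are filtered out rather than raising an error.
SELECT name, type, val
FROM dbo.Properties
WHERE type IN ('TINYINT', 'SMALLINT', 'INT', 'BIGINT')
  AND CASE
        WHEN val NOT LIKE '%[^0-9+!-]%' ESCAPE '!' AND ISNUMERIC(val) = 1
          THEN CAST(val AS BIGINT)
      END > 10;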
Specific Solution to Conversion Errors in SQL Server 2012
SQL Server 2012 introduces functions called TRY_CAST, TRY_CONVERT, and TRY_PARSE for type conversions. These functions are very similar to the functions CAST, CONVERT, and PARSE, respectively—except if the source value isn't convertible to the target type, the functions return a NULL instead of generating an error. This greatly simplifies handling the aforementioned tasks. For example, to convert the variable @str to an INT data type and get a NULL back if the value isn't convertible, you simply use the following expression:
DECLARE @str AS VARCHAR(100) = ' -1759 ';

SELECT TRY_CAST(@str AS INT);
This solution is much simpler than the one used in the previous section. Similarly, the query from the previous section can be rewritten as follows:
SELECT name, type, val
FROM dbo.Properties
WHERE type IN ('TINYINT', 'SMALLINT', 'INT', 'BIGINT')
  AND TRY_CAST(val AS BIGINT) > 10;
However, type conversion errors are only a small subset of the possible errors in expressions. What if you wanted to deal with other errors in a similarly simple and elegant way?
Proposed General Solution
Taking into account the simplicity and elegance of the TRY_ versions of the type conversion functions, the idea could be generalized to any scalar expression—for example, a TRY_EXPRESSION function that accepts a scalar expression as input and returns the expression's result if it evaluates successfully, and NULL otherwise. If SQL Server supported such a function, you could handle the aforementioned tasks much more easily. Here are the queries you would use in the different cases (don't try running the following examples, because the TRY_EXPRESSION function doesn't exist).
Avoiding an invalid floating point operation error:
SELECT keycol, val
FROM dbo.T1
WHERE val*1/1 > 0 AND TRY_EXPRESSION(LOG(val)) <= 10;
Avoiding a divide by zero error:
SELECT keycol, val
FROM dbo.T1
WHERE val*1/1 <> 0 AND TRY_EXPRESSION(43112609/val) < 100;
Avoiding an overflow error:
DECLARE
  @i AS NUMERIC(38,0) = 99999999999999999999999999999999999999,
  @j AS NUMERIC(38,0) = 99999999999999999999999999999999999999;

SELECT TRY_EXPRESSION(@i + @j);
You could use the generalized function instead of the specific ones added for type conversion in SQL Server 2012.
Avoiding a conversion error, example 1:
DECLARE @str AS VARCHAR(100) = ' -1759 ';

SELECT TRY_EXPRESSION(CAST(@str AS INT));
Avoiding a conversion error, example 2:
SELECT name, type, val
FROM dbo.Properties
WHERE type IN ('TINYINT', 'SMALLINT', 'INT', 'BIGINT')
  AND TRY_EXPRESSION(CAST(val AS BIGINT)) > 10;
As I mentioned previously, I submitted a feature proposal to Microsoft for the TRY_EXPRESSION function.
Generally Speaking
Try not to let the specific examples that I use in this article distract you—instead, focus on the general problem. With so many possible errors in expressions, when you need to prevent them, you might sometimes find yourself using simple solutions and sometimes using more complex solutions. Having a simple general solution like the proposed TRY_EXPRESSION function could be handy. Such a solution would improve the reliability of the code, increase its readability and maintainability, and reduce development time.