What You Need to Know about Distinct Windowed Calculations
June 6, 2017
T-SQL supports distinct grouped aggregate calculations like COUNT(DISTINCT …), but as of SQL Server 2017 CTP2, it doesn’t yet support distinct windowed aggregate calculations. There’s an open connect item requesting this feature. In this article, I explain what distinct aggregate calculations are, provide an example of a task that requires such calculations, show the desirable yet missing syntax that would solve the task, and present four supported workarounds. I also cover a performance test comparing the four solutions, both with plans that use only row mode processing and with plans that use batch mode processing. In my examples, I’ll use a sample database called TSQLV4. You can download the script to install the sample database here and find its ER diagram here.
The challenge and the missing syntax
Just like with grouped distinct aggregate calculations, windowed distinct aggregate calculations consider only distinct occurrences of the input expression, but unlike grouped calculations, windowed calculations do not hide the detail. For example, suppose you had a table T1 with the following data:
id          grp  val
----------- ---- ----
1           GRP1 A
2           GRP1 A
3           GRP1 B
4           GRP1 C
5           GRP1 C
6           GRP2 A
7           GRP2 A
8           GRP2 D
9           GRP2 D
10          GRP2 D
Suppose that you wanted to compute for each row both the distinct count of values (val column) in the current group (grp column), as well as the grand distinct count of values. Here’s the desired result for the given sample data:
id          grp  val  countgrp  countall
----------- ---- ---- --------- ---------
1           GRP1 A    3         4
2           GRP1 A    3         4
3           GRP1 B    3         4
4           GRP1 C    3         4
5           GRP1 C    3         4
6           GRP2 A    2         4
7           GRP2 A    2         4
8           GRP2 D    2         4
9           GRP2 D    2         4
10          GRP2 D    2         4
Had distinct windowed aggregates been supported in T-SQL, you would have achieved this task like so:
SELECT id, grp, val,
  COUNT(DISTINCT val) OVER(PARTITION BY grp) AS countgrp,
  COUNT(DISTINCT val) OVER() AS countall
FROM dbo.T1;
If you’re wondering what could be practical use cases for such calculations, consider a challenge involving the Sales.Orders and Sales.OrderDetails tables in the sample database TSQLV4. The former has order header data and the latter has order line data. The columns that are relevant to our challenge are the orderid, orderdate and custid columns from Sales.Orders, and the orderid, productid, qty and unitprice columns from Sales.OrderDetails. Suppose that customers get rebates based on their order activity in the previous month, and your task is to write a query that computes those rebates. The rules for the rebates are:
1. A variety discount percentage is computed as: 35% * distinct count of products ordered by customer during month in question / distinct count of products ordered during month in question by all customers. For example, suppose that during January 2016, customer 1 ordered 5 distinct products, customer 2 ordered 10 distinct products, and all customers ordered 50 distinct products. The variety discount for customer 1 in January 2016 is 35% * 5 / 50 = 3.5%, and for customer 2 it is 35% * 10 / 50 = 7%.
2. A quantity discount percentage is computed per order line as: if quantity is at least 10 then 6% otherwise none. For example, in an order line where the ordered product quantity is 15 the quantity discount is 6%, but in an order line where the quantity is 9 there’s no applicable quantity discount.
3. The final discount percentage per order line is computed as follows: if the variety discount plus the quantity discount doesn’t exceed 10%, apply the sum of both discounts; otherwise, apply 10%. Recall from the example above that in January 2016 customer 1 has a variety discount of 3.5% and customer 2 has a variety discount of 7%. In an order line for customer 1 where a quantity discount of 6% applies, the sum of the variety discount of 3.5% and the quantity discount of 6% is 9.5%. Since this discount doesn’t exceed the maximum of 10% per order line, the 9.5% discount is considered the final one for that order line. But in an order line for customer 2 where a quantity discount applies, the sum of the variety discount of 7% and the quantity discount of 6% exceeds the maximum of 10%, therefore the final discount for that order line is 10%.
4. Apply the final discount percentage of the order line to the order line value (pct * qty * unitprice). The sum of all order line discounts for the customer should be returned to the customer as that month’s rebate.
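To make the arithmetic of rules 2 through 4 concrete, here’s a minimal standalone sketch for one hypothetical order line; the quantity of 15 and the unit price of 20.00 are made-up values, and the 3.5% variety discount is borrowed from the customer 1 example above:

SELECT
  -- variety discount 3.5% + quantity discount 6% = 9.5%, below the 10% cap
  CASE WHEN 0.035 + 0.06 > 0.10 THEN 0.10 ELSE 0.035 + 0.06 END AS linediscount,
  -- apply the final discount to the line value: 0.095 * 15 * 20.00 = 28.50
  CAST((0.035 + 0.06) * 15 * 20.00 AS NUMERIC(12, 2)) AS linerebate;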
For example, your solution should produce the following result for January 2016:
custid      rebate
----------- --------
1           58.81
3           46.82
5           379.66
6           40.92
7           51.78
9           59.80
12          14.52
16          48.62
17          73.12
20          798.98
24          9.69
25          109.89
26          27.28
32          630.71
38          144.43
39          1095.54
40          36.99
41          39.52
42          3.07
44          37.85
46          254.43
47          361.28
49          59.09
55          343.72
56          52.53
59          48.12
63          354.53
64          10.19
65          323.61
66          141.50
67          290.22
70          58.72
71          609.83
75          29.02
76          559.52
80          66.03
81          376.02
82          2.60
84          283.98
88          9.17
89          298.87
Again, had distinct windowed aggregate calculations been supported, you would have used the following solution to achieve this task for January 2016:
WITH C AS
(
  SELECT O.custid, O.orderid, OD.productid, OD.qty, OD.unitprice,
    -- varietydiscount = 35% * #distinct products per customer
    --                   / #distinct products
    0.35 * COUNT(DISTINCT OD.productid) OVER(PARTITION BY O.custid)
         / COUNT(DISTINCT OD.productid) OVER() AS varietydiscount,
    -- qtydiscount = if qty >= 10 then 6% else none
    CASE WHEN qty >= 10 THEN 0.06 ELSE 0 END AS qtydiscount
  FROM Sales.Orders AS O
    INNER JOIN Sales.OrderDetails AS OD
      ON O.orderid = OD.orderid
  WHERE O.orderdate >= '20160101'
    AND O.orderdate < '20160201'
)
SELECT custid,
  CAST(SUM( qty * unitprice * linediscount ) AS NUMERIC(12, 2)) AS rebate
FROM C
  -- line discount = min(0.10, varietydiscount + qtydiscount)
  CROSS APPLY ( VALUES( CASE WHEN varietydiscount + qtydiscount > 0.10 THEN 0.10
                             ELSE varietydiscount + qtydiscount END ) )
    AS A(linediscount)
GROUP BY custid;
The following sections cover four supported workarounds.
Solution 1: Using grouped queries
The first solution I’ll present (call it Solution 1) is probably what most people would consider the most obvious one. Here’s the complete solution’s code:
WITH Base AS
(
  SELECT O.custid, O.orderid, OD.productid, OD.qty, OD.unitprice
  FROM Sales.Orders AS O
    INNER JOIN Sales.OrderDetails AS OD
      ON O.orderid = OD.orderid
  WHERE O.orderdate >= '20160101'
    AND O.orderdate < '20160201'
),
CustCounts AS
(
  SELECT custid, COUNT(DISTINCT productid) AS custproductcnt
  FROM Base
  GROUP BY custid
),
AllCount AS
(
  SELECT COUNT(DISTINCT productid) AS allproductcnt
  FROM Base
)
SELECT B.custid,
  CAST(SUM( B.qty * B.unitprice * A2.linediscount ) AS NUMERIC(12, 2)) AS rebate
FROM Base AS B
  INNER JOIN CustCounts AS CC
    ON B.custid = CC.custid
  CROSS JOIN AllCount AS AC
  CROSS APPLY ( VALUES( 0.35 * CC.custproductcnt / AC.allproductcnt,
                        CASE WHEN B.qty >= 10 THEN 0.06 ELSE 0 END ) )
    AS A1(varietydiscount, qtydiscount)
  CROSS APPLY ( VALUES( CASE WHEN A1.varietydiscount + A1.qtydiscount > 0.10 THEN 0.10
                             ELSE A1.varietydiscount + A1.qtydiscount END ) )
    AS A2(linediscount)
GROUP BY B.custid;
The CTE Base is based on a query that handles all applicable joins, filters, and any other preliminary steps that you may need, so that you can apply any later calculations to the same base result. In our case this means joining Orders and OrderDetails and filtering only orders from January 2016.
The CTE CustCounts computes the grouped distinct product count per customer from Base, and the CTE AllCount computes the grand distinct product count from Base.
The outer query joins Base, CustCounts and AllCount to match to each order line row the respective customer distinct product count and grand distinct product count. The outer query then applies the discount logic based on the aforementioned rules, and finally groups the data by custid and aggregates the line discounts to compute the total customer rebate.
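To see the core idea in isolation, here’s a minimal sketch that applies the same grouped-queries approach to the table T1 shown earlier (its creation script appears later, in the Solution 4 section); the CTE names GrpCounts and AllCnt and the output aliases are mine, chosen just for the illustration:

WITH GrpCounts AS
(
  -- distinct count of values per group
  SELECT grp, COUNT(DISTINCT val) AS countgrp
  FROM dbo.T1
  GROUP BY grp
),
AllCnt AS
(
  -- grand distinct count of values
  SELECT COUNT(DISTINCT val) AS countall
  FROM dbo.T1
)
SELECT T.id, T.grp, T.val, GC.countgrp, AC.countall
FROM dbo.T1 AS T
  INNER JOIN GrpCounts AS GC
    ON T.grp = GC.grp
  CROSS JOIN AllCnt AS AC;

This returns the desired result shown earlier for T1: countgrp 3 for GRP1, 2 for GRP2, and countall 4 in every row.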
The execution plan for Solution 1 is shown in Figure 1. This plan was produced on my machine against bigger tables called OrdersBig (~1M rows) and OrderDetailsBig (~3M rows). Sample data for the bigger tables and revised solutions’ code is provided later in this article in the performance test section.
Figure 1 - Plan for Solution 1 under row mode
The top two branches of the plan compute the grouped distinct aggregates. Each obtains the data from the two input tables, applies the join, and then the aggregate. The bottom branch obtains the detailed data from the two input tables and applies the final grouped aggregate. The heavy lifting of both joining and aggregating is done by hash operators, and there’s no spooling taking place in this plan. This solution completed in 5 seconds against the large tables on my machine, and that’s when using only row mode operators.
This solution naturally works for other distinct aggregates as well, such as SUM(DISTINCT …).
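For instance, here’s a hedged sketch (not part of the original challenge) in which the grouped CTE computes the sum of distinct quantities per customer from the same Base query, and the join carries that value to every detail row; the names CustSums and custdistinctqtysum are mine, and the measure itself is made up purely to show the pattern:

WITH Base AS
(
  SELECT O.custid, O.orderid, OD.productid, OD.qty, OD.unitprice
  FROM Sales.Orders AS O
    INNER JOIN Sales.OrderDetails AS OD
      ON O.orderid = OD.orderid
  WHERE O.orderdate >= '20160101'
    AND O.orderdate < '20160201'
),
CustSums AS
(
  -- grouped distinct aggregate; any distinct aggregate works here
  SELECT custid, SUM(DISTINCT qty) AS custdistinctqtysum
  FROM Base
  GROUP BY custid
)
SELECT B.custid, B.orderid, B.productid, B.qty, B.unitprice,
  CS.custdistinctqtysum
FROM Base AS B
  INNER JOIN CustSums AS CS
    ON B.custid = CS.custid;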
Solution 2: Using ROW_NUMBER
The second solution I’ll present (call it Solution 2) applies a fairly simple idea. It numbers rows within each distinct group, nullifies all but one occurrence of each distinct value, and then applies the aggregate to the non-NULL values as a regular windowed aggregate. Here’s the complete solution’s code:
WITH C1 AS
(
  SELECT O.custid, O.orderid, OD.productid, OD.qty, OD.unitprice,
    ROW_NUMBER() OVER(PARTITION BY O.custid, OD.productid
                      ORDER BY (SELECT NULL)) AS rownumcust,
    ROW_NUMBER() OVER(PARTITION BY OD.productid
                      ORDER BY (SELECT NULL)) AS rownumall
  FROM Sales.Orders AS O
    INNER JOIN Sales.OrderDetails AS OD
      ON O.orderid = OD.orderid
  WHERE O.orderdate >= '20160101'
    AND O.orderdate < '20160201'
),
C2 AS
(
  SELECT custid, orderid, productid, qty, unitprice,
    0.35 * COUNT(CASE WHEN rownumcust = 1 THEN productid END) OVER(PARTITION BY custid)
         / COUNT(CASE WHEN rownumall = 1 THEN productid END) OVER() AS varietydiscount,
    CASE WHEN qty >= 10 THEN 0.06 ELSE 0 END AS qtydiscount
  FROM C1
)
SELECT custid,
  CAST(SUM( qty * unitprice * linediscount ) AS NUMERIC(12, 2)) AS rebate
FROM C2
  CROSS APPLY ( VALUES( CASE WHEN varietydiscount + qtydiscount > 0.10 THEN 0.10
                             ELSE varietydiscount + qtydiscount END ) )
    AS A(linediscount)
GROUP BY custid;
The query in the CTE C1 computes row numbers that number the rows within each distinct combination of customer and product (rownumcust), as well as row numbers that number the rows within each distinct product (rownumall). Notice the use of arbitrary window ordering (ORDER BY (SELECT NULL)), since you don’t really care about the order in which the row numbers are assigned, only that unique row numbers starting with 1 are assigned within each partition.
The query in the CTE C2 applies windowed COUNT aggregates to filtered values, using CASE expressions that return only one occurrence of each distinct product ID, effectively achieving a distinct count. This query uses the results to compute the variety discount, and also computes the order line’s quantity discount.
The outer query then computes the final line discount, and groups and aggregates the result to compute the total customer rebate.
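To see the row-number trick in isolation, here’s a minimal sketch against the table T1 from the opening example (created later, in the Solution 4 section); the aliases rownumgrp, rownumall, countgrp and countall are mine:

WITH C AS
(
  SELECT id, grp, val,
    -- number the rows within each distinct group/value combination
    ROW_NUMBER() OVER(PARTITION BY grp, val ORDER BY (SELECT NULL)) AS rownumgrp,
    -- number the rows within each distinct value
    ROW_NUMBER() OVER(PARTITION BY val ORDER BY (SELECT NULL)) AS rownumall
  FROM dbo.T1
)
SELECT id, grp, val,
  -- count only the first occurrence of each distinct value, i.e., a distinct count
  COUNT(CASE WHEN rownumgrp = 1 THEN val END) OVER(PARTITION BY grp) AS countgrp,
  COUNT(CASE WHEN rownumall = 1 THEN val END) OVER() AS countall
FROM C;

This produces the same countgrp and countall values as the desired output shown at the start of the article.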
The plan for this query (against the big tables) is shown in Figure 2.
Figure 2 - Plan for Solution 2 under row mode
The data is pulled from the two input base tables and joined only once, and the two row numbers are computed against the result of the join. But then there are two expensive rounds in which the result is spooled and the spool is read twice (once to get the detailed state of the data, and again to compute the COUNT aggregate), and the two results are joined. One round does all this work to compute the customer counts, and another round does it to compute the grand count.
This query took 24 seconds to complete on my system. Most of the inefficiency is due to the spooling-related work.
This solution can also work for other distinct aggregates such as SUM(DISTINCT …).
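As a hedged sketch (again, not part of the original challenge), the same trick could emulate a windowed SUM(DISTINCT qty) per customer by numbering the rows within each distinct customer/quantity combination and summing only the first occurrences; the aliases rownumqty and custdistinctqtysum are mine, and the measure is made up just to illustrate the pattern:

WITH C1 AS
(
  SELECT O.custid, O.orderid, OD.productid, OD.qty, OD.unitprice,
    ROW_NUMBER() OVER(PARTITION BY O.custid, OD.qty
                      ORDER BY (SELECT NULL)) AS rownumqty
  FROM Sales.Orders AS O
    INNER JOIN Sales.OrderDetails AS OD
      ON O.orderid = OD.orderid
  WHERE O.orderdate >= '20160101'
    AND O.orderdate < '20160201'
)
SELECT custid, orderid, productid, qty, unitprice,
  -- sum only the first occurrence of each distinct qty per customer,
  -- emulating SUM(DISTINCT qty) OVER(PARTITION BY custid)
  SUM(CASE WHEN rownumqty = 1 THEN qty END) OVER(PARTITION BY custid) AS custdistinctqtysum
FROM C1;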
Solution 3: Using MAX(DENSE_RANK)
The third solution I’ll present (call it Solution 3) uses a general form similar to the one in Solution 2, but instead of the ROW_NUMBER and COUNT window functions, it uses the DENSE_RANK and MAX window functions. The key observation is that the maximum dense rank value is the distinct count; it’s as simple as that. In the previous solution’s CTE C1 you replace the ROW_NUMBER calculations with DENSE_RANK calculations: one partitioned by customer and ordered by product, and one ordered by product across all rows. In C2 you replace the COUNT calculations with MAX applied to the dense rank results that you computed in C1. The rest is the same. Here’s the complete solution’s code:
WITH C1 AS
(
  SELECT O.custid, O.orderid, OD.productid, OD.qty, OD.unitprice,
    DENSE_RANK() OVER(PARTITION BY O.custid ORDER BY OD.productid) AS drkcust,
    DENSE_RANK() OVER(ORDER BY OD.productid) AS drkall
  FROM Sales.Orders AS O
    INNER JOIN Sales.OrderDetails AS OD
      ON O.orderid = OD.orderid
  WHERE O.orderdate >= '20160101'
    AND O.orderdate < '20160201'
),
C2 AS
(
  SELECT custid, orderid, productid, qty, unitprice,
    0.35 * MAX(drkcust) OVER(PARTITION BY custid)
         / MAX(drkall) OVER() AS varietydiscount,
    CASE WHEN qty >= 10 THEN 0.06 ELSE 0 END AS qtydiscount
  FROM C1
)
SELECT custid,
  CAST(SUM( qty * unitprice * linediscount ) AS NUMERIC(12, 2)) AS rebate
FROM C2
  CROSS APPLY ( VALUES( CASE WHEN varietydiscount + qtydiscount > 0.10 THEN 0.10
                             ELSE varietydiscount + qtydiscount END ) )
    AS A(linediscount)
GROUP BY custid;
The plan for Solution 3 is shown in Figure 3.
Figure 3 - Plan for Solution 3 under row mode
As you can see, this plan is very similar to the plan for Solution 2. The DENSE_RANK function is computed very similarly to ROW_NUMBER, and MAX is computed very similarly to COUNT. Therefore, unsurprisingly, this solution is also quite slow, likewise taking 24 seconds to complete on my machine.
Clearly, this solution is very specific to computing a distinct count; it cannot be used to compute other distinct aggregates.
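To see the MAX-over-DENSE_RANK idea in isolation, here’s a minimal sketch against the table T1 from the opening example (created in the next section); the aliases drkgrp, drkall, countgrp and countall are mine:

WITH C AS
(
  SELECT id, grp, val,
    -- dense rank of the value within its group and across all rows
    DENSE_RANK() OVER(PARTITION BY grp ORDER BY val) AS drkgrp,
    DENSE_RANK() OVER(ORDER BY val) AS drkall
  FROM dbo.T1
)
SELECT id, grp, val,
  -- the maximum dense rank is the distinct count
  MAX(drkgrp) OVER(PARTITION BY grp) AS countgrp,
  MAX(drkall) OVER() AS countall
FROM C;

Again, this returns countgrp 3 for GRP1, 2 for GRP2, and countall 4 in every row.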
Solution 4: Using DENSE_RANK asc + DENSE_RANK desc - 1
The fourth solution that I’ll present (call it Solution 4) is the most creative. It was proposed in response to a forum question on stackoverflow.com that can be found here.
The idea is that you compute two dense rank values—one ascending (call it drkasc) and another descending (call it drkdesc). The sum of the two is always exactly one more than the distinct count, so to compute the distinct count, sum the two and subtract 1. For simplicity, let me first demonstrate the technique against the table T1 which I mentioned earlier. Use the following code to create and populate the table:
DROP TABLE IF EXISTS dbo.T1;

SELECT id, grp, val
INTO dbo.T1
FROM ( VALUES
         ( 1, 'GRP1', 'A'),
         ( 2, 'GRP1', 'A'),
         ( 3, 'GRP1', 'B'),
         ( 4, 'GRP1', 'C'),
         ( 5, 'GRP1', 'C'),
         ( 6, 'GRP2', 'A'),
         ( 7, 'GRP2', 'A'),
         ( 8, 'GRP2', 'D'),
         ( 9, 'GRP2', 'D'),
         (10, 'GRP2', 'D') ) AS T1(id, grp, val);

ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY(id);
The following code computes the grand distinct count of values (in the val column) alongside the detailed rows:
WITH C AS
(
  SELECT id, grp, val,
    DENSE_RANK() OVER(ORDER BY val) AS drkasc,
    DENSE_RANK() OVER(ORDER BY val DESC) AS drkdesc
  FROM dbo.T1
)
SELECT id, grp, val, drkasc, drkdesc,
  drkasc + drkdesc - 1 AS distinctcount
FROM C
ORDER BY val;
This code generates the following output:
id  grp  val  drkasc  drkdesc  distinctcount
--- ---- ---- ------- -------- --------------
1   GRP1 A    1       4        4
2   GRP1 A    1       4        4
6   GRP2 A    1       4        4
7   GRP2 A    1       4        4
3   GRP1 B    2       3        4
4   GRP1 C    3       2        4
5   GRP1 C    3       2        4
8   GRP2 D    4       1        4
9   GRP2 D    4       1        4
10  GRP2 D    4       1        4
As you can see, dense rank ascending starts with 1 and increases by one with each step, and dense rank descending starts with the distinct count and decreases by 1 with each step. Therefore, their sum in all rows is always one more than the distinct count. So, to compute the distinct count, you add the two and subtract 1.
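Since the challenge also needs a per-customer distinct count, it’s worth noting that the same trick works within a partition. Here’s a minimal sketch against dbo.T1 that adds PARTITION BY grp to both dense ranks to compute the per-group distinct count; the aliases drkgrpasc, drkgrpdesc and countgrp are mine:

WITH C AS
(
  SELECT id, grp, val,
    DENSE_RANK() OVER(PARTITION BY grp ORDER BY val) AS drkgrpasc,
    DENSE_RANK() OVER(PARTITION BY grp ORDER BY val DESC) AS drkgrpdesc
  FROM dbo.T1
)
SELECT id, grp, val,
  -- ascending + descending dense rank - 1 = distinct count within the group
  drkgrpasc + drkgrpdesc - 1 AS countgrp
FROM C
ORDER BY grp, val;

This returns 3 for every GRP1 row and 2 for every GRP2 row, which is exactly what Solution 4 does below with PARTITION BY O.custid.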
Here’s how you apply this clever idea in Solution 4 to this article’s challenge:
WITH C AS
(
  SELECT O.custid, O.orderid, OD.productid, OD.qty, OD.unitprice,
    0.35 * ( DENSE_RANK() OVER(PARTITION BY O.custid ORDER BY OD.productid)
           + DENSE_RANK() OVER(PARTITION BY O.custid ORDER BY OD.productid DESC) - 1 )
         / ( DENSE_RANK() OVER(ORDER BY OD.productid)
           + DENSE_RANK() OVER(ORDER BY OD.productid DESC) - 1 ) AS varietydiscount,
    CASE WHEN qty >= 10 THEN 0.06 ELSE 0 END AS qtydiscount
  FROM Sales.Orders AS O
    INNER JOIN Sales.OrderDetails AS OD
      ON O.orderid = OD.orderid
  WHERE O.orderdate >= '20160101'
    AND O.orderdate < '20160201'
)
SELECT custid,
  CAST(SUM( qty * unitprice * linediscount ) AS NUMERIC(12, 2)) AS rebate
FROM C
  CROSS APPLY ( VALUES( CASE WHEN varietydiscount + qtydiscount > 0.10 THEN 0.10
                             ELSE varietydiscount + qtydiscount END ) )
    AS A(linediscount)
GROUP BY custid;
The plan for Solution 4 is shown in Figure 4.
Figure 4 - Plan for Solution 4 under row mode
The good news in this plan is that the base data is read and joined only once, and that there’s no spooling taking place. The bad news though is that the bulk of the data needs to be sorted twice to support the dense rank calculations. Using only row mode operators, this plan took 15 seconds to complete on my machine. It’s better than Solution 2 and Solution 3, but not as fast as Solution 1.
Like Solution 3, Solution 4 is limited to distinct count calculations and cannot be used for other types of distinct aggregates.
Performance test
This section provides the sample data and revised queries that I used to test the performance of the solutions against bigger tables than the original ones in the sample database.
Use the following code to create and populate the Sales.OrdersBig (~1M rows) and Sales.OrderDetailsBig (~3M rows) tables:
USE TSQLV4;

DROP TABLE IF EXISTS Sales.OrderDetailsBig, Sales.OrdersBig;

SELECT
  ISNULL(N.n * 100000 + O.orderid, 0) AS orderid,
  O.custid, O.empid, O.orderdate, O.requireddate, O.shippeddate, O.shipperid,
  O.freight, O.shipname, O.shipaddress, O.shipcity, O.shipregion,
  O.shippostalcode, O.shipcountry
INTO Sales.OrdersBig
FROM Sales.Orders AS O
  CROSS JOIN dbo.Nums AS N
WHERE N.n <= 20000
  AND O.orderdate >= '20160101'
  AND O.orderdate < '20160201';

CREATE UNIQUE CLUSTERED INDEX idx_od_oid ON Sales.OrdersBig(orderdate, orderid);
ALTER TABLE Sales.OrdersBig ADD CONSTRAINT PK_OrdersBig PRIMARY KEY NONCLUSTERED(orderid);
CREATE INDEX idx_cid_od ON Sales.OrdersBig(custid, orderdate);

SELECT
  ISNULL(N.n * 100000 + OD.orderid, 0) AS orderid,
  OD.productid, OD.unitprice, OD.qty, OD.discount
INTO Sales.OrderDetailsBig
FROM Sales.OrderDetails AS OD
  CROSS JOIN dbo.Nums AS N
WHERE OD.orderid IN
        (SELECT O.orderid
         FROM Sales.Orders AS O
         WHERE O.orderdate >= '20160101'
           AND O.orderdate < '20160201')
  AND N.n <= 20000;

ALTER TABLE Sales.OrderDetailsBig ADD CONSTRAINT PK_OrderDetailsBig PRIMARY KEY(orderid, productid);

CREATE NONCLUSTERED INDEX idx_oid_i_pid_qty_price
  ON Sales.OrderDetailsBig(orderid) INCLUDE(productid, qty, unitprice);
Use the following revised queries to test the aforementioned four solutions against the bigger tables:
-- Solution 1
WITH Base AS
(
  SELECT O.custid, O.orderid, OD.productid, OD.qty, OD.unitprice
  FROM Sales.OrdersBig AS O
    INNER JOIN Sales.OrderDetailsBig AS OD
      ON O.orderid = OD.orderid
  WHERE O.orderdate >= '20160101'
    AND O.orderdate < '20160201'
),
CustCounts AS
(
  SELECT custid, COUNT(DISTINCT productid) AS custproductcnt
  FROM Base
  GROUP BY custid
),
AllCount AS
(
  SELECT COUNT(DISTINCT productid) AS allproductcnt
  FROM Base
)
SELECT B.custid,
  CAST(SUM( B.qty * B.unitprice * A2.linediscount ) AS NUMERIC(12, 2)) AS rebate
FROM Base AS B
  INNER JOIN CustCounts AS CC
    ON B.custid = CC.custid
  CROSS JOIN AllCount AS AC
  CROSS APPLY ( VALUES( 0.35 * CC.custproductcnt / AC.allproductcnt,
                        CASE WHEN B.qty >= 10 THEN 0.06 ELSE 0 END ) )
    AS A1(varietydiscount, qtydiscount)
  CROSS APPLY ( VALUES( CASE WHEN A1.varietydiscount + A1.qtydiscount > 0.10 THEN 0.10
                             ELSE A1.varietydiscount + A1.qtydiscount END ) )
    AS A2(linediscount)
GROUP BY B.custid;

-- Solution 2
WITH C1 AS
(
  SELECT O.custid, O.orderid, OD.productid, OD.qty, OD.unitprice,
    ROW_NUMBER() OVER(PARTITION BY O.custid, OD.productid
                      ORDER BY (SELECT NULL)) AS rownumcust,
    ROW_NUMBER() OVER(PARTITION BY OD.productid
                      ORDER BY (SELECT NULL)) AS rownumall
  FROM Sales.OrdersBig AS O
    INNER JOIN Sales.OrderDetailsBig AS OD
      ON O.orderid = OD.orderid
  WHERE O.orderdate >= '20160101'
    AND O.orderdate < '20160201'
),
C2 AS
(
  SELECT custid, orderid, productid, qty, unitprice,
    0.35 * COUNT(CASE WHEN rownumcust = 1 THEN productid END) OVER(PARTITION BY custid)
         / COUNT(CASE WHEN rownumall = 1 THEN productid END) OVER() AS varietydiscount,
    CASE WHEN qty >= 10 THEN 0.06 ELSE 0 END AS qtydiscount
  FROM C1
)
SELECT custid,
  CAST(SUM( qty * unitprice * linediscount ) AS NUMERIC(12, 2)) AS rebate
FROM C2
  CROSS APPLY ( VALUES( CASE WHEN varietydiscount + qtydiscount > 0.10 THEN 0.10
                             ELSE varietydiscount + qtydiscount END ) )
    AS A(linediscount)
GROUP BY custid;

-- Solution 3
WITH C1 AS
(
  SELECT O.custid, O.orderid, OD.productid, OD.qty, OD.unitprice,
    DENSE_RANK() OVER(PARTITION BY O.custid ORDER BY OD.productid) AS drkcust,
    DENSE_RANK() OVER(ORDER BY OD.productid) AS drkall
  FROM Sales.OrdersBig AS O
    INNER JOIN Sales.OrderDetailsBig AS OD
      ON O.orderid = OD.orderid
  WHERE O.orderdate >= '20160101'
    AND O.orderdate < '20160201'
),
C2 AS
(
  SELECT custid, orderid, productid, qty, unitprice,
    0.35 * MAX(drkcust) OVER(PARTITION BY custid)
         / MAX(drkall) OVER() AS varietydiscount,
    CASE WHEN qty >= 10 THEN 0.06 ELSE 0 END AS qtydiscount
  FROM C1
)
SELECT custid,
  CAST(SUM( qty * unitprice * linediscount ) AS NUMERIC(12, 2)) AS rebate
FROM C2
  CROSS APPLY ( VALUES( CASE WHEN varietydiscount + qtydiscount > 0.10 THEN 0.10
                             ELSE varietydiscount + qtydiscount END ) )
    AS A(linediscount)
GROUP BY custid;

-- Solution 4
WITH C AS
(
  SELECT O.custid, O.orderid, OD.productid, OD.qty, OD.unitprice,
    0.35 * ( DENSE_RANK() OVER(PARTITION BY O.custid ORDER BY OD.productid)
           + DENSE_RANK() OVER(PARTITION BY O.custid ORDER BY OD.productid DESC) - 1 )
         / ( DENSE_RANK() OVER(ORDER BY OD.productid)
           + DENSE_RANK() OVER(ORDER BY OD.productid DESC) - 1 ) AS varietydiscount,
    CASE WHEN qty >= 10 THEN 0.06 ELSE 0 END AS qtydiscount
  FROM Sales.OrdersBig AS O
    INNER JOIN Sales.OrderDetailsBig AS OD
      ON O.orderid = OD.orderid
  WHERE O.orderdate >= '20160101'
    AND O.orderdate < '20160201'
)
SELECT custid,
  CAST(SUM( qty * unitprice * linediscount ) AS NUMERIC(12, 2)) AS rebate
FROM C
  CROSS APPLY ( VALUES( CASE WHEN varietydiscount + qtydiscount > 0.10 THEN 0.10
                             ELSE varietydiscount + qtydiscount END ) )
    AS A(linediscount)
GROUP BY custid;
Batch processing
All four solutions use calculations that can greatly benefit from batch processing, such as grouping and aggregation and window functions, especially starting with SQL Server 2016, where the batch mode Window Aggregate and batch mode Sort operators were introduced. If you create columnstore indexes on the tables involved, this will naturally enable SQL Server’s optimizer to use batch mode processing. Alternatively, as described in What You Need to Know about the Batch Mode Window Aggregate Operator in SQL Server 2016 Part 1, Part 2 and Part 3, you can define a dummy empty filtered columnstore index, and this will also enable the optimizer to use batch mode processing, even if the data is organized in rowstore format. It’s enough to create such an index on just one of the participating tables. For example, use the following code to create such a dummy index on Sales.OrdersBig (assuming you’re using SQL Server 2016 or later):
CREATE NONCLUSTERED COLUMNSTORE INDEX idx_cs_dummy
  ON Sales.OrdersBig(orderid)
  WHERE orderid = -1 AND orderid = -2;
Now rerun all four solutions.
I got the plan shown in Figure 5 for Solution 1.
Figure 5 - Plan for Solution 1 under batch mode
Now most of the heavy lifting, which handles the joining, grouping and aggregate operations, is done by batch mode hash operators. This solution finished in 1.825 seconds on my machine, about a third of the run time compared to using only row mode processing. Naturally, the performance of this solution would improve even further with actual columnstore indexes as opposed to a dummy one; but the amazing thing here is that if you need to keep a rowstore representation for other reasons, and can’t justify maintaining duplicate representations, this trick costs you nothing, yet gives you a great benefit.
The other three solutions (Solution 2, Solution 3 and Solution 4) see an even bigger factor of improvement when using batch processing compared to the original row mode plans. That’s because the new plans use the new batch mode Window Aggregate and batch mode Sort operators, which are dramatically more efficient compared to their row mode counterparts.
The new plan for Solution 2 with batch processing is shown in Figure 6.
Figure 6 - Plan for Solution 2 under batch mode
This solution ran for 2.748 seconds on my machine, compared to 24 seconds with the original row mode plan.
The plan for Solution 3 with batch processing is shown in Figure 7.
Figure 7 - Plan for Solution 3 under batch mode
This solution ran for 2.934 seconds on my machine, compared to 24 seconds with the original row mode plan.
The plan for Solution 4 with batch processing is shown in Figure 8.
Figure 8 - Plan for Solution 4 under batch mode
This solution completed in 2.332 seconds on my machine, compared to 15 seconds with the original row mode plan.
Conclusion
As of SQL Server 2017 CTP2, T-SQL doesn’t yet support distinct windowed aggregate calculations. In this article, I provided four workarounds: using joining and grouping, using row numbers, using dense rank and maximum, and using ascending and descending dense ranks. The first technique did significantly better than the rest when using only row mode processing. It was also the fastest with batch mode processing, but all solutions did reasonably well there, finishing in roughly 2 to 3 seconds. So, with batch processing, you could pretty much stick to your favorite solution and worry less about performance.
If you feel that you could benefit from built-in support for distinct windowed aggregate calculations instead of having to use such workarounds, make sure to add your vote to the connect item requesting this feature here.