Vote for Your Favorite Stickiest SQL Server Problem and Solution!

We need your help selecting the top two entries, which will win Visual Studio 2010 Ultimate with MSDN subscriptions! Please read the entries and vote for your favorite in the Instant Poll at the end of this blog post.

Megan Keller

October 1, 2010

11 Min Read

Recently, we asked SQL Server professionals to tell us about their stickiest SQL Server problem and how they solved it. Their stories had to be real-life, SQL Server–related problems and solutions. Up for grabs are two Visual Studio 2010 Ultimate with MSDN licenses, courtesy of SQL Mag author Derek Comingore.

We received three great entries, and now we need your help selecting the top two entries. Please read the entries below and vote for your favorite in the Instant Poll at the end of this blog post.

Entry #1:

“I resolved my stickiest SQL Server problem ever by using SQL Server Integration Services. Company ABC Inc. hired me in early 2009 to develop a data warehouse. A month into the job it became clear there were more pressing concerns than developing a data warehouse. The company’s application development team was being pushed to migrate the remaining client data from their legacy mainframe into the new Windows .NET application that used SQL Server 2005 for the database. While the data migration was on track to hit the deadline, the back-end EDI data file creation that the company’s clients depended on had not been started. Since the application developers were busy preparing for the migration, all data extracts were left to the SQL developers. The last migration, a couple of months earlier, had a similar issue: the EDI file creation for 10 files took 4 weeks and was completed 8 days after the migration, and the CEO of the million-dollar client had threatened to pull their business. Since I had the most experience with SSIS, they offered, and for some reason I agreed, to make me the project manager/lead developer for their last big migration. Again we had 4 weeks to create all the EDI files, but instead of 10 files we now had to create 60. On top of that, several of the files belonged to 2 clients that generated twice as much revenue as the last client.

Of the 60 files that had to be created, 50 were unique. My team lacked any real experience with SSIS, but they were all pretty good at T-SQL. On the first day I had the team concentrate on coding the logic for each file into stored procedures. I went to work on creating the SSIS packages, and by the end of the first week I had settled on a single package that would dynamically loop through any stored procedure and produce an output file. The single SSIS package used 10 control tables to loop through each file. The tables contained the data necessary to group files together for a client, including the destination for each file; other logic determined when each file should be generated, and additional tables stored any variables a stored procedure would need. The package would be fired once and loop through all client files to be generated that day. Before the package moved on to the next file, the control values, such as run time and end date, would be updated by executing dynamic SQL using the update logic unique to each file. Having a single SSIS package created several benefits for us. The SQL developers concatenated the columns together, which not only sped up formatting but also produced a single string column that the SSIS package could consume, regardless of content, to produce the output file. Along the way I learned to fake out the SSIS metadata by implementing an always-false IF statement at the top of each stored procedure; this prevented SSIS from throwing an error message when the file row length fluctuated anywhere from 200 to 500 characters per file. As each stored procedure was completed, the developer was able to test its output, allowing us to test along the way. I was then able to help develop the stored procedures, and we beat the deadline by a week. I had enough extra time to put together several reports showing file-creation metrics such as average processing time per file and average number of rows. Those metrics allowed us to flag for review files that would typically contain 100,000 rows on a Monday but had only 100 before they went to a client. The best part was that I was able to go on the family vacation I had planned before joining Company ABC Inc. during the week following the migration.
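To make the always-false IF trick concrete, a stored procedure following Ross’s pattern might look like the sketch below. This is illustrative only; the procedure, table, and column names are hypothetical, and the point is the shape: a dummy SELECT that never runs gives SSIS a stable, maximum-width single-column definition, while the real SELECT returns the pre-concatenated row text.

CREATE PROCEDURE dbo.usp_BuildEdiFile_Example   -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON;

    -- Always-false branch: never executes at run time, but gives SSIS a
    -- single-column metadata definition wide enough for any file layout.
    IF 1 = 0
        SELECT CAST(NULL AS varchar(500)) AS OutputRow;

    -- Real output: each row is already concatenated into one string column,
    -- so the same SSIS package can consume any file's content.
    SELECT CAST(OrderNumber AS varchar(20))
           + CustomerCode
           + CONVERT(varchar(10), ShipDate, 112) AS OutputRow
    FROM   dbo.EdiStaging;                      -- hypothetical source table
END;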

P.S. After a year of continually putting out fires unrelated to data warehousing, I decided to join a new company.”

—Ross McNeely

Entry #2:

“The problem – The company I work for has sophisticated rules by which it manages accounts receivable to determine whether a customer should go ‘on hold’ until their outstanding balance is paid down. After converting to a new system that had too simplistic an ‘on hold’ routine, the CFO saw that we needed something better.

I was asked to see if I could develop something that would allow us to employ all of our business rules and automate the process without causing the vendor any ‘support’ issues.

I found out to my horror that the system did not automatically calculate or store the aged AR balances. Calculation was a manual operation, and the balances were the result of a stored procedure performing calculations in temporary tables. There was no data to work with!

Finally, the vendor refused to provide any documentation or answer any of my questions, stating that this was not within the terms of our support agreement, but that we could hire their consulting services organization to assist.

The solution – Instead of incurring the cost of having the vendor consult on this project, I took the following approach.

#1 SQL Profiler was used on a test-bed server. The manual ‘calc’ operation was performed, with the resulting run captured. One-half day of examining the Profiler output revealed the names of the stored procedures that ran the calculation and aging computations.
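As an aside, a trace captured this way can also be examined outside the Profiler GUI by querying the saved trace file directly. This is an alternative technique, not what Randy describes; the file path and the filter text below are hypothetical.

-- Read a saved trace file and look for statements mentioning the calc routine.
SELECT  e.name       AS EventName,
        t.ObjectName,
        t.TextData,
        t.StartTime
FROM    sys.fn_trace_gettable(N'C:\Traces\ar_calc.trc', DEFAULT) AS t
        JOIN sys.trace_events AS e
            ON e.trace_event_id = t.EventClass
WHERE   t.TextData LIKE N'%calc%'          -- hypothetical filter
ORDER BY t.StartTime;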

#2 A job was created to call the ‘calc’ stored procedure. This was scheduled to run nightly just after close of business.
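A nightly SQL Agent job like this can be scripted rather than created through the GUI. The sketch below is a minimal example; the job name, database, procedure name, and 6:30 P.M. start time are placeholders, not details from the entry.

USE msdb;
GO
EXEC dbo.sp_add_job         @job_name = N'Nightly AR Calc';           -- hypothetical name
EXEC dbo.sp_add_jobstep     @job_name = N'Nightly AR Calc',
                            @step_name = N'Run calc procedure',
                            @subsystem = N'TSQL',
                            @database_name = N'AccountingDB',         -- hypothetical database
                            @command = N'EXEC dbo.usp_ARCalc;';       -- the calc proc found via Profiler
EXEC dbo.sp_add_jobschedule @job_name = N'Nightly AR Calc',
                            @name = N'Nightly after close of business',
                            @freq_type = 4,               -- daily
                            @freq_interval = 1,
                            @active_start_time = 183000;  -- 6:30 P.M.
EXEC dbo.sp_add_jobserver   @job_name = N'Nightly AR Calc';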

#3 Another 2 days were invested in understanding the second stored procedure, which did all the computations to age each customer’s AR balances, and how this was handled. Once I understood it, I cloned and modified this stored procedure as our own to create and use our base tables, ensuring we had aging data we could both work with and warehouse.

#4 From there, it was the relatively “simple” matter of building jobs that would run our rules on our base tables:

a) For each customer with an unapplied credit balance, beginning with the oldest aging bracket, apply the credits to the outstanding balances until either all balances are reduced to $0.00 or the unapplied credits are consumed; these are the customer’s adjusted balances (a sketch of this rule follows the list).

b) Examine the customer’s average days to pay

c) Examine the adjusted balance in each of the 5 aging buckets

d) Examine each customer to ascertain whether they are ‘standalone’ or the ‘child’ of a ‘parent’ account. If standalone, apply all rules pertaining to their average days to pay, adjusted aging bracket balances, and credit limit, and set a flag indicating whether to put the customer on hold. If a child of a parent, accumulate the average days to pay into an overall average, accumulate the aging bracket balances of all children, and compare the total to the parent’s credit limit to ascertain whether the entire set of customers, parent and children, goes on hold; then set their flags.

e) Finally, reference a list of ‘strategic’, never-put-on-hold customers maintained by accounting. If a customer’s on-hold flag is set but the customer is on this list, turn the flag off so that they are not put on hold.
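The sketch promised in rule (a): a set-based way to apply each customer’s unapplied credit to the buckets oldest-first might look like this, where dbo.AgingBase and dbo.CustomerCredit are hypothetical base tables and Bucket, BucketOrder (1 = oldest), Balance, and UnappliedCredit are illustrative columns.

SELECT  a.CustomerID,
        a.Bucket,
        CASE
            -- credit left after older buckets covers this bucket entirely
            WHEN c.UnappliedCredit - older.OlderBalance >= a.Balance THEN 0.00
            -- credit already used up by older buckets
            WHEN c.UnappliedCredit - older.OlderBalance <= 0         THEN a.Balance
            -- credit partially covers this bucket
            ELSE a.Balance - (c.UnappliedCredit - older.OlderBalance)
        END AS AdjustedBalance
FROM    dbo.AgingBase      AS a
        JOIN dbo.CustomerCredit AS c
            ON c.CustomerID = a.CustomerID
        CROSS APPLY (SELECT ISNULL(SUM(b.Balance), 0) AS OlderBalance
                     FROM   dbo.AgingBase AS b
                     WHERE  b.CustomerID  = a.CustomerID
                       AND  b.BucketOrder < a.BucketOrder) AS older;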

#5 After the ‘on hold’ base data is calculated for the night, compare the current night’s on-hold flags against the previous night’s flags. If the customer is on hold and today’s processing changed their flag to off, remove them from being on hold. If the customer’s previous on-hold flag was off and today’s processing puts them into an on-hold state, turn their on-hold flag on.
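That night-over-night comparison can be a single join between two snapshots; dbo.OnHoldToday and dbo.OnHoldYesterday below are hypothetical names for illustration.

-- Only customers whose on-hold state changed overnight need action.
SELECT  t.CustomerID,
        CASE WHEN y.OnHoldFlag = 1 AND t.OnHoldFlag = 0 THEN 'Release hold'
             WHEN y.OnHoldFlag = 0 AND t.OnHoldFlag = 1 THEN 'Place on hold'
        END AS Action
FROM    dbo.OnHoldToday     AS t
        JOIN dbo.OnHoldYesterday AS y
            ON y.CustomerID = t.CustomerID
WHERE   t.OnHoldFlag <> y.OnHoldFlag;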

Steps 1 through 5 are done in our proprietary base tables outside the vendor’s tables.

When this processing is complete, change the on-hold position of each customer in the vendor’s customer table. (This is no different from someone in accounting opening the customer’s form in the application and checking the on-hold checkbox, and it did not cause the vendor any ‘support issues’.)
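Pushing the result into the vendor’s table then only touches the same flag the application’s checkbox would. The vendor database, table, and column names below are placeholders, since the entry doesn’t name them.

-- Mirror our computed on-hold flag into the vendor's customer table.
UPDATE  v
SET     v.CreditHold = o.OnHoldFlag
FROM    VendorDB.dbo.Customer AS v               -- placeholder vendor table
        JOIN dbo.OnHoldToday  AS o
            ON o.CustomerID = v.CustomerID
WHERE   v.CreditHold <> o.OnHoldFlag;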

#6 Using WSS 3.0 and SSRS 2005, we created a portal and a series of reports. One report, for accounting, shows each customer’s adjusted aging balances, their average days to pay, and the reason they are on hold (which rule they violated; for instance, $250.00 over the credit limit). On the accounting report, if the customer is in a parent-child relationship, all the child customers are grouped under the parent account. For each service location (which doesn’t care about parent-child relationships, only the customers in its service area), a ‘flat’ report of its customers is provided with the same information.

The CFO’s analysis shows that our average days to collect is dropping 2.5 days per week, a trend that has continued since we put this ‘AR module’ into production.”

—Randy Volters

Entry #3:

“As a SQL/ETL developer, I was asked to transform order records from a vertical set of records into a flattened horizontal format in a flat file. In addition, I was told that customers from the same event who wanted free shipping would all need to be shipped in one shipment to the consultant’s address, although a customer could choose to pay for shipping directly to their home. Also, some orders are for a physical item and others are for a software order, and these get loaded to different vendors. I couldn’t just use the line number from the order because physical and software orders are combined. What I did was create a new column in the OrderLineItems table called ShippingOrderID that would be assigned to each physical order record per event using a windowing function:

CAST(CAST(RANK() OVER (ORDER BY oli.EventID, a.AddressLine1) AS varchar(10))
     + CAST(oli.EventID AS varchar(10)) AS bigint) AS ShippingOrderID


Because the ranking repeats from one event to the next, it would not, by itself, provide a good ID to key off of, so I appended the EventID, which is never repeated. The combination of the two creates not only a unique key but a smart key in which I can easily recognize the EventID. Next, I needed to assign a set of row numbers within each ShippingOrderID, which I did with a different windowing function:

ROW_NUMBER() OVER (PARTITION BY oli.ShippingOrderID ORDER BY oli.ShippingOrderID) AS SC_RowNumber
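For context, one way these two expressions might be applied back to OrderLineItems is a two-pass update through common table expressions. This is only a sketch under that assumption; the CTE names are illustrative, and the join to Addresses mirrors the one used in the final query below.

-- Pass 1: assign the event-scoped smart key to each physical line item.
WITH k AS (
    SELECT oli.ShippingOrderID,
           CAST(CAST(RANK() OVER (ORDER BY oli.EventID, a.AddressLine1) AS varchar(10))
                + CAST(oli.EventID AS varchar(10)) AS bigint) AS NewShippingOrderID
    FROM   dbo.OrderLineItems AS oli
           INNER JOIN dbo.Addresses AS a ON a.ProwessOrderID = oli.OrderID
)
UPDATE k SET ShippingOrderID = NewShippingOrderID;

-- Pass 2: number the rows within each shipping order.
WITH n AS (
    SELECT oli.SC_RowNumber,
           ROW_NUMBER() OVER (PARTITION BY oli.ShippingOrderID
                              ORDER BY oli.ShippingOrderID) AS NewRowNumber
    FROM   dbo.OrderLineItems AS oli
)
UPDATE n SET SC_RowNumber = NewRowNumber;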


These updates then allowed for the final select into a flat file that looks like this:

SELECT  NULL AS CONTINUED,
        oli.CreatedDate AS ORDER_DATE,
        CONVERT(numeric(10, 0), oli.ShippingOrderID) AS ODR_NUM,
        oli1.SKU AS PRODUCT01, oli1.Quantity AS QUANTITY01,
        oli2.SKU AS PRODUCT02, oli2.Quantity AS QUANTITY02,
        oli3.SKU AS PRODUCT03, oli3.Quantity AS QUANTITY03,
        oli4.SKU AS PRODUCT04, oli4.Quantity AS QUANTITY04,
        oli5.SKU AS PRODUCT05, oli5.Quantity AS QUANTITY05,
        oli1.OrderID AS CUSTOM01, oli2.OrderID AS CUSTOM02,
        oli3.OrderID AS CUSTOM03, oli4.OrderID AS CUSTOM04,
        oli5.OrderID AS CUSTOM05
FROM    dbo.OrderLineItems AS oli
        INNER JOIN dbo.Addresses AS a2
            ON oli.OrderID = a2.ProwessOrderID
           AND a2.AddressTypeID = 2
           AND oli.SentToSC IS NULL
        INNER JOIN dbo.OrderTotals AS ot
            ON oli.OrderID = ot.OrderId
           AND ot.OrderType = 'PS'
        INNER JOIN dbo.OrderLineItems AS oli1
            ON oli.ShippingOrderID = oli1.ShippingOrderID
           AND oli.SC_RowNumber = oli1.SC_RowNumber
           AND oli1.SC_RowNumber = 1
        LEFT OUTER JOIN dbo.OrderLineItems AS oli2
            ON oli.ShippingOrderID = oli2.ShippingOrderID
           AND oli.SC_RowNumber = oli1.SC_RowNumber
           AND oli2.SC_RowNumber = 2
        LEFT OUTER JOIN dbo.OrderLineItems AS oli3
            ON oli.ShippingOrderID = oli3.ShippingOrderID
           AND oli.SC_RowNumber = oli1.SC_RowNumber
           AND oli3.SC_RowNumber = 3
        LEFT OUTER JOIN dbo.OrderLineItems AS oli4
            ON oli.ShippingOrderID = oli4.ShippingOrderID
           AND oli.SC_RowNumber = oli1.SC_RowNumber
           AND oli4.SC_RowNumber = 4
        LEFT OUTER JOIN dbo.OrderLineItems AS oli5
            ON oli.ShippingOrderID = oli5.ShippingOrderID
           AND oli.SC_RowNumber = oli1.SC_RowNumber
           AND oli5.SC_RowNumber = 5
        INNER JOIN dbo.SKUs AS sku
            ON oli.SKU = sku.ItemNumber
           AND sku.Subscription = 0


You may ask why I joined the OrderLineItems table (a “many” table) to itself in this query rather than creating a join between an OrderTotals table and the OrderLineItems table. In part it was because multiple orders were part of a single event, and in part because some of these changes were thrown at me late in the game, after I had already designed and built the database. So I had to figure out how to avoid a many-to-many join that was creating a Cartesian effect. I overcame this by adding constraints in the join stack, and I could then tie each item to its original OrderID in the custom field.”

—Brent Miller
