Business Intelligence for the DBA, Part 2

Derek provides the third and final post in his series on BI for the DBA.

Derek Comingore

October 9, 2009

4 Min Read

Happy Weekend SQL Pros!

I figured Saturday morning was a good time to sit down and finish out this series of "BI for the DBA" blog posts here on SQL Server Magazine.

I’ll start off by saying please feel free to post your comments and questions here on the blog, or email me at [email protected]. I’ve found that taking questions is a very good way to improve my own knowledge and skills.

BI Adoption Obstacles

As a new BI entrepreneur who has made quite a few blunders (and learned along the way), I try my best to keep my finger on the "BI pulse." There are two major reasons why companies that want BI don’t engage further:

  • Traditional BI engagements are exactly as Michael Otey described: big, bet-the-business endeavors. As a result, ROI cannot be realized quickly

  • Lack of available BI expertise

I placed quite a bit of focus on the topic of Rapid BI in my prior posts because those solutions help a company overcome the first objection cited above. Regarding the second objection, companies that invest in BI produce more BI experts as a result (chicken and the egg). Support the DBA who wants to do BI by letting him or her develop a small data mart to "play with." Engage a consulting company for a pilot BI project. These are small, incremental steps that can produce initial BI-based value.

In an effort to give back to the community, I value the opportunity to assist those SQL Server DBAs and Developers who wish to transition to BI. I’m not the first and certainly won’t be the last BI community activist to discuss both the business and technical sides of business intelligence. If you have read my prior two posts, you should now have a pretty good idea of what BI is. Now it’s time for "How to implement BI"…

Let me start by saying this is a very large topic. There is no way I can explain every detailed BI concept in a book, let alone a blog post. One of the challenges of building and delivering BI solutions is the sheer volume of available architectures, technologies, and processes. With every BI project I participate in, I learn something new. So my first message is "don’t become so overwhelmed by the forest that you miss the trees." No one is an expert in every BI technology or platform. What you can do, however, is become an expert in selected areas and maintain a good understanding of the others.

A good way to begin learning BI is to comprehend the common architectures (how each tier works with another) and then dive into those areas you wish to master.

Using the image above as a guide for the remaining discussion, you can see that the process starts with the various OLTP databases the business wishes to collect data from. These targeted data sources provide the "copper" we will transform into "gold." We use Extract, Transform, and Load (ETL) processes to collect the data and mold it into a format suited to denormalized database schemas (data marts & warehouses).
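To make the ETL idea concrete, here is a minimal sketch of a single load step written in T-SQL. The oltp and mart schemas, tables, and columns are purely hypothetical; in a real project you would more likely build this logic in an ETL tool such as SSIS, but the shape of the work is the same.

    -- Hypothetical ETL step: extract customers from an OLTP source, flatten
    -- (denormalize) the region lookup, and load the result into a mart dimension.
    INSERT INTO mart.DimCustomer (CustomerKey, CustomerName, City, Region)
    SELECT  c.CustomerID,
            LTRIM(RTRIM(c.FirstName + ' ' + c.LastName)),  -- light cleansing/transformation
            c.City,
            r.RegionName                                    -- join replaces the lookup key
    FROM    oltp.Customer AS c
    JOIN    oltp.Region AS r
            ON r.RegionID = c.RegionID;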

There is an optional layer called the Operational Data Store (ODS). The ODS is a normalized solution that enables right-time analytics. If you do employ an ODS, your ETL processes must then move and transform the data into the downstream data mart or warehouse.
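As a rough illustration (again with invented table names), an ODS table keeps the same normalized shape as its OLTP source and is simply refreshed frequently, for example with an incremental MERGE:

    -- Hypothetical incremental refresh of a normalized ODS table from its OLTP source.
    MERGE ods.Orders AS tgt
    USING oltp.Orders AS src
        ON tgt.OrderID = src.OrderID
    WHEN MATCHED AND src.ModifiedDate > tgt.ModifiedDate THEN
        UPDATE SET tgt.OrderStatus  = src.OrderStatus,
                   tgt.OrderTotal   = src.OrderTotal,
                   tgt.ModifiedDate = src.ModifiedDate
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (OrderID, CustomerID, OrderStatus, OrderTotal, ModifiedDate)
        VALUES (src.OrderID, src.CustomerID, src.OrderStatus, src.OrderTotal, src.ModifiedDate);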

The data mart (or warehouse) tier consists simply of denormalized schemas designed for reporting and analytical consumption. These relational databases combine third-normal-form tables into star and snowflake schemas. Dimension tables contain descriptive attributes and hierarchies, while fact tables contain the actual measurements.
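Here is a minimal, hypothetical star schema to show the shape: two dimension tables and one fact table whose keys point back to them (the table and column names are invented for illustration only).

    -- Hypothetical star schema: two dimensions and one fact table.
    CREATE TABLE mart.DimDate (
        DateKey      int      NOT NULL PRIMARY KEY,   -- e.g. 20091009
        CalendarDate date     NOT NULL,
        [Month]      tinyint  NOT NULL,
        [Year]       smallint NOT NULL
    );

    CREATE TABLE mart.DimProduct (
        ProductKey   int           NOT NULL PRIMARY KEY,
        ProductName  nvarchar(100) NOT NULL,
        Category     nvarchar(50)  NOT NULL            -- hierarchy level: Category > Product
    );

    CREATE TABLE mart.FactSales (
        DateKey      int   NOT NULL REFERENCES mart.DimDate (DateKey),
        ProductKey   int   NOT NULL REFERENCES mart.DimProduct (ProductKey),
        SalesAmount  money NOT NULL,                   -- the measures
        Quantity     int   NOT NULL
    );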

The OLAP server contains cubes that are useful for performing Fast Analysis of Shared Multidimensional Information (FASMI). With a cube in place, a business analyst can slice and dice the data to gain a better understanding of how (and why) the business operates the way it does. Technically, there is a second major component of the analytics server called data mining. Data mining is unfortunately a seldom-used yet very powerful BI component. For now, just understand that data mining is an optional BI component that is employed for predictive analytics.
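In SQL Server terms the cube would live in Analysis Services and be queried with MDX, but the relational equivalent of a simple slice-and-dice question against the hypothetical star schema above looks something like this:

    -- Relational approximation of a cube query: sales by product category,
    -- sliced to a single year of the date dimension.
    SELECT  p.Category,
            d.[Year],
            SUM(f.SalesAmount) AS TotalSales
    FROM    mart.FactSales AS f
    JOIN    mart.DimProduct AS p ON p.ProductKey = f.ProductKey
    JOIN    mart.DimDate    AS d ON d.DateKey    = f.DateKey
    WHERE   d.[Year] = 2009                            -- the "slice"
    GROUP BY p.Category, d.[Year];                     -- the "dice"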

Finally, we have the consumption layer. BI consumption is where the "forest" grows quite large. Collaboration and analytical servers are usually deployed between the core BI infrastructure and the clients in the consumption layer. These servers provide the client applications with additional functionality beyond the pure information delivered by the core infrastructure.

There are numerous thick and thin BI client applications on the market today. What is consistent about BI consumption is the logical classification of functionality these applications provide:

  • Reporting (Tabular & Aggregative)

  • Analytical Applications that allow one to slice-and-dice cubes and metrics

  • Dashboards & Scorecards used for Performance Management

  • Self-Service applications which are used for on-the-fly reporting & analytical construction

  • Mobile Intelligence applications which facilitate consuming BI on the go

So that is how modern-day BI is implemented from an architectural overview. Hopefully, you now understand why companies implement BI and how it is implemented. For those of you new to BI I hope my three posts on "BI for the DBA" have helped you on your personal BI Voyage! Have a great weekend and thanks for your valuable time.
