The Future of SQL Server

Contributing editor and SQL Server MVP Brian Moran talked recently with Paul Flessner, Microsoft’s VP of SQL Server and middleware, about the evolution of SQL Server and its role in corporate IT environments.

Brian Moran

June 16, 2000

11 Min Read

An exclusive interview with Paul Flessner, Microsoft Vice President of SQL Server

Editor's Note: Contributing editor and SQL Server Most Valuable Professional (MVP) Brian Moran talked recently with Paul Flessner, Microsoft's vice president of SQL Server and middleware, about the evolution of SQL Server and its role in corporate IT environments. Flessner will be the keynote speaker at PASS 2000 North America Conference and Expo October 25 through 28 in San Francisco (for more information, visit the PASS Web site at http://sqlpass.org). Following is an excerpt from the interview. You can read the entire interview at www.sqlmag.com, InstantDoc ID 8993.

Brian Moran: Paul, we'd like to help readers understand the evolution path that SQL Server 2000 is on and the types of changes we might see in the Yukon [the code name for the post-2000 SQL Server release] time frame. We also want to drill down into interoperability features of all versions of SQL Server.

Paul Flessner: We can talk about SQL Server 2000 design points and what we're trying to accomplish, which will lead us into a good discussion about futures. We wanted to accomplish a couple of things in SQL Server 2000. The feedback about SQL Server 7.0 was pretty clear. A lot of customers said they really liked the feature set and thought it was competitive. They said, "You don't have every feature that the competitors do, but you have a good feature set to cover the key areas of the market, including OLTP [online transaction processing] and decision support." We're starting to work in the area of knowledge management with our full-text search feature, among others. We got very positive feedback about the upgrade experience, backward compatibility, and just the overall quality. I was talking with a customer the other day about upgrading to SQL Server 2000, and he said, "Well, you've got a high bar because on a 1 to 10 scale in terms of quality, SQL Server 7.0 is a 12." So we knew that we'd done a good job, but in some areas, customers were pushing us to keep going forward.

We also concentrated on providing more Internet support in SQL Server 2000. The Internet exploded while we built SQL Server 7.0. We started writing code for 7.0 in mid-1995, then shipped it at the very end of 1998. So we had to get some Internet features in pretty quickly. Everybody loves what we're doing in data warehousing, but they want more out-of-the-box ease of integration. Customers also told us that they felt we were listening to them. We're running 5 to 10 customers through our development shop each week now, and we sit and listen to them.

Moran: I recognize that you're soliciting great feedback from the customers you bring in and you're engineering the product to meet those needs. But I think that the common SQL Server user often feels that Microsoft is off doing its own thing and that a clear feedback channel to Microsoft's corporate level isn't available.

Flessner: We can always do more. We do provide some forums; MSDN [Microsoft Developer Network] is the biggest. You do have to subscribe to MSDN at some cost, but I don't believe the cost is prohibitive for any professional database developer or administrator. That's our broad-reach model, and you can ask questions and get answers in that forum. Having said that, maybe I need to get more feedback on our reach.

Moran: Talk about scalability for a moment. Obviously, the TPC [Transaction Processing Performance Council] numbers shook everything up. I think the results took Oracle by surprise. Now Oracle's response is that scale-out partitioning is nice, but the Microsoft solution isn't really practical for many needs today because of the way in which you need to implement the partitioning. And, of course, Oracle claims its product can still scale up much higher in a single SMP box. So, when might we see TPC-C scores based on a 64GB Windows 2000 Datacenter Server running 32 nodes or 32 CPUs? [For more information about SQL Server's TPC results, see Michael Otey, Editorial, "SQL Server Is Tops," June 2000].

Flessner: I want to back up to a higher level. There are two fundamental ways for a database to scale and two fundamental design points in the product's architecture. One is a shared-everything type of architecture, or a scale-up strategy. Those scenarios are the ones in which you see the big 32-way, 64-way, and 96-way SMP numbers that Oracle and Sun like to talk about. The shared-everything model is great for many—and even most—applications. But in terms of scalability, this model is restrictive. I can't quote specific numbers that show where throughput falls off as it relates to processors because the model is absolutely workload-dependent. But [the throughput fall-off] isn't the hardware's fault, and it's not the operating system's fault. The definition of the database architecture causes the scalability limitations in the shared-everything model. [For more information about shared-everything and shared-nothing architectures, see Michael Otey, Editorial, "Scaling Up vs. Scaling Out," July 2000.]

Let me give you an example. You hear a lot about parallelism in a database. We have parallel operations such as create index, query, scan, sort, and a parallel read operation for the inserts and updates. What we don't talk about is serial I/O, serial log access, and serial buffer management. Every time you finish one of those parallel operations, you go back to a serialized operation. The more processors you have, the faster they queue up and wait for one of those serialized operations.

So it's a fact of life that SMP scaling—hardware scaling—doesn't scale. Every application hits a knee in the curve, and then it just flattens out. All you're doing at that point is paying a lot of money to Sun and Oracle, and they're happy to take it. They're so happy to take it that they do anything they can to disparage our model of economics, both at the SMP level and the scale-out strategy, which is the second part of the discussion.
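The serialization bottleneck Flessner describes is the classic Amdahl's-law effect: once part of the workload must run serially, adding processors yields diminishing returns. A minimal Python sketch shows where the knee in the curve comes from (the 95 percent parallel fraction is a hypothetical figure for illustration, not a SQL Server measurement):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speedup when only part of the workload
    can run in parallel across n_processors; the rest is serialized
    (log access, buffer management, and so on)."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# Hypothetical workload: 95% parallelizable, 5% serialized.
for n in (4, 8, 16, 32, 64):
    print(f"{n:3d} processors -> {amdahl_speedup(0.95, n):.1f}x speedup")
```

On these assumed numbers, doubling the hardware from 32 to 64 processors buys only about a 23 percent throughput gain; that flattening is the knee Flessner refers to.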

The shared-nothing or scale-out strategy is a proven model in the dot-com world and in the ISV [independent software vendor] world. Look at SAP's scale-out strategy. The middle tier is the common model. You do your connection management and business logic at the middle tier, and you don't have to worry about the state of the data; that's managed at the back end. But if you need more connections or you need more throughput, you just add an application server. The economics of that have been proven over and over. We're advocating a proven model that has been around for 20 years.

Moran: The limitation today with this model is the manageability of the scale-out partition. Also, certain types of applications might lend themselves to the partitioning model, but other applications might not. In other words, federated databases in SQL Server 2000 aren't a catch-all solution for every problem.

Flessner: I've made it clear in my talks, even at the Windows 2000 launch, that this solution is a down payment on our scale-out strategy. What you get is transparency in query updates. You don't get transparency for manageability. So, I'm not overselling [the scale-out strategy]; it's not for all customers. It's for a very select, high-end set of customers that absolutely demand this level of scalability.

Moran: I'd like to get your perspective on Oracle and IBM. The shared-nothing model isn't Oracle's strong point, and Oracle would have trouble migrating to that model. Regarding IBM's technology in the shared-nothing model, it seems that there's a resurgence of popularity of DB2 on NT and that DB2 has some better management credentials in place because IBM has been doing this longer than SQL Server has.

Flessner: First, Oracle doesn't have shared-nothing technology. They're caught in the middle with a hybrid called shared-disk.

Moran: And that doesn't scale in a particularly graceful way.

Flessner: Oracle has to decide whether it's going to move the shared-disk model forward or jettison it.

Moran: Oracle has the buzz in the database market. Oracle is perceived as the leader, especially in the e-business space. But as I look at the technology, I think that some technologies that Microsoft is coming out with and some that IBM has had all along are superior to Oracle's architecture. Has there been a shift in mind share to Microsoft and IBM from Oracle? As true scalability becomes more important, will the shared-nothing issue drive that shift in mind share?

Flessner: I'm certainly pointing the ship in that direction here. Although IBM has had shared-nothing technology, the company has been focused on decision support, not OLTP, and that's restricted them. Believe me, if they had OLTP working for scale-out...

Moran: ...they'd have their TPC numbers published already.

Flessner: A long time ago. So, IBM doesn't have it figured out yet, although it's making progress. Quite honestly, I think IBM will do a good job. Informix [Software] is the same way. They had it figured out for decision support but not for OLTP, so Informix hasn't published yet, either. But I believe it's pretty clear both from a pure scalability perspective and from a pure economic perspective that scale-out is going to win. Think of the poor operations person who has just been to the board of directors and gotten a capital appropriation approved for a $4 million Sun box and said he wouldn't be back for 24 months. Four months later, he's back for another one. That's not what customers want to do.

Moran: From an interoperability and integration perspective, I'd like to talk about the blending of different data-tier technologies. As data mining evolves, SQL Server will have some tight integration with Commerce Server. There's Host Integration Server. DTS [Data Transformation Services] is a transformation tool, among other things. There are portions of BizTalk Server that do schema mapping, transformation, and workflow management. And XML starts to blend in with everything. Are there plans to take SQL Server components that are considered to be core, such as DTS, out of the SQL Server team and lock them down as part of the operating system?

Flessner: You can look at integration in a couple of ways. You can look at it from the perspective of Enterprise Application Integration behind the firewall. And clearly Host Integration Server plays a big role for us there. Heterogeneous queries play a big role for us. DTS certainly plays a role. Then that starts to cross the line outside the firewall, where you've got XML playing a huge role. BizTalk Server plays an important role for us with XML messaging and workflow scheduling. So at some point, those technologies cry out for some integration points. Some people get excited about a central point of integration, such as a catalog or repository. Some people want the flexibility of a self-describing protocol, which is what XML provides. We're a company that provides a robust, rich, and diverse platform, so you're going to see us with entry points in each of those technologies.

Moran: It seems as if you're reinventing the wheel a couple of times. From my understanding, BizTalk Server doesn't use any of the core COM components available for DTS, although you could argue that if DTS had a really clean XML provider, BizTalk Server could have used that provider. Are different product teams working at different paces on new products going to create competing standards within Microsoft—never mind the rest of the industry?

Flessner: I think there's some overlap, but not quite as much as you might fear. For example, BizTalk centers on the ability to take different transaction formats and convert them into an XML format. The DTS design point is data transformation rather than message transformation, and there will always be a need for that. Neither DTS nor BizTalk does a good job of going after ADO and OLE DB data sources and text files, which are important to the data warehousing market. Is there opportunity for some convergence in the future? Probably. At some point, if I have XML everywhere, I don't need any of those technologies. SQL Server can do all of the transformation, right? But that's a long time coming, and I doubt that any one protocol is ever going to take the entire market. One unified XML world with blended data-tier storage is a nice vision, but it's a long time coming.

Moran: Do you have any thoughts on what DBAs in corporate America should do to revamp their skill sets if they're intent on being at the upper end of the SQL Server professional market 6 or 12 months from now?

Flessner: XML is certainly an important technology that you're going to see increasing emphasis on. You can program XML in SQL Server 2000 in multiple ways. You can do it from a SQL-centric perspective or from an XML developer and DOM [Document Object Model] perspective. The most common ways customers get into trouble are by fundamentally not understanding the product, doing poor database design, not thinking about backup and recovery, not looking at application-level locking and potential application-design deadlocking, writing poorly designed ISAPIs [Internet Server APIs] that bring down their IIS servers, and not understanding the interplay between IIS and SQL Server. Specifically in Windows 2000, we've made some great advances in the robustness of IIS 5.0 and in the role that COM+ can play in multithreading your application and making it much, much faster. Good, old-fashioned data modeling is always a good thing. Understand SQL Server, how stored procedures work, and how connections work. I know it's easier said than done.
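As context for the SQL-centric approach Flessner mentions: in SQL Server 2000, a query such as SELECT CustomerID, City FROM Customers FOR XML AUTO (for example, against the Northwind sample database) returns the result set as XML, with each row becoming an element named after the table and each column an attribute. The Python sketch below illustrates that row-to-element mapping conceptually; it is not SQL Server's implementation, and the sample rows and root element are hypothetical:

```python
import xml.etree.ElementTree as ET

def rows_to_xml(table_name, rows):
    """Conceptual illustration of the mapping FOR XML AUTO performs:
    each row becomes an element named after the table, and each
    column becomes an attribute. (FOR XML AUTO itself returns a
    fragment; the <root> wrapper here is added for well-formedness.)"""
    root = ET.Element("root")
    for row in rows:
        ET.SubElement(root, table_name, {col: str(val) for col, val in row.items()})
    return ET.tostring(root, encoding="unicode")

# Hypothetical result set from: SELECT CustomerID, City FROM Customers
sample_rows = [{"CustomerID": "ALFKI", "City": "Berlin"},
               {"CustomerID": "ANATR", "City": "Mexico D.F."}]
print(rows_to_xml("Customers", sample_rows))
```

The same data can instead be returned element-centric (one child element per column) with the ELEMENTS option, which is the distinction between the SQL-centric and DOM-style perspectives Flessner draws.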
