Scalability Enhancements in SQL Server 7.0
To push SQL Server up the enterprise ladder, Microsoft introduces several scalability enhancements in version 7.0.
October 4, 1999
Microsoft refreshes its product from the ground up
As 2000 approaches, one of the biggest challenges that Microsoft faces with Windows NT is scalability. Addressing this challenge was the company's primary directive behind developing SQL Server 7.0. Although SQL Server 6.5 provides an excellent departmental database, users widely acknowledge that the software isn't scalable to the enterprise level. Thus, many enterprise resource planning (ERP) vendors don't support SQL Server 6.5. The ERP vendors that support SQL Server 6.5 often do so begrudgingly, recommending it only for small implementations.
SQL Server 7.0 dramatically changes this situation. Microsoft has refreshed the product from the ground up. Numerous architectural enhancements have boosted SQL Server 7.0's scalability into the enterprise arena. As a result, ERP vendors have readily adopted SQL Server 7.0. In this article, I describe SQL Server 7.0's major scalability enhancements.
Keeping Up with the Hardware
For the past several years, hardware enhancements have outpaced software developments. By the end of 1999, 8-way servers based on Intel's new Profusion chipset will be widely available. The biggest stepping-stone toward high-end scalability is software's ability to fully utilize underlying hardware platforms. One of SQL Server 7.0's primary improvements is dynamic resource allocation. SQL Server 7.0 dynamically allocates memory from the OS in response to system demands and continues to allocate memory until it uses all of a system's available memory. When database demands taper off, the system gradually returns the unused memory to the OS.
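You can bound this dynamic behavior through the min server memory and max server memory configuration options. A minimal sketch, using illustrative values of 64MB and 512MB (by default, SQL Server 7.0 manages memory entirely on its own):

-- Enable advanced options so the memory settings are visible
EXEC sp_configure 'show advanced options', 1
RECONFIGURE WITH OVERRIDE
GO
-- Set an illustrative floor and ceiling (in MB) for dynamic memory allocation
EXEC sp_configure 'min server memory', 64
EXEC sp_configure 'max server memory', 512
RECONFIGURE
GO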
In conjunction with NT Server, Enterprise Edition (NTS/E), SQL Server 7.0 Enterprise Edition raises the ceiling for the total amount of physical memory available to the database server. NT Server 4.0, Standard Edition can access a maximum of 4GB of memory: NT reserves 2GB of memory, and applications such as SQL Server draw from the remaining 2GB. Running under NTS/E, SQL Server 7.0 Enterprise Edition can directly access as much as 3GB of physical memory. The ability to access additional memory lets SQL Server 7.0 Enterprise Edition better support large databases and provide high levels of performance.
SQL Server 7.0 databases reside in standard OS files rather than in the fixed-size devices that SQL Server 6.5 used. This new functionality lets SQL Server 7.0 databases dynamically grow as database tables become larger and eliminates the need for administrative intervention and accompanying downtime. You specify growth increments and a desired maximum database size when you create the database. SQL Server 7.0 databases continue to grow until they reach the maximum boundary or they use all the available disk storage.
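For example, the following CREATE DATABASE statement sets a growth increment and a maximum size for each file. The database name, file paths, and sizes here are hypothetical:

CREATE DATABASE Sales
ON PRIMARY
   (NAME = Sales_Data,
    FILENAME = 'C:\MSSQL7\DATA\Sales_Data.mdf',
    SIZE = 100MB,
    MAXSIZE = 2000MB,   -- desired maximum database size
    FILEGROWTH = 10MB)  -- growth increment
LOG ON
   (NAME = Sales_Log,
    FILENAME = 'C:\MSSQL7\DATA\Sales_Log.ldf',
    SIZE = 25MB,
    MAXSIZE = 500MB,
    FILEGROWTH = 5MB)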
One of SQL Server 7.0's most important high-end scalability enhancements is its support of very large databases (VLDBs). SQL Server 7.0 supports databases as large as 1,048,516TB—a massive increase over version 6.5's 1TB limitation. Microsoft's TerraServer database (http://terraserver.microsoft.com) represents the most visible SQL Server VLDB. At 1.4TB, the TerraServer database is arguably the Web's largest online database. The TerraServer database comprises a collection of satellite photographs and their associated geographical coordinates.
Another important hardware-related scalability enhancement in SQL Server 7.0 Enterprise Edition is its improved SMP system support. SQL Server 6.5 has limited support for SMP systems and doesn't support more than four processors. Significant enhancements to the underlying database engine let SQL Server 7.0 Enterprise Edition take full advantage of the 4-way SMP systems and the newer 8-way systems. The SQL Server 7.0 database engine uses intraquery parallelism to fully utilize multiple processors, letting the system assign different parts of a query to separate processors.
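You can tune this behavior through the max degree of parallelism configuration option. A brief sketch, using an illustrative limit of four processors (the default value of 0 lets SQL Server use all available CPUs):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE WITH OVERRIDE
-- Limit each parallel query to four processors (illustrative value)
EXEC sp_configure 'max degree of parallelism', 4
RECONFIGURE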
Microsoft's SQL Server 7.0 Enterprise Edition tests—which used several of the ERP vendors' benchmarking tools—revealed significant scalability enhancements using 4-way systems, as Figure 1 shows. SQL Server's new architecture takes advantage of the additional processors in high-end SMP systems. The upcoming 8-way systems will boost these figures substantially.
Scalable Architecture
In addition to improved hardware support, SQL Server 7.0 offers several architectural enhancements aimed at high-end scalability. Microsoft has also enhanced virtually all of SQL Server 7.0's core database capabilities. You'll find some of the most fundamental changes in the page size and the database engine's sequential I/O scans. SQL Server's default page size has increased from 2KB to 8KB. Extents have increased from 16KB to 64KB, and the default I/O size for scans has increased to 64KB. These larger structure sizes let the system process more data with each I/O operation and thus increase system throughput.
Microsoft has also made fundamental enhancements to the query processor. SQL Server 7.0's query processor can better manage enterprise-class applications' complex queries and decision-support requirements. To handle the unstructured requirements of ad hoc queries, the SQL Server 7.0 query processor performs more effective read-ahead and parallel I/O to improve table scan performance.
The SQL Server 7.0 query processor incorporates several new high-performance query-processing techniques to better support large databases and improve the performance of complex queries in, for example, data-warehousing applications. These new query-processing techniques include index intersection, merge joins, and hash joins. Index intersection uses the data values from one or more indexes to satisfy a query and thus reduces physical I/O. If the required information resides in the indexes, accessing the table's data is unnecessary. Merge joins combine two inputs that are already sorted on the join columns (typically through existing indexes) in a single pass over each input. The query processor uses hash joins when no appropriate indexes exist to satisfy a query. To perform a hash join, the query processor makes an initial rough pass (or hash) on the data and groups the data into subsets. The query processor can then quickly check these hash subsets before retrieving data from the base table. Microsoft directs all these techniques toward reducing physical I/O, which is the slowest component of a database system.
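If you want to experiment with these techniques, Transact-SQL join hints let you override the optimizer's choice. This sketch forces a hash join between two hypothetical Customers and Orders tables:

-- Force a hash join between two hypothetical tables when no
-- useful indexes exist on the join columns
SELECT c.CompanyName, o.OrderDate
FROM Customers AS c
INNER HASH JOIN Orders AS o
ON c.CustomerID = o.CustomerID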
To determine which processing technique can best satisfy a given query, SQL Server calculates the cost of the query. SQL Server's cost basis for a query is a function of the query type, the available indexes, the estimated processor time, and the estimated I/O requirements. By default, SQL Server automatically creates and maintains the distribution statistics that it uses to calculate these processing costs.
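You can also refresh or supplement these statistics manually. The table and column names below are hypothetical:

-- Rebuild the distribution statistics for a table
UPDATE STATISTICS Orders
-- Create column statistics the optimizer can use for cost estimates
CREATE STATISTICS OrderDateStats ON Orders (OrderDate)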
Clustering Support—More Is Better
Ensuring high availability is another hurdle in the road to high-end scalability. SQL Server 7.0 Enterprise Edition uses Microsoft Cluster Server (MSCS) for high-availability, mission-critical implementations. MSCS links two systems into one virtual system (i.e., a cluster). Each system in the cluster has a processor, memory, and disk storage. A private network connects the two systems, which must use a shared SCSI disk subsystem or fibre channel. The cluster provides automatic failover. If one of the systems (i.e., nodes) in the cluster fails, the remaining node takes over for the failed node. Figure 2 shows SQL Server 7.0 Enterprise Edition running with an MSCS cluster.
You can install SQL Server Enterprise Edition with MSCS in two modes: Active/Passive or Active/Active. In an Active/Passive setup, only one server in the cluster is active at a time. The second cluster node remains in a standby state. If the primary node fails, the SQL Server database activity switches to the standby server. When the standby node becomes active, SQL Server performs its usual system startup recovery processes, and any incomplete transactions roll back to ensure database integrity.
In an Active/Active setup, both nodes of the cluster are active simultaneously. However, the activity of the two systems isn't additive. Instead, the Active/Active setup is essentially the same as running two separate SQL Server systems. You can simultaneously access both SQL Server systems, and you can run applications on both nodes of the Active/Active cluster. If one of the nodes in an Active/Active cluster fails, the remaining node assumes the duties of the failed system. Again, when the second node takes over for the failed node, a second SQL Server system essentially starts up on the remaining node. During startup, SQL Server performs its usual recovery processes, and any incomplete transactions roll back.
Backing Up to the Scale
Although you might not immediately think about backup and restore in terms of scalability, the ability to effectively back up the database becomes crucial as the database grows. Backing up enterprise-size databases poses challenges. The sheer amount of data in a VLDB typically confounds efforts to schedule a backup of the entire database. Complicating this problem is the fact that many large databases require 99.9 percent availability or better.
SQL Server 7.0 addresses backup and restore scalability in two ways. First, SQL Server 7.0 uses a parallel backup methodology that lets the backup and restore operations scale according to the backup device's speed. Parallel backup writes to multiple devices simultaneously, increasing the throughput of backup and restore operations in proportion to the number of devices attached to the server. For example, if a backup using one tape drive typically takes 2 hours, adding a second tape drive cuts the backup time in half—to approximately 1 hour. For enterprise-class databases, using SQL Server's parallel backup capability means that adding more backup devices can significantly shorten backup time. Figure 3 illustrates SQL Server's backup scalability.
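In Transact-SQL, a parallel backup is simply a BACKUP DATABASE statement that names multiple devices. A minimal sketch, assuming a hypothetical Sales database and three local tape drives:

-- Stripe the backup across three tape devices in parallel
BACKUP DATABASE Sales
TO TAPE = '\\.\TAPE0',
   TAPE = '\\.\TAPE1',
   TAPE = '\\.\TAPE2'
WITH INIT  -- overwrite any existing backup sets on the media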
The data in Figure 3 is from a set of backup benchmark tests that Microsoft, Legato Systems, and Compaq performed. The companies ran the tests on a 263GB database using an 8-way Compaq AlphaServer 8400 with 8GB of memory, 2.2TB of disk storage, and as many as 32 Compaq AIT tape drives. As you can see in Figure 3, increases in backup rate are directly proportional to the number of drives in the system.
SQL Server 7.0 also lets you perform online backups. A necessity for applications that require 24 x 7 availability, SQL Server 7.0's online backup lets you back up databases while they're in use. The backup tests that Microsoft, Legato, and Compaq conducted also tested online backup performance. The companies tested SQL Server's online backup under medium, heavy, and very heavy system usage. The first test sustained a medium transaction load of 170 transactions per second (tps) before the backup. When online backup began, the figure dropped to 133tps; SQL Server still provided 78 percent of its typical throughput rate while running the online backup and its application load. A second test took place under a heavy transaction load of 197tps. When online backup began, the server managed 150tps—76 percent of its throughput. Under a very heavy transaction load of 278tps, online backup caused the numbers to dip to 188tps—still 68 percent of total throughput under very heavy load. You can find detailed information about Microsoft's backup performance benchmarks for SQL Server at http://www.microsoft.com/sql/productinfo/backup.htm.
Data Warehousing and OLAP Partitions
Data warehouses have massive data-storage requirements. They also need to provide high-performance ad hoc queries. SQL Server 7.0 supports data-warehousing and decision-support applications through its OLAP Services. Standard relational database queries must directly access all the pertinent rows in a database. To provide support for high-performance query operations, OLAP Services uses cubes and data aggregation. Cubes consist of a set of dimensions and measures that enable multidimensional data navigation. In many cases, the data that resides in each cube's cell is a summary or aggregation of the data in the multiple rows and columns. OLAP applications use this summary data to attain extremely high-speed query-processing capabilities.
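You define cubes and aggregations through OLAP Services' own tools rather than through Transact-SQL, but the underlying idea resembles a precomputed summary table. A conceptual sketch against a hypothetical SalesFact table:

-- Hand-rolled equivalent of an OLAP aggregation: precompute totals
-- by region and product line into a summary table
SELECT Region, ProductLine, SUM(SalesAmount) AS TotalSales
INTO SalesSummary
FROM SalesFact
GROUP BY Region, ProductLine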
Although cubes provide the logical definition of the data that resides in a data warehouse, the actual data is in partitions. Each cube must have at least one partition. However, SQL Server 7.0 Enterprise Edition lets you create multiple user-defined partitions. These partitions let you divide the cube between different data sources on separate servers. This capability provides massive scalability because it lets clusters of SQL Server and OLAP Services work in tandem. Figure 4 shows how you can split OLAP partitions among multiple data sources. The decision-support application sees the OLAP cube as one entity. However, the cube actually resides in three user-defined partitions.
Climbing into 2000
Microsoft expects upcoming enhancements in Windows 2000 (Win2K) and SQL Server 7.5 (code-named Shiloh) to push SQL Server's high end even further up the enterprise ladder. Win2K and SQL Server 7.5 will increase SQL Server's ability to support VLDBs. Running on Win2K Advanced Server (Win2K AS), SQL Server 7.5 will be able to address as much as 8GB of RAM. Win2K Datacenter Server (Datacenter) will further raise the roof by letting SQL Server 7.5 address as much as 64GB of memory via the Physical Address Extension (PAE). Additional hardware improvements in Datacenter will allow 16-way SMP with 32-way support available through various OEMs. Datacenter will also allow 4-node failover clustering. SQL Server 7.5 will include support for Active Directory (AD), advanced parallelism, cascading Declarative Referential Integrity (DRI) support, improved clustering support, and materialized views.
Microsoft has targeted several SQL Server 7.0 features toward boosting SQL Server into the arena of enterprise-class scalability. Although SQL Server 7.0 still might not scale up to the levels that the largest mainframe installations require, the new version can meet the needs of 98 percent of today's businesses.