Distributed Partitioned Views

Learn how to deploy the technology behind Microsoft's scale-out strategy


Editor's Note: This is the first article in a three-part series about distributed partitioned views in SQL Server 2000.

Online transaction processing (OLTP) environments and large Web site databases usually consist of many individual queries requesting or manipulating relatively small amounts of data. As the system grows, and users issue more queries against the database, DBAs usually try to improve response time by scaling up, which means increasing the server's power. You can add CPUs, replace CPUs with faster ones, add more memory, expand the network, or add faster disk drives with smarter controllers. But at some point, you exhaust the available resources for scaling up because you reach your machine's—or your budget's—limit. SQL Server 2000 introduces a solution for the growing need for more processing power—scaling out.

Part 2: Querying Distributed Partitioned Views

Part 3: Modifying Views with INSTEAD OF Triggers

When you scale out, you split huge tables into smaller tables, each of which is a subset, or partition, of the original table. You place each partition on a separate server. You manage each server independently, but together the servers form a federation. To access the data on any of the partitions, you define a view with the same name on each of the servers, making transparent the fact that the data is distributed among several nodes. A user or an application connecting to any of the servers can issue all Data Manipulation Language (DML) statements (i.e., SELECT, INSERT, UPDATE, and DELETE) against the view as if it were the original table. SQL Server 2000 intercepts the statements and routes them to the relevant servers. This configuration distributes the processing load among the federation members.

SQL Server 7.0 lets you create local partitioned views (see Claude Seidman, "Creating Horizontally Partitioned Views," April 2000). In SQL Server 7.0, you can also create partitioned views across multiple servers, but they aren't updateable, which greatly reduces their usefulness. In SQL Server 7.0 and earlier versions, any view based on a UNION operation is not updateable, whether it's on a single server or distributed across multiple servers. SQL Server 2000 removes this general restriction (letting users update a certain class of views based on a UNION statement) and introduces new optimization techniques for processing partitioned views. In this article, we discuss the new optimization techniques and show you how to set up and modify distributed partitioned views.

Setting Up Distributed Partitioned Views

Setting up distributed partitioned views involves three steps: setting up the linked servers, creating the partitioned tables, and creating the partitioned views.

Setting up the linked servers. Setting up distributed partitioned views requires a preliminary step that isn't necessary when you set up local partitioned views. Because partitioned tables are distributed across multiple servers, each server needs access to every other server. So, you need to configure all the servers as linked servers. Suppose you want to partition the tables Customers and Orders across three servers called Shire, Hobbiton, and Rivendell. Shire needs to point to Hobbiton and Rivendell; Hobbiton needs to point to Shire and Rivendell; and Rivendell needs to point to Shire and Hobbiton. Figure 1 shows the linked servers setup. Each arrow stands for a linked server configuration. Listing 1 contains the script to set up the linked servers on Shire, or Node1.

The @server parameter in the sp_addlinkedserver stored procedure specifies the logical name that we use to refer to the linked server, and the @datasrc parameter specifies the name of the server we want to access. Notice that in this script, remote server names consist of two parts separated by a backslash, as in server\instance. SQL Server 2000 supports multiple instances of SQL Server on the same server. So if you want to run these code examples on a single machine, you can install three instances of SQL Server 2000 and simply replace the server names used in the scripts with your server\instance names. For example, if you have a server called Server1, you can install three instances of SQL Server 2000 called Instance1, Instance2, and Instance3 and refer to them as Server1\Instance1, Server1\Instance2, and Server1\Instance3, respectively. (For more information about multiple instances in SQL Server 2000, see Kalen Delaney, Inside SQL Server, "Multiple Instances," July 2000, and "Managing Multiple Instances.")
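
Listing 1 isn't reproduced here, but the following sketch shows the general idea, assuming the logical names Node2 and Node3 and the Shiloh instance names this article uses; substitute your own server\instance names:

-- On Shire\Shiloh (Node1): define logical names for the other two nodes.
EXEC sp_addlinkedserver
  @server = 'Node2',
  @srvproduct = '',
  @provider = 'SQLOLEDB',
  @datasrc = 'Hobbiton\Shiloh'

EXEC sp_addlinkedserver
  @server = 'Node3',
  @srvproduct = '',
  @provider = 'SQLOLEDB',
  @datasrc = 'Rivendell\Shiloh'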

After you define the connection to each of the other two servers, use the procedure sp_serveroption to set lazy schema validation for each linked server. This setting optimizes performance by ensuring that the query processor won't request metadata for any of the linked tables until users request data from the remote member table. Microsoft introduced the lazy schema validation option in SQL Server 2000.
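
On Node1, for example, setting the option for both linked servers might look like this short sketch, which uses the logical names from the previous example:

-- Defer metadata checks on the remote member tables until they're actually accessed.
EXEC sp_serveroption 'Node2', 'lazy schema validation', 'true'
EXEC sp_serveroption 'Node3', 'lazy schema validation', 'true'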

When a user executes a distributed query on the linked server, the local server logs on to the linked server on behalf of the current user. If the following conditions are met, you don't need to configure linked server security:

  • Users connect to SQL Server by using Windows Authentication.

  • Security account delegation is available on the client machine (where the user interface is running) and on the local server. Security delegation is available with Windows 2000.

The configuration in this example meets these requirements, so you don't need to configure linked server security. The three servers are member servers in a Win2K domain called MIDDLE_EARTH, and the users connecting to the servers use Windows Authentication. If your configuration doesn't meet these security requirements, you need to configure linked server security by using the stored procedure sp_addlinkedsrvlogin.
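
If you do need it, linked server security is configured with a call along these lines; this is only an illustrative sketch, and the remote login name and password are placeholders:

-- Map all local logins to a specific SQL Server login on the linked server.
EXEC sp_addlinkedsrvlogin
  @rmtsrvname = 'Node2',
  @useself = 'false',
  @locallogin = NULL,
  @rmtuser = 'remote_user',
  @rmtpassword = 'remote_password'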

Next, run the script in Listing 2 to set up the linked servers on Hobbiton, or Node2. Last, run the script in Listing 3 to set up the linked servers on Rivendell, or Node3.

Creating the partitioned tables. After you configure the linked servers, and each server can communicate with the other servers, you're ready for the most important part of distributed partitioned views—creating the partitioned tables. On each of the servers, create a table that has the same structure as the original table that you're splitting. Each table will hold a horizontal slice of the original table. The key point in creating the partitioned tables is that each table contains a column called the partitioning column, and the value in this column determines which of the partitioned tables any row can be inserted into. The partitioning criteria, which you implement as a CHECK constraint, define the subset of the rows that each partition will host. The CHECK constraints on each table must be mutually exclusive. Each row that you want to insert must satisfy only one of the CHECK constraints, as we'll demonstrate.

We use a UNION statement to combine these tables into an updateable view. As we mentioned, views based on UNION statements are updateable in certain situations. If you want to maximize query efficiency, exploit the new optimization capabilities introduced with SQL Server 2000, and allow updates directly to the view, then you need to ensure that the following conditions are met in each of the tables:

  • The partitioning column needs to be part of the primary key of the underlying tables, must not allow NULLs, and can't be a computed column.

  • The CHECK constraint defined on the partitioning column that defines the partitioning criteria can use only the following operators: BETWEEN, AND, OR, <, <=, >, >=, =.

  • The tables can't have identity or timestamp columns, and none of the columns in any of the tables can have a DEFAULT constraint.

You need to answer two important questions: Which column will you use as the partitioning column, and which range of values does each partition need to host? The most efficient partitioning criterion is one that defines a range of values in each partitioned table that users are most likely to access locally and that results in rows distributed as evenly as possible. This criterion could be a range of customers, a range of part numbers, or a geographical region. This setup lets the local server process most of the queries, minimizing the need for distributed processing.

Now that you're aware of the requirements for the tables, you can create the tables. In this example, we show you how to create tables that are similar in structure to the Customers and Orders tables in the Northwind database. We'll distribute the rows among the tables based on the first letter of the customer ID: A through F will go in the first table, G through P in the second table, and Q through Z in the third table. The following code samples run in a database called testdb, which we created on each of the servers. Listing 4 shows the script that creates the first partitioned tables in the server Shire\Shiloh (Node1).
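
Listing 4 itself isn't reprinted here, but the following sketch shows the general shape of such a script; the column lists and data types are abbreviated for illustration, and only the keys, the partitioning column, and the CHECK constraints matter for this discussion:

-- On Shire\Shiloh (Node1), in the testdb database.
CREATE TABLE CustomersAF
(
  customerid  char(5)      NOT NULL
    CHECK (customerid BETWEEN 'AAAAA' AND 'FZZZZ'),
  companyname nvarchar(40) NOT NULL,
  CONSTRAINT PK_CustomersAF PRIMARY KEY (customerid)
)

CREATE TABLE OrdersAF
(
  orderid     int      NOT NULL,
  customerid  char(5)  NOT NULL
    CHECK (customerid BETWEEN 'AAAAA' AND 'FZZZZ'),
  orderdate   datetime NOT NULL,
  CONSTRAINT PK_OrdersAF PRIMARY KEY (orderid, customerid)
)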

Notice a few important points in the script. We have a CHECK constraint on the customerid column, which is part of the primary key in the CustomersAF and OrdersAF tables. This CHECK constraint ensures that only customers with customer IDs in the range AAAAA through FZZZZ can be in this table, and the optimizer will use this CHECK constraint to optimize queries and data modifications. If you want to partition the Orders table by customerid, you need to include the customerid column in the primary key. Partitioning the original Customers table by the customerid column is obvious, but why do we use the same partitioning criterion for the Orders table? We'll explain the partitioning criterion in an upcoming article when we demonstrate how to query the partitioned view. Note that for the purpose of guaranteeing uniqueness of rows, the customerid column doesn't have to be part of the primary key. However, to effectively maintain the partitioned data, customerid needs to be part of the key of the Orders table so that you can partition the Orders table by the customerid column.

Next, create the partitioned tables in the Hobbiton\Shiloh (Node2) server, with customer IDs in the range GAAAA through PZZZZ, as Listing 5 shows. And last, create the partitioned tables in the Rivendell\Shiloh (Node3) server, with customer IDs in the range QAAAA through ZZZZZ, as Listing 6 shows.

Creating the partitioned views. Now that you've set up the partitioned tables, you need to assemble them, which is probably the easiest part of the process. You define a view that assembles the rows from each table by using the UNION ALL operator. In each server, you have one local table and two remote tables, so the view looks slightly different in each server. You reference the local table by using only the table name, and you reference the remote tables by using the four-part table name, such as Node2.testdb.dbo.CustomersGP.
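
As a sketch, the view on Node1 might look roughly like this; the column list matches the abbreviated table definitions sketched earlier rather than the full Northwind structure:

-- On Shire\Shiloh (Node1): the local partition first, then the two remote partitions.
CREATE VIEW Customers
AS
SELECT customerid, companyname
FROM CustomersAF
UNION ALL
SELECT customerid, companyname
FROM Node2.testdb.dbo.CustomersGP
UNION ALL
SELECT customerid, companyname
FROM Node3.testdb.dbo.CustomersQZ

On Node2 and Node3, the view body is the same except that a different member table is the local one.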

Let's create a partitioned view in the Shire\Shiloh (Node1) server, as Listing 7 shows. After you create the view, users can start modifying and querying it if they're connected to Shire\Shiloh (Node1). Creating similar views on the other two servers enables the same modifications and queries, regardless of the server the users are connected to. Next, create similar views on Hobbiton\Shiloh (Node2), as Listing 8 shows. And last, create similar views on Rivendell\Shiloh (Node3), as Listing 9 shows.

Like the partitioned tables, the views need to meet a few conditions to be updateable and to exploit the new optimization capabilities that distributed partitioned views feature:

  • The view can't reference any table or column more than once.

  • Each SELECT list must reference all the columns participating in the primary key of the underlying tables.

  • The columns in the same ordinal position in the select list in all the select statements must be of exactly the same data type, precision, scale, and collation. And the partitioning column must be in the same ordinal position in all the select statements.

  • If a column exists in the base table but doesn't exist in the select list in the view, the column must allow NULLs.

Note that the last two requirements are much stricter than any requirement for creating local, nonpartitioned views. Simple views must have compatible data types in the corresponding positions of the select lists, but the types don't have to be exactly the same. Also, you can insert data into simple views even if columns from the base table that aren't included in the view don't allow NULLs but have a defined default. But if you're inserting data into a distributed partitioned view, the columns can't have defaults.

Modifying Distributed Partitioned Views

When you create distributed partitioned views, you need to ensure that the underlying tables and the views created on them meet certain criteria. Most statements that you use to modify a distributed partitioned view also need to meet certain requirements. Delete statements have no restrictions; you can delete any or all rows from the view. However, insert and update statements need to meet the following requirements for you to use them with distributed partitioned views:

  • An insert statement against the view must supply a value to the partitioning column.

  • If you are running a beta version of SQL Server 2000, an update statement against the view might fail if it attempts to modify the primary key. You can issue a delete statement followed by an insert statement instead of an update statement. Note that Microsoft announced that the final version of SQL Server 2000 will support updates to the primary key.

In this example, we ran the following modifications from our connection to Shire\Shiloh (Node1), but you can run the modifications from a connection to any of the servers. Make sure the Microsoft Distributed Transaction Coordinator (MSDTC) service is running, because these modifications result in a distributed transaction. You also need to set the XACT_ABORT session setting to ON. This setting determines whether an entire transaction rolls back if a runtime error occurs; more important here, the setting is required for data modification statements in an implicit or explicit transaction against most OLE DB providers, including SQL Server. Make sure you turn on the Graphical Execution Plan in Query Analyzer to see the detailed execution plan, or use SET STATISTICS PROFILE ON if you prefer to analyze the execution plan in textual mode.
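
As a minimal sketch, the session setup and the kind of insert that Listing 10 performs might look like the following; the column list matches the abbreviated table sketches used earlier rather than the full Northwind Customers structure:

-- Required for data modifications through the distributed partitioned view.
SET XACT_ABORT ON

-- Optional: textual execution plan instead of the graphical one.
-- SET STATISTICS PROFILE ON

-- Populate the view from the Northwind sample database.
INSERT INTO Customers (customerid, companyname)
SELECT CustomerID, CompanyName
FROM Northwind.dbo.Customers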

Now you can populate the Customers view with all the rows from the Customers table in the Northwind database, as Listing 10 shows. Notice that we ran the insert statement against the view. Figure 2 shows the execution plan, which illustrates how SQL Server processed the insert statement. Let's analyze the execution plan's important steps (and ignore one trivial step):

  • Step 1 (Clustered Index Scan): Read all of the rows from the local Customers table in the Northwind database.

  • Step 2 (Compute Scalar): For each of the rows read in Step 1, compute a new column called Ptn1034, which holds a value specifying which of the three ranges the row belongs to: 0 if the partitioning column falls in the range AAAAA through FZZZZ, 1 if it falls in GAAAA through PZZZZ, and 2 if it falls in QAAAA through ZZZZZ.

  • Step 4: Store the rows read in Step 1, along with the Ptn1034 column computed in Step 2, in a temporary table.

  • Steps 5 and 6: Read the temporary table and insert only the rows that match the criterion Ptn1034 = 0 into the local CustomersAF table.

  • Steps 7 through 9: Read the temporary table and insert only the rows that match the criterion Ptn1034 = 1 into the remote Node2.testdb.dbo.CustomersGP table.

  • Steps 10 through 12: Read the temporary table and insert only the rows that match the criterion Ptn1034 = 2 into the remote Node3.testdb.dbo.CustomersQZ table.

You can issue insert, update, and delete statements against the view. Try the modifications that Listing 11 shows against the view with a connection to Node1 and compare the execution plan for statements that modify a local table with the plan for statements that modify a remote table. You'll see that the optimizer splits and routes the modifications to the relevant nodes and can determine when to perform a local operation and when to perform a remote one.
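
Listing 11 isn't shown here, but modifications in the same spirit might look like the following sketch; the customer IDs and names are placeholders, chosen so that one row belongs to the local A through F partition and one to the remote Q through Z partition:

-- Routed to the local CustomersAF table on Node1.
INSERT INTO Customers (customerid, companyname)
VALUES ('AAAAA', 'Local Test Customer')

-- Routed to the remote Node3.testdb.dbo.CustomersQZ table.
INSERT INTO Customers (customerid, companyname)
VALUES ('ZZZZZ', 'Remote Test Customer')

-- Updates and deletes against the view are routed the same way.
UPDATE Customers SET companyname = 'Renamed Customer' WHERE customerid = 'ZZZZZ'
DELETE FROM Customers WHERE customerid IN ('AAAAA', 'ZZZZZ')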

Long Nights, Partitioned Views

Microsoft code-named the distributed partitioned view project Coyote, and we imagine that the Coyote team worked many long nights on this new feature. In the September issue, we'll demonstrate the next step in the process—querying distributed partitioned views. We'll show you the execution plans for various queries so you can see how SQL Server distributes queries across the nodes. And in a third article in an upcoming issue, we'll offer suggestions for achieving similar functionality when your tables or views don't meet the criteria for distributed partitioned views that we described here.
