Questions, Answers, and Tips About SQL Server - 09 Feb 2000

Learn about the paging file's importance, SQL Server 7.0 evaluation version's hazards, build numbers, DBCC DBREINDEX and fragmentation, mysterious indexes, permissions problems, and performance tuning.


When I run SELECT @@version against SQL Server 7.0 Service Pack 1 (SP1), the service pack reports a build number of 699. However, Help About in Query Analyzer on a SQL Server 7.0 SP1 system reports a build number of 694. What's the correct build number for SQL Server 7.0 SP1?

The correct build number for SQL Server 7.0 SP1 is 699. Thus, running SELECT @@version from Query Analyzer or Interactive SQL (ISQL) returns a version string ending in 7.00.699. However, the build number for the client tools remains 694 even after you apply the SP1 patches. This numbering is confusing, and we hope Microsoft will synchronize these numbers in future versions to avoid confusion.
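You can confirm the server-side build number from any query tool. A quick check, run against your own server:

```sql
-- Returns the full version string; on a SQL Server 7.0 SP1 server this
-- reports build 7.00.699, even though the client tools show 694.
SELECT @@VERSION
```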

According to SQL Server 7.0 Books Online (BOL), you can use the Database Consistency Checker (DBCC) DBREINDEX command to recreate an index on a table. When DBCC SHOWCONTIG reported high fragmentation, we tried using DBCC DBREINDEX to reduce table fragmentation. However, running this command didn't improve fragmentation. Any ideas?

This question is difficult to answer without more information, but the cause of your problem might be a mistake that new SQL Server users commonly make. Users must remember that the data and the index are always separate structures, except in a clustered index, in which case the index's leaf level (i.e., the bottom level) and the data pages are the same. Running DBCC DBREINDEX on an index won't improve table fragmentation that DBCC SHOWCONTIG reports unless you're reindexing the clustered index (tables can have only one clustered index). If your table doesn't have a clustered index, you can create and drop a clustered index to eliminate the fragmentation.
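For a table that lacks a clustered index, the create-and-drop technique looks like the following sketch. The table and column names here are hypothetical; substitute your own:

```sql
-- Hypothetical table (orders) and key column (order_id).
-- Building a clustered index rewrites the data pages in key order,
-- which removes the extent fragmentation that DBCC SHOWCONTIG reports.
CREATE CLUSTERED INDEX cix_orders_temp ON orders (order_id)
GO
-- If you don't want to keep the clustered index, drop it; the data
-- pages stay defragmented until new activity fragments them again.
DROP INDEX orders.cix_orders_temp
GO
-- Verify the result (SQL Server 7.0's DBCC SHOWCONTIG takes a table ID).
DECLARE @id int
SELECT @id = OBJECT_ID('orders')
DBCC SHOWCONTIG (@id)
```

Note that rebuilding a nonclustered index reorganizes only the index's own pages, which is why it leaves the table-level fragmentation numbers unchanged.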

Where can I find information about the weird indexes that SQL Server 7.0 automatically generates? I used the sp_help and sp_helpindex commands to view the indexes, but I can't script them. I want to be able to drop these indexes, but I can't find enough information about them to be able to drop them.

Trying to drop automatically created indexes for investigation and learning purposes is OK; however, in general, we strongly discourage people from messing with the decisions that the SQL Server optimizer has made (in this case, to automatically create an index).

With that said, if the entities you've discovered start with the prefix _WA_sys, they're not indexes; they're statistics that the optimizer generates and maintains to help it keep track of data distribution. These entities are a scratch pad for the optimizer, and SQL Server 7.0 Books Online (BOL) and Kalen Delaney's Inside Microsoft SQL Server 7.0 (Microsoft Press, 1999) document their role. Check both of these resources for information about statistics and the DROP STATISTICS command.
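A sketch of how you can list these entries and remove one, assuming a table named authors; the statistics name shown is an example only, because SQL Server generates the suffix:

```sql
-- Auto-generated statistics appear in sysindexes with names that
-- start with _WA_Sys. The [_] pattern escapes the LIKE wildcard.
SELECT name FROM sysindexes
WHERE id = OBJECT_ID('authors') AND name LIKE '[_]WA[_]Sys%'

-- DROP STATISTICS takes the form table.statistics_name; the name
-- below is hypothetical -- use a name the query above returned.
DROP STATISTICS authors._WA_Sys_au_lname_0AD2A005
```

Be aware that if the AUTO_CREATE_STATISTICS database option is on, SQL Server will simply regenerate these entries as queries need them.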

Alternatively, you might be looking at indexes that SQL Server created when you issued a PRIMARY KEY or UNIQUE constraint. You can't drop these entities as indexes; you must drop the constraints.
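To see which constraint an index backs and remove it, you drop the constraint rather than the index. A sketch against the pubs..authors table (the constraint name is an example; use the name sp_helpconstraint reports):

```sql
-- List the constraints on the table, including PRIMARY KEY and
-- UNIQUE constraints and the indexes that enforce them.
EXEC sp_helpconstraint 'authors'

-- Dropping the constraint also drops its supporting index.
ALTER TABLE authors DROP CONSTRAINT UPKCL_auidind
```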

I'm trying to create a database on a SQL Server 7.0 Service Pack 1 (SP1) system. From my workstation, I can see a share of the data directory. The share is \\SQL7\SQLDATA and points to the d:\MSSQL Databases directory. Under this directory, I've created the DB1DIR directory. When I try to create database files through the Enterprise Manager, the Enterprise Manager doesn't show the DB1DIR directory. When I manually input the path, I receive an error that says the directory doesn't exist. However, when I directly log on to the server, I can see the full directory path d:\MSSQL Databases\DB1DIR.

I use the share to access the server to set up directories and perform other tasks. When I'm at the server console, I can see the directory d:\MSSQL Databases\DB1DIR. However, when I try to create the database and select where I want SQL Server to put the files, I can see only the d:\MSSQL Databases directory. When I try to expand that directory by clicking the plus sign (+) next to the directory, nothing appears and the plus sign disappears. Any ideas?

Our first guess is that you might have set permissions that prevent the Windows NT account running the MSSQLServer service from accessing the subdirectory in which you're trying to create the database. To ensure your problem isn't a quirk of the Enterprise Manager's GUI, use Transact SQL (T-SQL) commands to create the database instead of the Enterprise Manager GUI. If the T-SQL method doesn't work and you rule out permissions problems, you might have stumbled across an Enterprise Manager bug. However, not understanding how the SQL Server service account interacts with permissions on the local machine and the network is one of the most common causes of SQL Server problems. Checking SQL Server permissions for the service account needs to be on your troubleshooting checklist.
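A minimal CREATE DATABASE that names the files explicitly, assuming the directory layout from the question; note that the FILENAME path must be a local path as the MSSQLServer service account sees it, not a UNC path through your workstation's share:

```sql
-- Database and logical file names here are examples.
CREATE DATABASE DB1
ON (NAME = DB1_data,
    FILENAME = 'd:\MSSQL Databases\DB1DIR\DB1_data.mdf',
    SIZE = 10MB)
LOG ON (NAME = DB1_log,
    FILENAME = 'd:\MSSQL Databases\DB1DIR\DB1_log.ldf',
    SIZE = 5MB)
```

If this statement fails with an operating-system error about the path, the service account's NTFS permissions on DB1DIR are the first thing to check.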

I'm looking for a basic recipe for SQL Server performance tuning that provides a checklist in order of the tasks that are most likely to result in the largest performance gain. Does Microsoft provide such a list? Can you recommend a useful book that includes a recipe for performance tuning?

We wish this magic list existed. However, not only would this list be fairly long, but we also don't know of a way to generically order the list. We think taking a more iterative approach is best: Benchmark (i.e., generate basic performance metrics for) your system, find a bottleneck, fix it, and benchmark again. Remember to also check your database designs and application coding. In our experience, these entities—rather than hardware settings—are the culprits.

Brian's article "Tuning Your Database with SQL Trace" (May 1998) describes how you can use SQL Trace to effectively find application-level bottlenecks. In SQL Server 7.0, Microsoft replaced SQL Trace with SQL Profiler, but most of the core information is the same. For more information about SQL Profiler, see Brian's July 1999 Web Exclusive "Using SQL Server Profiler to Tune Your Applications." Both of these articles provide evidence that you can usually find the most significant performance gains by tuning applications and middleware rather than by tweaking your hardware.

We've created an exercise to show you how much information you can gain from profiling your applications. You can try the following steps at home (you need Microsoft Access 2000 to proceed):

  1. Create a new Access database, and link the authors table from your SQL Server pubs database to the new Access database.

  2. Use the Access GUI to write an UPDATE query that updates the phone column to itself (but don't run the query yet). You don't want to change the data—you're simply discovering how inefficient a client application can be.

  3. Open SQL Profiler, and make sure you trace at least the RPC:Completed and SQL:StmtCompleted events in the Transact SQL (T-SQL) event class.

  4. Run your Access UPDATE query.

Brian used the default SQL Server 7.0 Service Pack 1 (SP1) configuration and performed this exercise on his Windows NT 4.0 SP5 laptop. As Figure 1 shows, running the query from Access required 55 commands. (Note that the pubs..authors table has only 23 rows.) First, Access issued a few housekeeping commands; then it issued a SELECT command to retrieve the PRIMARY KEY values for all 23 rows. Next, Access prepared one server-side cursor for SELECT positioning and another to issue updates. Finally, Access issued two sp_execute commands for each row: the first positions the cursor on the row, and the second issues the update. So, Access needed 55 commands to update 23 rows; each row required two T-SQL calls. This situation is all too common.

Recently, Brian was helping a client tune an UPDATE of 6 million rows. Assuming Access behaves the way it did in the previous exercise, updating 6 million rows (requiring 12 million commands) at a reasonable rate of 100 commands per second on one connection would take approximately 33.3 hours to perform. You can't tune away this type of problem by reconfiguring or adding hardware. Understanding why your applications are running slowly and fixing the root of the problem is your only hope, and SQL Profiler provides the best way to do that.
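The fix for this pattern is usually to push the work to the server as set-based SQL. The same logical operation that Access turned into 55 round trips is a single statement in T-SQL:

```sql
-- One command, one round trip, all 23 rows -- versus Access's
-- two sp_execute calls per row through server-side cursors.
UPDATE authors SET phone = phone
```

Whether the tool you're profiling can be coaxed into sending set-based statements varies, but a pass-through query or a stored procedure call almost always beats row-by-row cursor traffic.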
