The Challenge of Defragmenting an NTFS Partition

Find out how you can overcome NTFS's unique hurdles.

Tom Iwanski

January 16, 2001

Disk fragmentation is a timeless problem that isn't limited to PC hard disks. (Raxco Software and Executive Software both started out providing defragmentation utilities for OpenVMS systems.) Regardless of the platform, data has an inherent tendency to fragment as it's continually written to and erased from a disk. In Windows 2000 and Windows NT 4.0, fragmentation is present from the start—the installation process typically leaves dozens, if not hundreds, of fragmented files in its wake. To complicate matters, NTFS presents particular hurdles for defragmentation utilities.

The smallest unit of storage on an NTFS partition is the cluster. Clusters consist of one or more contiguous sectors. On my test systems, each cluster was 4KB and comprised eight 512-byte sectors. Because the 4KB clusters are the smallest discrete unit of storage, the OS writes files of 4KB or less to disk contiguously. Files larger than 4KB, however, become fragmented if no contiguous clusters are available when the OS writes the file to disk. The larger the file, the greater the likelihood that the OS will write the file in a fragmented state. Fragmented free space also increases the likelihood of fragmentation, because fewer contiguous clusters are available. Utilities defragment disks by picking up data from a portion of the disk, then reorganizing the data so that the clusters that make up a file are contiguous. The concept is simple, but NTFS complicates matters by locking its system files for exclusive use, making them unmovable through conventional means (i.e., the Microsoft MoveFile API).
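As a quick illustration of the cluster geometry described above, the standard Win32 GetDiskFreeSpace call reports sectors per cluster and bytes per sector, so a short C program can print the cluster size for any partition. This is only a minimal sketch; it assumes a local volume mounted as drive C, and the figures will vary with partition size and format options.

#include <stdio.h>
#include <windows.h>

int main(void)
{
    DWORD sectorsPerCluster, bytesPerSector, freeClusters, totalClusters;

    /* Query the cluster geometry of drive C (an assumed example drive). */
    if (!GetDiskFreeSpaceA("C:\\", &sectorsPerCluster, &bytesPerSector,
                           &freeClusters, &totalClusters))
    {
        printf("GetDiskFreeSpace failed: %lu\n", GetLastError());
        return 1;
    }

    /* On the test systems described above, this prints 4096 bytes:
       8 sectors of 512 bytes each. */
    printf("Cluster size: %lu bytes (%lu sectors x %lu bytes)\n",
           sectorsPerCluster * bytesPerSector,
           sectorsPerCluster, bytesPerSector);
    return 0;
}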

The MoveFile API implements a set of rules for moving files while the OS is active. The API presents challenges of its own because it requires the OS to move data 16 clusters at a time, so even for online defragmentation, utilities that use the MoveFile API must do extra work to arrange files contiguously. A more serious problem is that the MoveFile API contains no provisions for moving system files. The inability to manipulate these system files decreases the effectiveness of defragmentation utilities. A highly fragmented pagefile, for example, becomes a huge obstacle because it fragments the available free space, leaving a defragmentation utility no contiguous space in which to place data files. To completely understand this problem, you need to look at the characteristics of specific system files.
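In code, the MoveFile interface corresponds to the FSCTL_MOVE_FILE control code, sent to a volume handle through DeviceIoControl with a MOVE_FILE_DATA structure that identifies the file, the starting virtual cluster within that file, the target logical cluster on the volume, and the number of clusters to move. The C sketch below is illustrative only: the file path, the target cluster, and the minimal error handling are placeholders, and a real utility would first map the file's extents and the volume's free space before choosing a destination.

#include <stdio.h>
#include <windows.h>
#include <winioctl.h>

int main(void)
{
    MOVE_FILE_DATA mfd;
    DWORD bytesReturned;

    /* Open the volume and the file to be moved. The paths are hypothetical
       examples; a defragmenter would enumerate real files on the partition. */
    HANDLE hVolume = CreateFileA("\\\\.\\C:", GENERIC_READ | GENERIC_WRITE,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE,
                                 NULL, OPEN_EXISTING, 0, NULL);
    HANDLE hFile = CreateFileA("C:\\data\\example.dat", GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
    if (hVolume == INVALID_HANDLE_VALUE || hFile == INVALID_HANDLE_VALUE)
        return 1;

    /* Ask NTFS to relocate a run of the file's clusters: move the run that
       begins at the file's first cluster to logical cluster 100000 on the
       volume (an arbitrary example). The 16-cluster count reflects the move
       granularity described in the article. */
    mfd.FileHandle = hFile;
    mfd.StartingVcn.QuadPart = 0;
    mfd.StartingLcn.QuadPart = 100000;
    mfd.ClusterCount = 16;

    if (!DeviceIoControl(hVolume, FSCTL_MOVE_FILE, &mfd, sizeof(mfd),
                         NULL, 0, &bytesReturned, NULL))
        printf("FSCTL_MOVE_FILE failed: %lu\n", GetLastError());

    CloseHandle(hFile);
    CloseHandle(hVolume);
    return 0;
}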

Most systems administrators are familiar with the pagefile.sys file. Win2K and NT use this file as virtual memory for exchanging data between disk and RAM. The pagefile.sys file presents the following unique challenges to keeping your disk defragmented:

  • The pagefile is big. Windows typically allocates a pagefile that is 150 percent of the size of the system's physical memory. Large files are more likely to fragment from the outset because they require a lot of contiguous free space. The OS will likely fragment the pagefile from the start because a sufficiently large contiguous block of free space probably doesn't exist.

  • The pagefile expands as necessary to accommodate system needs. As the file expands, it's likely to become even more fragmented. One trick that systems administrators use to combat this tendency is to configure the pagefile as static so that it can't expand; the registry sketch that follows this list shows one way to make that change.

  • The OS locks the pagefile for exclusive use. If a defragmentation utility needs to move the pagefile, it must work around the OS by either running when the OS isn't present or programmatically circumventing the active OS.
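One scriptable way to make the pagefile static, as mentioned in the list above, is to write equal initial and maximum sizes into the PagingFiles value under the Memory Management registry key, which is where the System Control Panel applet stores the setting. The C sketch below is only an illustration: the 512MB figure and the pagefile path are placeholders, the change requires administrative rights, and it doesn't take effect until the next reboot.

#include <windows.h>

int main(void)
{
    HKEY hKey;

    /* REG_MULTI_SZ entry of the form "path initial-MB maximum-MB".
       Equal initial and maximum sizes (512MB here, a placeholder value)
       keep the pagefile from growing and refragmenting. */
    const char value[] = "C:\\pagefile.sys 512 512\0";

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SYSTEM\\CurrentControlSet\\Control\\"
                      "Session Manager\\Memory Management",
                      0, KEY_SET_VALUE, &hKey) != ERROR_SUCCESS)
        return 1;

    /* The extra terminator in the string literal gives the double-NUL
       ending that REG_MULTI_SZ requires. */
    RegSetValueExA(hKey, "PagingFiles", 0, REG_MULTI_SZ,
                   (const BYTE *)value, sizeof(value));
    RegCloseKey(hKey);
    return 0;
}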

Aside from the pagefile, the OS uses other unmovable system files known as metadata. Metadata consists of a handful of files that represent the heart of NTFS's functionality. When you format an NTFS partition, the format program reserves the partition's first 16 file records for the metadata files. These files are permanently hidden; you can view them only with certain disk utilities, and you'll recognize them because their names begin with a dollar sign ($).

A familiar metadata file is the Master File Table (MFT), a database that indexes every file and directory on an NTFS partition. The file system also uses several other metadata files, such as the transaction log ($LogFile), the partition boot sector ($Boot), and the MFT mirror ($MftMirr), which the OS places at the partition's logical center. The MFT mirror duplicates the MFT's first three records and acts as disaster protection if the original MFT is damaged. Although the metadata files aren't as large as the pagefile, they are nevertheless unmovable, and they can fragment or migrate on the disk, causing free-space fragmentation.

The MFT in particular can become severely fragmented because it stores the entire contents of small files, along with index entries, directly in its file records. A partition that contains many small files therefore fills the MFT faster than it fills the free space, forcing the MFT to expand. To accommodate this expansion and reduce MFT fragmentation, the OS sets aside an MFT reserved zone. The OS allocates this reserved zone, but the zone doesn't appear as used space when you look at your disk's properties.

The fact that the OS doesn't list the MFT reserved zone as used space is significant to the utilities I tested in this review for two reasons. First, the MFT reserved zone can consume a significant chunk of disk space. Second, the OS locks the reserved space for exclusive use (just like a system file), and defragmentation utilities can't use that space during online passes. Therefore, defragmentation utilities that use the MoveFile API have less free space to work with. For example, my laptop reports that I have 646MB of free space on my system disk, but the actual free space outside the MFT reserved zone is 238MB. A utility that tries to defragment files larger than 238MB on this machine might experience difficulties.
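Administrators who want to check the same numbers on their own volumes can query NTFS directly: the FSCTL_GET_NTFS_VOLUME_DATA control code returns, among other figures, the free cluster count and the boundaries of the MFT reserved zone. The C sketch below assumes a volume mounted as drive C, and it treats the difference between the two figures as only a rough estimate of the space a MoveFile-based utility can use during an online pass.

#include <stdio.h>
#include <windows.h>
#include <winioctl.h>

int main(void)
{
    NTFS_VOLUME_DATA_BUFFER vd;
    DWORD bytesReturned;
    LONGLONG freeMB, zoneMB;

    /* Open the volume (drive C is an assumed example; needs admin rights). */
    HANDLE hVolume = CreateFileA("\\\\.\\C:", GENERIC_READ,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE,
                                 NULL, OPEN_EXISTING, 0, NULL);
    if (hVolume == INVALID_HANDLE_VALUE)
        return 1;

    if (DeviceIoControl(hVolume, FSCTL_GET_NTFS_VOLUME_DATA, NULL, 0,
                        &vd, sizeof(vd), &bytesReturned, NULL))
    {
        freeMB = vd.FreeClusters.QuadPart * vd.BytesPerCluster / (1024 * 1024);
        zoneMB = (vd.MftZoneEnd.QuadPart - vd.MftZoneStart.QuadPart)
                 * vd.BytesPerCluster / (1024 * 1024);

        printf("Reported free space:       %I64d MB\n", freeMB);
        printf("MFT reserved zone:         %I64d MB\n", zoneMB);
        /* The space usable during an online pass is roughly the reported
           free space minus whatever falls inside the reserved zone. */
        printf("Approx. usable free space: %I64d MB\n", freeMB - zoneMB);
    }

    CloseHandle(hVolume);
    return 0;
}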

The vendors represented in this review approach the limitations of the MoveFile API differently. Raxco and Executive Software have both developed offline methods for defragmenting NTFS system files. These methods require that you schedule a reboot of the system that you want to defragment. At a preliminary point in the boot process, the defragmentation utility launches and goes to work on the system files. This method is effective, but rebooting a server isn't always convenient. Symantec took a different approach by developing a utility that performs all its work online. By synchronizing every cluster move with the file system, Symantec's Norton Speed Disk 5.1 can move system files while the OS is active. The drawback to this approach is that changes to the OS might cause problems with the utility. For example, when Microsoft altered COM objects to implement NT Server 4.0, Terminal Server Edition (WTS) code, Speed Disk no longer worked on WTS—and it still doesn't.

Each vendor's approach to defragmenting system files has definite ramifications. Although you want to assess each product's usability and performance, your primary concern is undoubtedly data integrity. I'm happy to report that in dozens of testing cycles, none of these products produced an instance of data corruption. Of course, my testing doesn't guarantee that corruption won't occur, nor do my findings lessen the importance of creating backups before running such utilities. Nevertheless, you should be confident in choosing one of these products based on what it can do for you—rather than being afraid of what the product might do to your data.
