NT vs. UNIX: Is One Substantially Better?

Throughout its history, NT has challenged UNIX for enterprise market dominance, but does this war have a clear winner? Read our expert's comparison of these OSs and decide for yourself whether NT is enterprise-ready.

Mark Russinovich

November 30, 1998

OS heavyweights go head-to-head for the enterprise

As Windows NT's share of the workstation and server market has eroded UNIX's dominance, discussion regarding which operating system (OS) is the superior one continues to rage. Many people argue with religious fervor that whichever OS they worked with first is best. In particular, some members of the UNIX camp seem to believe that if they argue loudly enough about the merits of UNIX, the tide of NT growth will slow. In light of this heated debate, it's ironic that both NT and UNIX have roots in the mid-1970s and that both were influenced by many identical theoretical OS concepts and principles (for more information about NT's history, see "Windows NT and VMS: The Rest of the Story..." page 114). No one should be surprised to discover that NT and UNIX have many similarities as well as differences.

In this article, I'll hold NT and UNIX side by side to compare their architectural subsystems, and I'll review the major features of each, touching on process management, scheduling, memory management, and I/O processing. I'll present the results of the most objective measurements available: industry-accepted benchmark results. Finally, I'll address the question any comparison begs: "Which OS is better?" No matter which side of the NT-UNIX debate you're on, you'll find some surprises waiting for you.

A Brief History of UNIX
Ken Thompson developed the first version of the UNIX OS in 1969 at Bell Laboratories. Dennis Ritchie joined Thompson early in the project and not only invented the C programming language but also contributed to UNIX's design. Thompson and Ritchie rewrote UNIX in C, converting it from PDP-7 assembly language. This conversion was key to UNIX's later acceptance because it let different computers easily recompile and run the OS code. Some estimates hold that only 3 percent of UNIX's early source code was hardware-dependent, requiring programmers to rewrite it when porting the OS to different computers.

UNIX underwent further development at Bell Labs, debuting to the research community in an academic conference paper in 1974. Bell Labs released the first version of UNIX, Version 6 (V6), in 1976. UNIX use quickly spread to many universities and research centers, fueled in part by the OS's portability to new and different computer systems. In 1978, Bell Labs released UNIX Time-Sharing System, Seventh Edition, a version of UNIX that had portability as a specific design goal. At the time, UNIX included many features that only mainframe OSs had, and its hardware resource requirements were relatively light. Thus, UNIX was an ideal OS for smaller systems that people commonly called minicomputers.

Bell Labs distributed UNIX with full source code. Researchers took this version of UNIX and developed custom versions with experimental design modifications. These customized versions of UNIX fueled UNIX's market acceptance because they made integrating OS innovations easy for developers. However, this heritage is a mixed blessing that the UNIX community still wrestles with today. Within a year after Bell's release of UNIX's full source code, three or four major UNIX variants began to evolve.

In the early 1980s, three major branches grew on the UNIX tree: UNIX System III from Bell Labs' UNIX Support Group (USG); the UNIX Berkeley Software Distribution (BSD) from the University of California at Berkeley; and a version of UNIX that ran on the x86 processor family, Microsoft's XENIX. Are you surprised to learn that Microsoft had a version of UNIX? If so, you'll be even more surprised to learn that XENIX had the largest installed base of any UNIX system during the early 1980s. Microsoft sold XENIX to The Santa Cruz Operation (SCO) in 1987, when Microsoft purchased a portion of SCO. Throughout the 1980s, the UNIX market fragmented further, with versions of the OS splitting several times; in many cases, descendant branches of a version merged with separate UNIX lines.

UNIX's fragmentation spawned many variant OS interfaces, and the result of the variation was that any particular version's programs did not port to other versions. To stem this trend, a group of vendors, working through the Institute of Electrical and Electronics Engineers (IEEE), formulated the POSIX standard. A major milestone in this effort was the creation of a definition for a standard system-call interface, or UNIX API, in 1988. This API was POSIX 1003.1. The POSIX standards have grown to include other aspects of UNIX design, including realtime processing capabilities, user interfaces, and application suites. To the detriment of standardization, however, other organizations were establishing UNIX standards in the late 1980s. The X/OPEN Group, consisting primarily of European vendors, published a standard specification called the X/OPEN Portability Guide in 1987.

Although most UNIX variants today support either the POSIX or X/OPEN standard, every UNIX vendor has tried to differentiate its offering with a proprietary interface, applications, and architecture. Several dozen widely used versions of UNIX exist today, and Sun's Solaris, HP's HP/UX, and IBM's AIX hold the largest shares of the commercial UNIX market. The Linux version of UNIX, which has become the centerpiece of the so-called open source movement, made headlines in the trade press recently. Linux is a homegrown UNIX variant designed by Linus Torvalds and further developed by hundreds of independent developers around the world. In 1993, Linux (including source code) became available on the Internet for free download. An ironic twist to the tale is that only a few years ago the computer industry press debated whether NT could challenge UNIX's market share--today the argument is whether Linux can be the UNIX challenger to NT. Recent market-research reports show that Linux is the only server OS besides NT to gain market share. Other research reports 11 million NT installations, whereas the Linux community reports an estimated Linux installed base of between 5 and 7 million. (To read more about the Linux challenge to NT, see the sidebar "Linux and the Enterprise" and David Chernicoff, "Walking the Walk and Talking the Talk," November 1998.) The bulk of those NT installations are in the business community, whereas a large percentage of the Linux installations are still in the realm of the computer hobbyist. This situation might change, however, with the recent release of Oracle8 for Linux and Netscape's and Intel's investment in Red Hat Software, a commercial provider of Linux software.

NT and UNIX
NT's roots extend back to 1977 and Digital Equipment's release of VMS 1.0. Many core members of the future NT design team left Digital in 1988 to join Microsoft, which released NT's first version, Windows NT 3.1, in 1993. Thus, NT and UNIX have been evolving since the mid-1970s, and trends in academic OS research have influenced each OS. In addition, both OSs have similar design goals: portability, extensibility, and the ability to run on computers ranging from desktop PCs to departmental servers.

Internally, NT is similar to VMS, but how closely do NT's capabilities and features match those of UNIX? Shedding light on this question is difficult because even the top three UNIX market-share leaders--Solaris, HP/UX, and AIX--are in many ways as different from one another as each is from NT. There is no definitive "UNIX" to compare with NT, so I'll use traditional or prevalent UNIX features and implementations for each UNIX subsystem in the following comparison. I'll draw from the market leaders and three other UNIX variants: Linux, BSD 4.4, and Digital UNIX.

The OS architecture of most versions of UNIX is similar to that of NT. Figure 1 and Figure 2 show the UNIX and NT architectures, respectively. (To explore NT's architecture in depth, see "Windows NT Architecture, Part 1," March 1998 and "Windows NT Architecture, Part 2," April 1998.) Both OS architectures have two modes of operation: user mode and kernel mode. Familiar applications such as word processors and database programs execute in user mode. User mode is nonprivileged, which means that the system restricts programs operating in user mode from directly accessing hardware or resources belonging to other programs. Most of the OS code executes in kernel mode. Kernel mode is privileged, which means that code running in kernel mode can access hardware and resources belonging to any application, with few limitations.

A major difference between UNIX's and NT's architecture is that UNIX does not incorporate its windowing system--the subsystem that manages GUI resources for applications--into kernel mode, as NT 4.0 does. Instead, the UNIX windowing system is an add-on user-mode application that its developers wrote using publicly defined UNIX APIs; consequently, third-party products can replace UNIX's windowing system. However, the majority of the UNIX community has adopted MIT's X Window System as a de facto, if not official, graphical interface standard. Before NT 4.0, the NT windowing system was a user-mode implementation, but Microsoft found that the performance of graphics-intensive applications improved when the windowing system operated in kernel mode.

Another difference between the OS architectures is that UNIX applications can call kernel functions, or system calls, directly. In NT, applications call APIs that the OS environment for which they are written (DOS, Windows 3.x, OS/2, POSIX) exports. A kernel system-call interface provides APIs for managing processes, memory, and files. The NT system-call interface, called the Native API, is hidden from programmers and largely undocumented; the number of UNIX system calls is roughly equal to the number of Native APIs, around 200 to 300 in each case. The API that UNIX applications write to is the UNIX system-call interface, whereas the API that the majority of NT applications write to is the Win32 API; the Win32 environment translates many of its API calls into Native API calls.

In the following comparison of NT and UNIX subsystems, I'll contrast the way each OS names internal resources, implements processes and threads, and manages virtual and physical memory. I'll also compare and contrast UNIX's and NT's security model, file-system data caching, networking architecture, and extensibility.

Namespace and object management. An OS namespace gives applications the ability to identify and share resources. Perhaps the most visible part of an OS namespace is the file-system namespace, which in UNIX includes names such as /usr/mark/bin/csh and in NT includes names such as C:\BIN\CSH.EXE. Other resources that require naming so that two or more applications can uniquely identify them for sharing include synchronization resources (e.g., mutexes, semaphores, notification events) and shared memory.
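
To make the naming concrete, the following minimal C sketch shows how two cooperating processes might rendezvous on a named synchronization object under each OS. The object name MyAppLock is an arbitrary example, and error handling is pared to the bone.

```c
/* Sharing a named synchronization object -- a sketch, not production code.
   The name "MyAppLock" is hypothetical. */
#ifdef _WIN32
#include <windows.h>

int main(void)
{
    /* Creates the mutex, or opens it if another process created it first. */
    HANDLE h = CreateMutexA(NULL, FALSE, "MyAppLock");
    if (h == NULL)
        return 1;
    WaitForSingleObject(h, INFINITE);   /* acquire */
    /* ... protected work ... */
    ReleaseMutex(h);
    CloseHandle(h);
    return 0;
}
#else
#include <fcntl.h>
#include <semaphore.h>

int main(void)
{
    /* POSIX named semaphores live in a file-system-like namespace. */
    sem_t *s = sem_open("/MyAppLock", O_CREAT, 0644, 1);
    if (s == SEM_FAILED)
        return 1;
    sem_wait(s);                        /* acquire */
    /* ... protected work ... */
    sem_post(s);
    sem_close(s);
    return 0;
}
#endif
```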

NT's Object Manager subsystem implements NT's namespace. The Object Manager is a collection of kernel functions that provide uniform resource tracking, naming, and security to applications and other kernel-mode subsystems. Kernel subsystems define Object Manager objects to represent the subsystems' resource types, and rely on the Object Manager's support routines for naming and security. Thus, the Object Manager represents processes as process objects, files as file objects, and semaphores as semaphore objects. The Object Manager's object-tracking mechanism notifies a subsystem that owns an object when applications close, open, or query the object. The Object Manager notifies subsystems via method functions, which the subsystems register when defining an object type. In response to Object Manager notification, a subsystem can perform actions particular to the type of object the subsystem is managing.

The Object Manager namespace allows for the naming of any object and also allows entrance to the familiar file-system namespace, which the I/O Manager subsystem implements. The file-system name C:\BIN\CSH.EXE is the application-friendly name of the file csh.exe. In the NT kernel, csh.exe's name is similar to \Device\Harddisk0\Partition1\BIN\CSH. \Device\Harddisk0\Partition1 is the name of a device object in the Object Manager namespace; this device is a doorway to the I/O Manager's file-system namespace. An object named \Registry in the Object Manager namespace functions as a similar doorway to the Registry namespace.

In the NT object model, device drivers can easily implement objects in the namespace that represent nonstandard resources. For example, a device driver can create an object called \Proc that, after an application reads it, returns information about the active processes in the system.

UNIX's object-tracking mechanism is not as formal as NT's mechanism. UNIX's namespace centers on the OS's file system and grew out of the original UNIX file-system namespace. Data structures called vnodes (in some older UNIX variants, inodes) are the equivalent of NT Object Manager objects and represent files as well as shared memory, synchronization objects, and nonstandard resources. (I use the term vnode to refer to inodes and vnodes in this discussion.) The example UNIX file-system name I gave earlier, /usr/mark/bin/csh, is not translated in any way in the kernel. However, any component of the name can serve as a link to a different namespace that a particular file-system driver implements. Traditionally, the /dev directory in the UNIX namespace contains objects that are doorways to non-file-system namespaces (all namespace providers in UNIX act like file-system drivers). Thus, /dev/proc is usually a doorway to a process file-system driver. An application that reads from this pseudofile can obtain information about the processes currently running on the system.
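
To make the pseudofile idea concrete, here is a minimal C sketch that reads one. The path /proc/uptime is a Linux example; the mount point and the files a proc driver synthesizes vary across UNIX variants.

```c
/* Reading a proc pseudofile -- a sketch; the path is illustrative. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[256];
    int fd = open("/proc/uptime", O_RDONLY);    /* a vnode backed by the proc driver */
    if (fd < 0)
        return 1;

    ssize_t n = read(fd, buf, sizeof(buf) - 1); /* the driver synthesizes the data */
    if (n > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);
    }
    close(fd);
    return 0;
}
```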

The file-system support code in the UNIX kernel notifies file-system drivers of actions that applications perform on vnodes that the file systems manage. The support code performs this notification by calling functions registered in a table that the file system associates with the vnodes the file system owns. For example, the support code calls the file system whenever an application opens, closes, or deletes an object represented by the file system's vnode.

The designs of NT's and UNIX's namespace and resource tracking are similar in their goals and even in their implementation. Both designs use a hierarchical namespace similar to a traditional file-system namespace, and both implement object notification functions and reference-count tracking. Similar object support mechanisms follow from NT's and UNIX's common goal of providing a generalized resource-tracking infrastructure that is integrated with the namespace.

Process management. Process management encompasses both the way an OS defines and implements user applications, and the method the OS uses to divide CPU time (or the CPU resource) between multiple active applications. NT and UNIX are time-sharing OSs that try to divide CPU time fairly between applications competing for the CPU. Neither OS is suitable for environments that have strict application-responsiveness guarantees. Both OSs, however, try to define execution modes that are more responsive than their standard modes so that each OS can execute applications with "soft realtime" requirements with some degree of efficiency. The way in which an OS implements its process management has a significant impact on the OS's ability to scale on multiprocessor systems.

NT defines an application using a process object, which serves as a container for all information about the application. The process object includes a memory space definition that contains the application's code and data, a table that identifies which resources the application is accessing, statistical information regarding the application's execution, the identity of the user the application is associated with, and one or more threads of execution. The NT scheduler divides time between threads (not between applications). A thread is a kernel execution state that determines where in an application the system should execute code on behalf of the thread. Applications can create additional threads, and all of a particular application's threads share the application's resources and memory space.

The NT scheduler always attempts to give CPU time to the highest-priority thread available. The scheduler defines two scheduling classes: dynamic and realtime. Threads executing with priorities in the dynamic half of the priority spectrum have a priority value of between 1 and 15 (higher numbers correspond to higher priorities). This range is called dynamic because the scheduler can adjust the priorities of dynamic threads by temporarily boosting their priority values according to various events, such as the receipt of input from the keyboard. The priority values of threads in the realtime range are between 16 and 31. Realtime priority values are fixed; the scheduler does not adjust these priority values. Typically, only a few OS-owned threads execute in the realtime range. When the scheduler gives a thread a turn on the CPU, the length of the turn, which is called a quantum, lies in the range of 20 to 120 milliseconds. When a thread's quantum is over, or if the thread cedes its turn early, the scheduler schedules other threads of the same priority (if any exist) in a round-robin manner.
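
For a feel of how an application interacts with this scheme, here is a minimal sketch using documented Win32 calls; the final priority values NT computes depend on how the chosen priority class maps into the 1-to-31 range.

```c
/* Nudging a thread around NT's dynamic priority range -- a sketch. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Move the process toward the high end of the dynamic class. */
    if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS))
        return 1;

    /* Raise this thread above its class's base priority. */
    if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_ABOVE_NORMAL))
        return 1;

    printf("thread priority: %d\n", GetThreadPriority(GetCurrentThread()));
    return 0;
}
```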

NT supports symmetric multiprocessing (SMP). In SMP systems, all CPUs are identical and have equal access to memory and I/O peripherals. Internal data structures limit NT to using a maximum of 32 processors, but licensing limitations usually restrict the number of processors to 8 or fewer. NT rarely runs on systems with more than 8 CPUs. An important characteristic of the NT scheduler is that it can fully preempt the kernel. For example, even if a thread is executing in the kernel, the scheduler can end that thread's turn and schedule another thread in its place. Further, multiple threads can actively execute kernel code on separate CPUs simultaneously. These two capabilities--kernel preemption and execution of kernel code on separate CPUs--are necessary for multiprocessor scalability.

Process management in most modern UNIX variants is similar to NT process management. Both OSs use a process data structure to define applications. This data structure encompasses roughly the same components that an NT process encompasses. These components include an address space, resource handles, and statistics. In addition, modern variants of UNIX divide CPU time between kernel-mode threads and define processes as having at least one thread.

UNIX schedulers usually implement three priority classes--realtime, system, and dynamic--that span priority numbers from 0 to 100. Low priority numbers identify higher-priority threads. The UNIX realtime and system priority classes are similar to the NT realtime class in that the UNIX scheduler does not modify the priorities of threads executing in these classes. The UNIX scheduler can lower the priority (i.e., raise the priority number) of threads that execute in the dynamic class when the threads continue to execute without voluntarily giving up a turn. The length of a UNIX scheduler's quantum is similar to the length of NT's quantum: from ten to several hundred milliseconds.
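
The POSIX.1b interface for moving a process into a fixed-priority realtime class looks like the sketch below; the call typically requires superuser privilege, and not every UNIX variant implements it.

```c
/* Entering a UNIX realtime scheduling class -- a sketch. */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp;
    sp.sched_priority = sched_get_priority_min(SCHED_FIFO);

    /* Leave the dynamic (time-sharing) class for a fixed realtime
       priority; the scheduler will no longer depress this process's
       priority as it consumes CPU. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    return 0;
}
```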

UNIX's multiprocessor support is more advanced than NT's. Several variants of UNIX, including HP/UX, Solaris, and AIX, run on large SMPs with 32 or more CPUs. Some versions of UNIX can run on asymmetric multiprocessors. Similarly to NT's kernel, the kernels of most UNIX implementations are fully preemptible and simultaneously executable on different CPUs.

Process management in NT and UNIX has much in common: Both OSs define applications as processes, and both OSs' processes have one or more kernel threads. The differences between NT's and UNIX's process management involve differing priority schemes and subtleties in scheduling algorithms. One notable difference is that NT boosts the priorities of dynamic threads in response to events such as input, whereas UNIX depresses dynamic threads' priorities as the threads consume the CPU. Both OSs try to treat CPU-bound and I/O-intensive threads fairly with respect to other threads, but each OS goes about this task differently.

Memory management. An OS's memory manager is responsible for defining virtual address spaces for application code and data, and for sharing the physical memory resource of the computer among different applications. A memory manager should apportion more physical memory to applications that have heavy memory requirements while remaining responsive to the needs of all applications. A memory manager's policies and implementation determine how well the OS supports multiple simultaneously executing applications.

NT's Memory Manager defines a 32-bit virtual address map, which Figure 3 shows, that spans 4GB. The space is split between user-mode application code and data and kernel-mode code and data. Usually, NT assigns the low 2GB (i.e., 0GB to 2GB) of the virtual address space to user mode; this space is called the user space. NT assigns the upper 2GB (i.e., 2GB to 4GB) of the virtual address space to kernel mode; this space is called the kernel space. NT does not give applications direct access to the kernel-mode portion of the address space, although some versions of NT (e.g., NT Server 4.0, Enterprise Edition) support a switch that changes the virtual address space division to 3GB for the user space and 1GB for the kernel space. The kernel space permanently maps the NT kernel and device drivers, but the user-space mapping changes to reflect the process address map of the currently executing thread. For example, if a Microsoft Word thread executes, Word's code and data map into the user space. However, if the scheduler switches to a Lotus Notes thread, the NT Memory Manager updates the user space with Lotus Notes' code and data.

NT's Memory Manager implements demand-paged virtual memory, in which the Memory Manager brings code and data into physical memory as an application accesses the code and data. The Memory Manager implements the required features of a modern OS, including allowing applications to share portions of their address maps with other applications; enabling copy-on-write, for efficient implementation of shared memory when changes need to remain private to the application making the changes; and enabling memory-mapped files. Memory-mapped files let applications efficiently read and modify file data; any changes the application makes to the mapped file automatically reflect back to the file's on-disk image.
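
The sketch below maps a file into memory through the documented Win32 calls; the file name data.bin is illustrative, and error handling is minimal.

```c
/* Memory-mapping a file on NT -- a sketch. */
#include <windows.h>

int main(void)
{
    HANDLE file = CreateFileA("data.bin", GENERIC_READ | GENERIC_WRITE, 0,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return 1;

    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE, 0, 0, NULL);
    if (mapping == NULL)
        return 1;

    /* The view is demand-paged: pages come in as the program touches them. */
    char *view = (char *)MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, 0);
    if (view != NULL) {
        view[0] = 'X';              /* the change reflects back to disk */
        UnmapViewOfFile(view);
    }
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```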

NT bases physical memory management on assigning each application upper and lower limits on memory. In NT parlance, an application's allotted amount of physical memory is its working set. When an application reaches its working set's upper limit and accesses more data or code, the Memory Manager uses a least-recently-used (LRU) algorithm (the clock algorithm) to find data in the working set to replace. When the Memory Manager must bring in data or code from a file on disk, it will typically bring in slightly more than the application requests. This optimization is called clustering.
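
Win32 exposes the working-set limits directly. The sketch below queries and then requests new limits for the current process; the 2MB and 8MB figures are arbitrary, and NT treats the request as guidance rather than a guarantee.

```c
/* Inspecting and hinting a process's working set -- a sketch. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T minWs, maxWs;
    if (GetProcessWorkingSetSize(GetCurrentProcess(), &minWs, &maxWs))
        printf("working set: min %lu, max %lu bytes\n",
               (unsigned long)minWs, (unsigned long)maxWs);

    /* Ask for a 2MB-to-8MB working-set range for this process. */
    if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                  2 * 1024 * 1024, 8 * 1024 * 1024))
        return 1;
    return 0;
}
```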

Most UNIX memory managers are generally similar to NT's Memory Manager. UNIX memory managers define a virtual address space split between user space and kernel space. Some UNIX variants implement the same 2GB-2GB or 3GB-1GB user-space and kernel-space split NT implements. Other UNIX variants, however, give the majority of the address space to applications, reserving only a few hundred MB for the kernel. Similarly to NT's Memory Manager, UNIX memory managers implement demand-paged virtual memory and support shared memory, copy-on-write, and memory-mapped files.
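
The UNIX counterpart is mmap. This sketch maps a file with MAP_SHARED so that stores propagate to the on-disk image; again the file name is illustrative.

```c
/* Memory-mapping a file on UNIX -- a sketch. */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDWR);
    if (fd < 0)
        return 1;

    struct stat st;
    if (fstat(fd, &st) < 0)
        return 1;

    /* MAP_SHARED writes back to the file, as on NT; MAP_PRIVATE would
       give copy-on-write semantics instead. */
    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p != MAP_FAILED) {
        p[0] = 'X';
        munmap(p, st.st_size);
    }
    close(fd);
    return 0;
}
```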

UNIX memory managers differ from the NT Memory Manager in that they manage memory globally--they do not constrain individual applications to specific upper and lower limits. In addition, when a UNIX application accesses code or data that must be brought into memory, the memory manager uses the clock algorithm or a close variation to find data or code that belongs to any application--not necessarily the application performing the access--to replace the code or data it maps to memory. This policy lets memory-intensive UNIX applications starve other programs, which can lead to a performance bottleneck known as thrashing. To combat thrashing, most UNIX variants have a swapper--a background process that can send entire applications out of memory. The swapper will thus swap out applications to relieve a thrashing condition. Finally, UNIX clusters memory-oriented file accesses in the same way NT does.

A side-by-side comparison of NT and UNIX memory management reveals many similarities: Both implement demand-paged virtual memory, both have similar address-space definitions, and both use variants of the clock algorithm for in-memory data replacement. The differences include the fact that NT manages memory on a per-process basis, whereas UNIX manages memory globally; in addition, UNIX relies on swapping to avoid thrashing, whereas NT avoids that condition through per-process management. A bigger difference is that several variants of UNIX, including Solaris, HP/UX, and Digital UNIX, can make use of 64-bit address spaces on 64-bit processors, whereas a 64-bit version of NT won't be available for at least a year. Using 64-bit address spaces can boost the performance of data-intensive server applications such as database servers.

Security. A modern OS must provide protection for its users' sensitive data, and the features of its security subsystem play a major role in determining the security rating an OS achieves. NT's security capabilities have earned it a C2-capable rating (as a standalone nonnetworked system), which the industry considers the minimum level required of a modern OS. NT's security model relies on the concepts of users and groups of users. NT defines users as having certain privileges, such as the ability to shut down the computer, back up files, or load device drivers. NT users can belong to any number of groups. The NT Object Manager's centralized security support means that the Object Manager can implement any object--including synchronization objects, shared memory, and files--with security.

NT specifies an object's security settings by implementing access control lists (ACLs). An object's ACL can have any number of access control entries (ACEs), and each ACE specifies which actions a particular user or group can perform on the object. Administrators can use this flexibility to precisely control access to an object. As part of fulfilling C2 security requirements, NT can audit successful and failed attempts to access an object. NT implements auditing control in a manner similar to the way it implements access control--by assigning objects auditing ACEs that define which user or group actions generate an audit record. A powerful aspect of the NT security model is that server applications can maintain security information for their privately defined objects and can use security APIs to validate client access to the objects.
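
As a sketch of what the security APIs look like to a server application, the following fragment builds a one-ACE discretionary ACL that grants the built-in Everyone group read access. The buffer size and the chosen right are arbitrary, and production code would check every return value.

```c
/* Building a discretionary ACL with documented Win32 calls -- a sketch. */
#include <windows.h>

int main(void)
{
    SID_IDENTIFIER_AUTHORITY world = SECURITY_WORLD_SID_AUTHORITY;
    PSID everyone = NULL;
    DWORD aclBuf[256];                  /* 1KB is ample for one ACE */
    PACL acl = (PACL)aclBuf;
    SECURITY_DESCRIPTOR sd;

    /* A SID identifies the user or group an ACE applies to. */
    if (!AllocateAndInitializeSid(&world, 1, SECURITY_WORLD_RID,
                                  0, 0, 0, 0, 0, 0, 0, &everyone))
        return 1;

    InitializeAcl(acl, sizeof(aclBuf), ACL_REVISION);

    /* One ACE: Everyone may read. Further ACEs would refine access. */
    AddAccessAllowedAce(acl, ACL_REVISION, GENERIC_READ, everyone);

    /* Attach the ACL to a security descriptor usable with any named object. */
    InitializeSecurityDescriptor(&sd, SECURITY_DESCRIPTOR_REVISION);
    SetSecurityDescriptorDacl(&sd, TRUE, acl, FALSE);

    FreeSid(everyone);
    return 0;
}
```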

The traditional UNIX security model is much less powerful. Similarly to NT, UNIX can assign users group membership, but UNIX groups have no security privileges. Instead, UNIX relies on a special user account, root, that can bypass all security settings. Because UNIX does not employ an application-accessible security infrastructure, UNIX applies security only to files. UNIX defines a file as having an owning user and group, and flags identify which of the read, write, and execute actions the file's user, group, and everyone else can perform on the file.
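
For contrast, essentially the whole of traditional UNIX file security fits in one call; the path and mode bits below are illustrative.

```c
/* Traditional UNIX file permissions -- a sketch; the path is hypothetical. */
#include <sys/stat.h>

int main(void)
{
    /* rw- for the owning user, r-- for the owning group, nothing for
       everyone else: nine bits (plus a few special flags) are the entire
       per-file security vocabulary of traditional UNIX. */
    if (chmod("/usr/mark/notes", S_IRUSR | S_IWUSR | S_IRGRP) < 0)
        return 1;
    return 0;
}
```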

The lack of ACLs and auditing prevents traditional UNIX from achieving a C2-capable security rating. This situation has led virtually every major UNIX vendor to create a proprietary UNIX version that implements these features, mirroring NT's security model. For example, Digital has a C2-capable version of Digital UNIX called Trusted Digital UNIX, and Sun developed Trusted Solaris. Some UNIX variants, such as a version of HP/UX that earned a B2 rating, have earned security ratings higher than C2.

Although NT's security model is superior to traditional UNIX's security model, modern UNIX implementations match NT in security robustness and classification. One notable exception is Linux, which does not implement several requirements of a C2-capable rating, including ACLs and auditing.

I/O. The I/O subsystem plays a major role in determining an OS's scalability and performance. The architecture of an OS's I/O model defines the efficiency with which device drivers interact with hardware to transfer application data to and from peripherals such as storage devices and network cards, and the I/O subsystem must aid drivers in quickly responding to device interrupts. Lack of flexibility in an OS's I/O model can make augmenting existing drivers with new functionality difficult.

NT bases its I/O model (which closely resembles VMS's I/O model) on the file object. Applications direct I/O requests at a file object representing a device resource, and the NT I/O Manager passes the requests to the resource's associated device driver. A powerful aspect of NT's I/O architecture is that drivers can layer on one another: a driver can receive a request from an application and pass the request to another driver for further processing, instead of processing the request entirely itself. Multiple drivers that work together in this way handle an I/O request at different levels of abstraction.

Figure 4 shows a typical example of NT's layered I/O model. In this example, three drivers work together to process file-system requests. The top-level driver understands the file system's on-disk layout, so it takes file-oriented requests and translates them into disk-oriented requests. The middle-level driver receives the disk-oriented requests and converts them into one or more requests specifying physical media, mirroring or striping the requests to achieve fault tolerance. The bottom-level driver simply transfers data to or from a physical device.

Another defining characteristic of the NT I/O model is that NT describes I/O requests in discrete packets of information called I/O request packets (IRPs). The I/O subsystem supports fully asynchronous I/O, which is necessary for high-performance I/O-intensive applications. NT bases its interrupt architecture on abstract interrupt priority levels (IPLs; NT's term for an IPL is interrupt request level, or IRQL) and splits interrupt processing into two phases. In the first phase, a driver's interrupt service routine (ISR) responds to an interrupt while the system is at a high IPL. The ISR performs minimal processing and notifies the I/O subsystem to invoke the driver at a later point, when the system's IPL is lower. At that point the second phase, in which the bulk of the I/O processing occurs, begins. This two-phase scheme keeps the periods when interrupts are disabled to a minimum, so that the system is as responsive to devices as possible.
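
A minimal sketch of asynchronous I/O through the documented Win32 interface follows; the file name is illustrative, and a real program would do useful work before collecting the result.

```c
/* Asynchronous (overlapped) I/O on NT -- a sketch. */
#include <windows.h>

int main(void)
{
    HANDLE file = CreateFileA("data.bin", GENERIC_READ, 0, NULL,
                              OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return 1;

    char buf[4096];
    OVERLAPPED ov = {0};
    ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);

    /* ReadFile returns at once; the I/O Manager queues an IRP that a
       driver completes later. */
    DWORD got = 0;
    if (!ReadFile(file, buf, sizeof(buf), NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING)
        return 1;

    /* ... overlap other work here ... */

    GetOverlappedResult(file, &ov, &got, TRUE);  /* wait for completion */
    CloseHandle(ov.hEvent);
    CloseHandle(file);
    return 0;
}
```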

In many types of NT drivers, a driver writer implements only a portion of the driver. The remaining portion, which performs functionality common to all drivers of its particular type, is a standard component of NT. This architecture, called the class/port/miniport architecture, simplifies the implementation of many device-driver types. NT 5.0 will fully support Plug and Play devices through enhancements to its I/O architecture. These enhancements will let NT 5.0 dynamically detect devices, load the appropriate drivers, and assign hardware resources optimally.

UNIX's I/O model focuses on vnodes, the UNIX equivalent of NT file objects. The UNIX I/O subsystem routes requests that applications direct at vnodes to the device driver with which the particular vnode is associated. UNIX does not describe its I/O requests in discrete packets, and the majority of UNIX implementations do not support a layered driver model (UNIX sometimes supports layered I/O in a specialized network driver model it calls STREAMS). Traditional UNIX supports only synchronous I/O, and UNIX drivers process all levels of abstraction. Several modern UNIX variants, including the leading commercial offerings, extend the traditional I/O model to achieve asynchronous I/O processing capability. The majority of UNIX implementations split interrupt processing into two phases in the same way NT does, and most of these implementations have an interrupt priority scheme that is virtually identical to NT's.

Modern NT and UNIX have similar I/O architectures that are superior in many ways to the I/O model of older UNIX implementations. However, the NT I/O model's layered architecture, which is applied fairly uniformly across device types, makes NT extremely extensible--NT can add new functionality or capabilities to existing device drivers simply by inserting new drivers above or below the existing drivers in a request stack. In addition, NT has a larger API for device drivers to use than most UNIX offerings have, which makes extending the base OS easier.

Miscellaneous comparisons. Other significant areas in any NT-UNIX comparison are file systems, networking support, and portability. As in the other areas I've discussed, NT and UNIX have very similar file-system and network-driver architectures. Both NT and modern versions of UNIX implement virtual-file caches and support zero-copy file serving. In their network drivers, both OSs divide work among network adapter drivers, network protocol drivers, and a network API layer.

Where NT and UNIX differ is in which file-system types, networking APIs, and networking protocols each OS supports. However, even different versions of UNIX vary in these areas. All UNIX versions support at least one file system that is comparable in capability to NTFS, and all UNIX versions support the socket API and the TCP/IP protocol, as NT does with its Winsock API and TCP/IP stack.
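
To illustrate how closely Winsock mirrors the socket API, the sketch below creates a TCP endpoint on either OS; only Winsock's startup and cleanup steps differ.

```c
/* One socket program, two OSs -- a sketch. */
#ifdef _WIN32
#include <winsock2.h>                          /* Winsock mirrors BSD sockets */
#else
#include <sys/socket.h>
#include <unistd.h>
#endif

int main(void)
{
#ifdef _WIN32
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 0), &wsa) != 0) /* the one Winsock-only step */
        return 1;
    SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
    if (s == INVALID_SOCKET)
        return 1;
    closesocket(s);
    WSACleanup();
#else
    int s = socket(AF_INET, SOCK_STREAM, 0);   /* the same call, verbatim */
    if (s < 0)
        return 1;
    close(s);
#endif
    return 0;
}
```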

It sometimes seems as if NT gets less portable by the day. NT currently supports the Alpha and x86 architectures. Although you can probably find a version of UNIX that runs on any given hardware platform, the leading commercial UNIX releases are even less portable than NT, running only on their vendors' proprietary CPU type, and sometimes on the x86 as well. For example, Sun Microsystems developed Solaris for its SPARC processors but also ports Solaris to the x86. IBM's AIX runs only on the PowerPC chip, which IBM codeveloped with Motorola.

Which OS Is Better?
I'm sure that most of this article's readers could happily make their own proclamations as to which OS is superior. However, the only truly objective measures available are the results of industry-accepted benchmark tests. Therefore, here are the best results NT and UNIX achieved on two major benchmarks. The first benchmark is the Standard Performance Evaluation Corporation's (SPEC's) SpecWeb Web-serving benchmark, and the second benchmark is the Transaction Processing Performance Council's (TPC's) TPC-C database-serving benchmark. Major industry vendors formed SPEC and TPC in the 1980s to independently define and validate benchmarks that compare systems with one another as objectively as possible. The benchmark results hardware and OS manufacturers submit to SPEC and TPC are generally the best-of-breed numbers for hardware and OS platforms. Companies that can claim leading numbers on a benchmark gain prestige, so most manufacturers invest significant time and energy to produce and report high numbers.

You can view the complete listing of approved SPEC benchmark results at http://www.specbench.org, and the complete listing of TPC results at http://www.tpc.org. I drew the results I relate from these sites. These results are current as of mid-October. Rather than only reporting the world record for each benchmark, these results show how NT and UNIX compare on uniprocessor and multiprocessor systems.

Graphs 1, 2, and 3, page 130, show the SpecWeb results. These graphs' vertical axes represent the number of Web requests the tested systems serviced per second. NT holds the SpecWeb record for 1-way (uniprocessor) systems, as Graph 1 shows. UNIX takes the lead in the SpecWeb results for 2-way and 4-way systems, as Graph 2 and Graph 3, respectively, show. UNIX also holds the absolute world record: 13,811, achieved on a 16-way HP/UX system.

Graphs 4, 5, and 6, page 131, show the TPC-C results. The vertical axes on these graphs represent the number of database requests the tested system serviced per minute. Graphs 4, 5, and 6 also show cost in dollars per transaction, which TPC calculates from the total cost of the hardware and software a system uses to achieve the benchmark. Graph 4 and Graph 5 show that NT has the advantage in the TPC-C results for 2-way and 4-way machines, respectively (companies don't usually report uniprocessor TPC-C numbers). UNIX has the edge in the TPC-C results for 8-way systems, as Graph 6 shows. UNIX again takes the TPC-C world record with 102,541, achieved on a 96-way Digital UNIX system. This world record puts the best NT number (16,257, achieved with an 8-way system) to shame, as would many scores achieved by UNIX clusters. However, NT's cost per transaction is consistently half that of comparable UNIX machines, regardless of the number of processors.

What these benchmarks show is that, contrary to popular belief, NT can compete head-to-head with UNIX on high-end servers. The results also demonstrate that NT can scale well to four processors on enterprise-level applications. The results show, too, that NT is not a contender on systems with more than four CPUs, and that UNIX clustering solutions are far ahead of anything NT has to offer. However, NT is a relative newcomer to multiprocessors and clustering. Microsoft has recently begun to focus on multiprocessing and clustering capabilities as part of the company's plan to move NT into the high-end enterprise-scale arena. It won't be long before UNIX finds NT on its heels.

So Which OS Is Really Better?
The fact that an OS implements a certain feature in a particular subsystem or achieves a certain number on a benchmark doesn't necessarily make that OS good or bad, better or worse. Many factors go into determining whether one OS is better than another. Most of these factors depend on the importance of certain attributes I haven't discussed, such as application availability, initial cost, cost of support and maintenance, compatibility with existing infrastructure, and ease of use. The point is that good and bad are highly subjective terms, and one person's or organization's definition of those terms won't necessarily be the same as another's. However, trends in the marketplace over the past few years are making one thing perfectly clear: NT is here to stay, and it is becoming the choice of a new generation of IT professionals.
