Exchange Server's Fab 4
Discover what you can do with the CPU, memory, the disk subsystem, and your network to maximize Exchange Server performance.
September 5, 2000
Several fast-food chains offer super-sized meal combinations. The principle is simple: You get more food for a small additional charge. Imagine the following conversation between you and a vendor:
You: "I'd like a dual-Pentium III Xeon system with 512MB of RAM and a 9GB hard disk, please."
Vendor: "Would you like to try our super-size server special? For only $15 more, you get a RAID 5 array of three 18GB hard disks and a free copy of Quake III."
If only purchasing Microsoft Exchange Server systems worked that way. One reason maximizing Exchange Server performance isn't so easy is that no single factor will, by itself, make a big difference. Many people assume that a computer's performance depends only on its CPU speed. In the old days, when a 4.77MHz 8088 was a powerful CPU, this belief might have been true. But with modern hardware and OSs, performance typically depends on four resources (the CPU, memory, the disk subsystem, and the network) and on how the applications on a target computer use those resources. By understanding those resources and processes, you can learn how to tune each resource, and in turn each aspect of performance, for maximum efficiency.
Exchange Server Performance Factors
When a resource limits an application's performance, we say that the application is bound (e.g., CPU-bound, network-bound). Which resource binds Exchange Server performance? The answer depends on what the server is doing. Network speed might limit a public folder server, whereas CPU or disk throughput might limit a heavily loaded mailbox server. Just as balanced nutrition is important to people, balanced performance is important to your servers.
The CPU
The CPU is the most obvious candidate for inclusion in the Fab Four. Every instruction that executes must go through the CPU, and most other system components exist to get data from, or feed data to, the CPU. CPU speed plays a big role in overall CPU performance. All other things being equal, you'd expect a 900MHz processor to be 1.5 times as fast as a 600MHz processor. However, speed isn't the only player. Windows 2000, Windows NT, and Exchange Server are sensitive to three aspects of CPU performance: the CPU speed, the number of CPUs, and the amount of onboard cache.
In addition to CPU speed, the number of CPUs makes a big performance difference. Two 450MHz Pentium II CPUs can perform better than one 800MHz Pentium III CPU. Why? Win2K, NT, and Exchange Server are heavily multithreaded, which means that multiple tasks might execute simultaneously within one process. The more CPUs you have, the greater the number of threads that can execute at the same time. Since I discovered that my dual 90MHz Pentium system running NT Workstation 3.51 compiled code more than twice as fast as a single-CPU machine of equivalent speed, I've preferred multiprocessor machines.
The amount of onboard CPU cache is also important. Remember the Pentium Pro system? One reason that processor was so popular for servers was that its large onboard cache let NT and Exchange Server run more efficiently than faster processors with smaller caches permitted. This performance difference is also one reason why a cost differential exists between ordinary Pentium-family processors and their Xeon counterparts. The Xeon systems have much larger onboard caches than standard Pentium systems have; this difference boosts performance—and price.
To measure CPU usage, use the % Total Processor Time counter on the NT Performance Monitor's System object. (For information about using Performance Monitor, see Michael D. Reilly, Getting Started with NT, "Performance Monitor and Networks," May 1998.) This counter gives you a good overview of total CPU utilization; you can also monitor individual CPUs through the Processor object's % Processor Time counter, if you prefer. I assume a server is CPU-bound when its average processor utilization stays higher than approximately 75 percent, or when the System object's Processor Queue Length counter stays at least one higher than the number of installed CPUs. To isolate peaks, you can export Performance Monitor data into Microsoft Excel and look at the raw figures in a chart. This process lets you monitor the data at 30-second intervals and still get a good overall view.
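As a rough sketch, those two rules of thumb (sustained utilization above roughly 75 percent, or a processor queue longer than the CPU count) can be expressed as a simple check. The function name and thresholds below are illustrative only; they come from the rules of thumb above, not from any Microsoft tool:

```python
def cpu_bound(avg_utilization_pct, queue_length, num_cpus):
    """Apply two rules of thumb for spotting a CPU-bound server.

    avg_utilization_pct: sustained % Total Processor Time
    queue_length: sustained Processor Queue Length
    num_cpus: number of installed processors
    """
    # CPU-bound if utilization stays above ~75 percent, or if the run queue
    # stays at least one deeper than the number of CPUs.
    return avg_utilization_pct > 75 or queue_length > num_cpus

# A dual-CPU server averaging 80 percent utilization is CPU-bound.
print(cpu_bound(80, 2, 2))   # True
# The same server at 60 percent with a short queue is not.
print(cpu_bound(60, 2, 2))   # False
```

Feed the function averages gathered over a representative interval, not single samples; a momentary spike past either threshold doesn't make a server CPU-bound.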
Memory
A joke in the aviation community goes, "You can never have too much fuel on your aircraft (unless it's on fire)." You can say the same of RAM (except that it isn't flammable). To cache pages from the public and private Information Store (IS) databases, Exchange Server uses as much RAM as it can get. This process, called dynamic buffer allocation, makes Exchange Server look like a RAM hog because Exchange Server gradually sucks up all the server's available RAM. However, if other applications on the server request a lot of RAM (as measured by the number of memory pages swapped in and out per second), Exchange Server gives up some of its hoard.
If your server has less physical RAM than it needs, Win2K and NT can make up the difference by using virtual memory, which allocates a chunk of disk space (i.e., a pagefile) as a backing store for RAM. By swapping pages in and out of the pagefile, the OS can simulate a larger amount of RAM—at a performance cost. The Pages/sec counter on Performance Monitor's Memory object shows you how much swapping is occurring. You can also use Task Manager to take a quick snapshot of how much physical and virtual memory your server is using. If your server has a sustained paging rate higher than 8 to 10 pages per second, you need to add RAM.
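The paging rule of thumb above lends itself to the same treatment. This is a hedged sketch, and the 10 pages-per-second threshold is simply the top of the article's 8-to-10 range:

```python
def needs_more_ram(pages_per_sec):
    """True if the sustained paging rate exceeds the 8-10 pages/sec rule of thumb."""
    return pages_per_sec > 10

# Average a series of Pages/sec samples before applying the rule,
# so a single paging burst doesn't trigger a false alarm.
samples = [12, 15, 9, 14, 11]
avg = sum(samples) / len(samples)
print(needs_more_ram(avg))   # True: a sustained rate of 12.2 pages/sec
```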
The Disk Subsystem
Most Exchange Server performance problems are I/O-related. The number and kinds of disks and controllers you have can make a huge difference in overall Exchange Server performance because Exchange Server mixes two types of I/O operations. Transaction logs use sequential I/O, in which the store process always writes new data to the end of the log file. The IS databases usually use random I/O, in which Exchange Server writes or reads data (in 4KB chunks) to or from anywhere in the database file. Mixing sequential and random I/O on the same spindle slows down both. I strongly suggest keeping your transaction logs and IS databases on separate physical drives, with or without RAID.
The total number of I/O operations on your system is important, too. Any disk drive can satisfy a certain number of I/O requests per second. In a typical simple Exchange Server configuration, the OS, pagefile, transaction logs, and store databases must share one disk drive. If your disk supports 15 I/O operations per second, and the OS and pagefile use 10 of those operations, that leaves 5 operations per second for Exchange Server. If you add a second 15-operations-per-second disk and dedicate it to Exchange Server, Exchange Server gets that disk's 15 operations plus the 5 left over on the first disk: 20 operations per second, a fourfold increase. In general, more disk spindles will give better performance.
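The arithmetic in that paragraph can be laid out explicitly. The per-disk and OS figures below are the text's illustrative numbers, not measurements:

```python
# Illustrative numbers from the text: each disk satisfies 15 I/O ops/sec,
# and the OS plus pagefile consume 10 of them.
DISK_IOPS = 15
os_and_pagefile = 10

# One shared disk: Exchange Server gets only what is left over.
one_disk = DISK_IOPS - os_and_pagefile    # 5 ops/sec

# Add a second disk dedicated to Exchange Server: the leftover capacity
# on the first disk plus the full capacity of the second.
two_disks = one_disk + DISK_IOPS          # 20 ops/sec

print(two_disks / one_disk)   # 4.0 -- the fourfold increase
```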
What about RAID? Choosing the wrong RAID type can actually decrease your performance because different RAID levels work best with different types of I/O. For example, mirroring (i.e., RAID 1) provides no write speedup, because every write must go to both disks, although a good controller can split reads across the mirror. However, choosing the right RAID setup can bring a large performance boost because a good RAID controller will do a lot of behind-the-scenes work. In addition, RAID can add data safety. RAID 0 (i.e., striping) provides the best available read and write performance but no data protection at all. RAID 5 offers good read performance plus redundancy, but the parity calculation makes it suffer under heavy write loads. You need to choose the configuration that meets your needs and budget.
Performance Monitor includes many disk-related performance counters. The most useful are probably the Logical Disk object's Disk Queue Length counter, along with the counters that track reads and writes per second and the average time per read or write. (You must manually enable the disk counters with the diskperf -y command, or diskperf -ye for striped sets, and then reboot.)
The Network
Exchange Server is seldom network-bound, but it can be. You'll notice very high network utilization on heavily loaded servers. Often, users blame the network for slow client performance even though name-resolution problems, improper binding order, or slow dial-up connections are the culprits. Your Exchange Server system can generate a lot of traffic (especially if you're using many public folder replicas) in multiserver sites and multisite organizations, but disk, RAM, and CPU availability are more likely to slow your server's performance.
Toe the Baseline
How do you know when you're outgrowing your server? No hard-and-fast rule applies; a server that is too slow for one user might perform at the perfect speed for another. The best way to make upgrade decisions is to look at hard performance data, which you can gather from Performance Monitor. The key is to gather baseline data that shows where your server performance is fast and where it's slow. Choosing the right performance counters for each resource category will give you a good picture of your server's performance under typical conditions. Armed with baseline data, you can objectively assess whether your server has slowed down and, if so, where the problem lies. Be sure to take baseline data from both average and peak load periods to ensure you get an accurate picture.
Choose a Configuration That Makes Sense
How do you get the most bang for your buck when you want to upgrade your server? Whether you're buying a new server or refitting an older one, these recommendations will guide you in the right direction.
Plan ahead. Unless you're sure that you'll never run Win2K or Exchange 2000 Server on the box, consider a server that can accommodate at least 512MB of RAM, two CPUs, and five 3.5" disks. You don't need to fill that expansion capacity immediately, but you'll want it when you decide to upgrade the OS or Exchange Server.
Take two. Two slower CPUs will probably make you happier than one fast CPU. And you'll avoid the current price premium for the fastest members of the Pentium family.
Buy plenty of I/O. In other words, buy as many I/O operations per second as you can afford. My ideal configuration uses a mirrored (i.e., RAID 1) pair of disks for the OS and pagefile, a second RAID 1 pair for the transaction logs, and a RAID 5 array for the IS databases. If your budget doesn't permit such a configuration, use three disks, and make them SCSI, please. (Putting multiple disks on the same IDE channel is a performance loser.) The more disks you have in a RAID 5 array, the more concurrent I/O operations per second you can satisfy. Pierre Bijaoui, senior solution architect for Compaq's Applied Microsoft Technologies Group, has presented numbers showing that IS activity for an active user averages 0.1 to 0.15 I/O operations per second and that log-file activity adds another 0.02 operations per second. On a 500-user server, IS and log-file activity can peak at 10 to 15 times these figures, so if you want to put 1000 users on your server, plan your disk I/O budget (and hardware budget) accordingly. Choosing the best disk controller you can afford is also crucial: A good controller that supports write-back caching can immediately boost disk performance by as much as 50 percent.
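To see how those per-user figures translate into a disk I/O budget, here is a hedged sketch of the sizing arithmetic. The function name and default values are illustrative; the 0.15, 0.02, and 15x figures are the worst-case ends of the ranges cited above:

```python
def peak_iops_budget(users, per_user_is=0.15, per_user_log=0.02, peak_factor=15):
    """Worst-case I/O budget: per-user IS plus log activity, scaled to peak.

    Defaults take the high end of the cited ranges: 0.15 IS ops/sec and
    0.02 log ops/sec per active user, with peaks 15x the average.
    """
    return users * (per_user_is + per_user_log) * peak_factor

# Worst-case disk I/O budget for a 1000-user server.
print(round(peak_iops_budget(1000)))   # 2550
```

By this arithmetic, a 1000-user server needs roughly 2,550 I/O operations per second at peak, which is why the paragraph above stresses spindle count and controller quality.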
Stock up on RAM. When RAM is cheap, stock up. This advice is wise whether your server runs Exchange Server or the Mac OS. Having more RAM than you need is better than needing more RAM than you have.
Don't let Exchange Server's Fab Four intimidate you. Learn how to use their power for good, and reap the super-sized rewards.