Windows 2000 Datacenter Server
Do you really need Windows 2000 Datacenter Server? Mark Smith explains why Datacenter takes Win2K to a new level.
June 27, 2000
Who needs it?
You're probably hearing a lot about Microsoft's release of Windows 2000 Datacenter Server, which the company positions as an ideal platform for server consolidation and enhanced scalability. Datacenter extends the boundaries of Win2K Advanced Server: from 8GB to 64GB of RAM, from 8 to 32 processors, and from 2-node to 4-node clusters.
Last June, Unisys demonstrated a 32-way system that calculates lowest-cost airline fares. Unisys designed the system to analyze 8.5 billion flight segments per day and to allow Internet access through 200,000 terminals. According to the company, the fare-calculation application achieved near-linear scalability when the system's CPU count doubled from 16 to 32. This linear improvement revives the old argument that pits scaling up against scaling out (i.e., increasing server size versus adding servers). For example, are you better off running an application on one 16-CPU system or on four clustered 4-CPU servers? The key to answering this question is load balancing. Can your systems easily balance an application among a cluster's multiple nodes? If the answer is yes, you can implement load-balancing clustering, or scaling out. If the answer is no, then large SMP systems (with more than eight CPUs), or scaling up, are the only way to go.
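The scale-out half of that choice boils down to spreading requests across interchangeable nodes. Here is a minimal round-robin dispatcher sketching the idea; the node names and request count are purely illustrative, and a real load balancer (such as NLB) works at the network layer, not in application code.

```python
from itertools import cycle
from collections import Counter

# Hypothetical cluster of four 4-CPU nodes (names are illustrative).
nodes = ["node1", "node2", "node3", "node4"]

def dispatch(requests, nodes):
    """Round-robin each incoming request to the next node in turn,
    a simple stand-in for what a network load balancer does."""
    assignments = Counter()
    ring = cycle(nodes)
    for _ in range(requests):
        assignments[next(ring)] += 1
    return assignments

# 1,000 requests spread evenly: each node handles 250.
print(dispatch(1000, nodes))
```

The appeal of this model is that adding capacity means adding a name to the list; the catch, as the article notes, is that the application must tolerate its requests landing on any node.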
Let me illustrate the pros and cons of each approach. If you use Win2K Server's Network Load Balancing (NLB) feature, for example, you can easily load-balance as many as 32 Microsoft IIS nodes. Proponents of large SMP systems will tell you that maintaining and managing one 32-way system is easier than load-balancing eight 4-way systems. Proponents of clustering will tell you that achieving maximum performance and failover protection is easier with eight clustered 4-way servers. The challenge for the clustering camp is to make administering eight clustered nodes as easy as administering one server; Microsoft is positioning Application Center Server as the solution to this problem. The challenge for the SMP camp is to ensure that applications scale well to 32 CPUs and to provide adequate failover protection. In the past, Microsoft server applications haven't scaled linearly beyond four CPUs: If you expected a 400 percent performance improvement when scaling from 4 CPUs to 16 CPUs, you were disappointed.
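Why does 4-to-16-CPU scaling disappoint? Amdahl's law gives the classic answer: any serial fraction of the workload caps the speedup no matter how many CPUs you add. The sketch below assumes, purely for illustration, that 90 percent of an application's work parallelizes.

```python
def amdahl_speedup(cpus, parallel_fraction):
    """Amdahl's law: speedup over a single CPU when only
    parallel_fraction of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cpus)

p = 0.90  # assumed parallel fraction, for illustration only
four = amdahl_speedup(4, p)      # ~3.08x over one CPU
sixteen = amdahl_speedup(16, p)  # ~6.40x over one CPU

# Quadrupling from 4 to 16 CPUs yields only ~2.1x more throughput,
# not the 4x (400 percent) a linear scaler would expect.
print(sixteen / four)
```

Under this assumption, even a well-behaved application falls far short of linear scaling, which is why the Unisys near-linear 16-to-32 result was notable.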
Another example is Microsoft SQL Server. SQL Server 7.0 can't easily load-balance transactions, so the only way to scale the application is to move it to a system with more CPUs. Over time, SQL Server will add load-balancing features, but until that capability is seamless for developers, the best way to scale SQL Server is with large SMP systems.
Datacenter takes Win2K to a new level, providing an answer to those who have complained that NT can't scale sufficiently. However, merely releasing Datacenter won't eliminate the complaints. Microsoft must prove that Datacenter can scale to accommodate a variety of applications, including applications that power Microsoft's own Web services. For example, Microsoft operates the largest Web-based email application, Hotmail. A 10-CPU Sun Microsystems server powers this application. Can Microsoft port the application to a 10-CPU Intel-based server running Datacenter and achieve the same level of performance? That kind of proof is what large-SMP-system proponents are looking for before they give Datacenter the thumbs-up.
By providing both scaling options, Microsoft has tried to avoid choosing between scaling out and scaling up. Microsoft's application development strategy is to use load balancing to provide the best price/performance. This approach lets a company buy smaller systems and simply add servers to scale applications as required. This scaling-out strategy provides the most flexibility and the best protection against system failure. Critics argue that scaling out is simply a ploy to sell more copies of applications and OSs. Although I don't buy that argument, I do think Microsoft needs to improve software pricing for clustered environments. Why, for example, pay for a separate license for an instance of SQL Server that resides on a clustered node only as insurance in case of a failure? That instance is not in active use, so why pay for it?
My advice is to scale out and load-balance when possible, for the flexibility, price/performance, and failure protection that approach offers. Clustering is a proven technology that will only improve with time. Whether you favor scaling up or scaling out, it's important to know that Datacenter gives you more options.