Windows Server 2003 R2 Review Part 2: Major New Features
Windows Server 2003 R2 is still an astonishingly full-featured upgrade and will be very valuable to many customers. What's odd is that most of the new features are pretty stealthy, and while Microsoft likes to pigeonhole product features into three easy-to-read bullet points (or pillars, as the company calls them), R2's numerous features are instead all over the map.
December 15, 2005
In part one of my review of Windows Server 2003 R2 ("release 2"), the first Update Release of Windows Server 2003, I focused on the events leading to the development of R2 and the wider roadmap Microsoft has adopted for its server products. Now, it's time to look at the new features R2 brings to the table. These features are many and varied.
But don't be confused by the fact that the two biggest new features that Microsoft originally intended for R2 have been pushed back to Longhorn Server (I look at those missing features in Part 4): R2 is still an astonishingly full-featured upgrade and will be very valuable to many customers. What's odd is that most of the new features are pretty stealthy, and while Microsoft likes to pigeonhole product features into three easy-to-read bullet points (or pillars, as the company calls them), R2's numerous features are instead all over the map. For that reason, I've broken them out into functional categories. In this part of the review, we'll jump right into the major new features of R2. Then, in part three, we'll look at the many other R2 features.
Remote Server Management
Microsoft likes to lump R2's remote server management capabilities into a category it calls "branch office," but I think these features apply to a far wider range of scenarios. Today, many large enterprises face the increasingly complex challenge of managing servers in remote offices (which are sometimes, indeed, called branch offices). But these aren't the only situations in which administrators find themselves dealing with remote servers. Any time you have to deal with distance, a potentially weak WAN connection, and an office full of employees without their own IT staff, the same problems emerge.
In my own experience, I recall having to fly out to San Jose repeatedly to perform emergency service on our NT 4.0 server years ago, and some companies have institutionalized this practice by hiring "flying doctors" who travel from site to site around the world managing servers for distributed companies. Then there are the remote offices without IT staff that often need to be led through server management tasks, step by step, over the phone. And even in those instances where problems can be fixed remotely, balky WAN links often make the remote administration process difficult or prohibitively expensive. All kinds of businesses--banks, retail stores, whatever--run into these issues all the time.
With Windows Server 2003 R2, Microsoft is taking the first steps to make remote server management easier and less expensive. At a low level, a lot of gains come simply from making network traffic more efficient: Microsoft has tuned TCP/IP, minimized network traffic with delta compression, and improved such technologies as the Background Intelligent Transfer Service (BITS) for WAN access. To combat high latency, round trips over the WAN have been minimized. And then there are some major new tools that address core remote management issues.
"With R2, we created a series of tools that start addressing the whole branch office issue," Julius Sinkevicius, a Microsoft senior product manager told me in a recent briefing. "This is not a be-all, end-all solution for branch offices, but it's the first major step where we start addressing the overall cost of remote administration and of backups."
DFS Namespace and DFS Replication
Most of the new technologies for remote server management in R2 are related to the Distributed File System, or DFS, which was first introduced in Windows 2000 Server. DFS allows you to create a single namespace for file shares that are located on different servers, redirecting remote users as required to the correct physical locations. Among other niceties, DFS is site aware: Based on the IP address of the user accessing a DFS share, DFS will ensure that they're using the closest possible DFS server. Abstraction in DFS is important for a variety of reasons, but the most obvious is that administrators can move shared folders to different locations, even on different servers, without disrupting or even informing users of the change.
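To make that abstraction concrete, here's a tiny conceptual sketch, in Python rather than anything resembling a real Windows API, of what a DFS namespace does: map one logical path onto several physical share targets and hand each client the target closest to its own site. The server names and paths are, of course, invented for illustration.

```python
# Conceptual illustration only -- not a real Windows or DFS API.
# A DFS namespace maps one logical path to several physical share targets
# and hands each client the target in (or nearest to) its own site.

NAMESPACE = {
    r"\\contoso\public\docs": [
        {"target": r"\\nyc-fs01\docs", "site": "NewYork"},
        {"target": r"\\sjc-fs02\docs", "site": "SanJose"},
    ],
}

def resolve(logical_path, client_site):
    """Return the physical share a client should use for a DFS path."""
    targets = NAMESPACE[logical_path]
    # Prefer a target in the client's own site; otherwise fall back to any target.
    for t in targets:
        if t["site"] == client_site:
            return t["target"]
    return targets[0]["target"]

print(resolve(r"\\contoso\public\docs", "SanJose"))  # \\sjc-fs02\docs
```

The point of the abstraction is right there in the lookup: the logical path users see never changes, even when the physical targets behind it do.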
A related technology called the File Replication Service (FRS) is used to replicate DFS content between the servers, called replicas, that host DFS roots. (FRS is also used by Active Directory.) From a network performance perspective, the problem with FRS is that even a single-byte change to a file stored in DFS will trigger replication of that entire file. This usually isn't an issue on a local LAN, but you can see how that might cause problems with remote offices over WANs.
In R2, DFS version 2 has been renamed DFS Namespace. And FRS has been succeeded by a new DFS 2 technology called DFS Replication. DFS Replication uses a multimaster replication model that allows DFS-based data to be replicated more efficiently over the network than was possible with FRS. A good part of this efficiency is due to a new technology called Remote Differential Compression (RDC), which replicates only those parts of a file that have changed, dramatically lowering network bandwidth requirements. If you change the title in a PowerPoint presentation, I was told, only that title change will be replicated across DFS, not the entire file. This is far more efficient and, ultimately, less expensive.
Replication in R2 is also self-healing, and now supports failback in addition to failover. "If the normal DFS server a user accesses is unavailable, he will be automatically redirected to another DFS server that's not at his site," Ward Ralston, a Microsoft senior technical product manager, told me. "And when that server is back online, the user is automatically switched back to the original server at his location."
How important is RDC? "A change title [operation] in a 3 MB PowerPoint file only requires a 16 K change to replicate that to all the other servers, instead of a 3 MB change on every server," Ralston noted. Oddly enough, a typical PowerPoint file is a relatively tame example. Large PST files (used by Outlook) see even greater savings with RDC.
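Conceptually, RDC divides files into chunks, compares chunk signatures on each side of the link, and transfers only the chunks that differ. The following sketch is my own simplification, with fixed 16 KB chunks standing in for RDC's smarter content-defined chunking, but it shows why a small edit to a 3 MB file can travel as roughly a 16 K change:

```python
import hashlib

# Simplified illustration of differential replication: split a file into
# fixed-size chunks, hash each chunk, and send only the chunks whose
# hashes differ on the receiving side. (Real RDC uses content-defined
# chunking and recursive signatures; this is just the core idea.)

CHUNK = 16 * 1024  # 16 KB chunks

def signatures(data):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    return [hashlib.md5(c).hexdigest() for c in chunks], chunks

def delta(old, new):
    old_sigs, _ = signatures(old)
    new_sigs, new_chunks = signatures(new)
    # Send only the chunks that are new or changed.
    return [(i, new_chunks[i]) for i, sig in enumerate(new_sigs)
            if i >= len(old_sigs) or sig != old_sigs[i]]

old = bytes(3 * 1024 * 1024)              # a 3 MB file of placeholder data
new = bytearray(old)
new[0:9] = b"New title"                   # change a few bytes near the start

changed = delta(old, bytes(new))
print(len(changed), "chunk(s) to transfer,",
      sum(len(c) for _, c in changed), "bytes instead of", len(new))
```

Run it and you get one 16,384-byte chunk to transfer instead of the full 3,145,728 bytes, which is essentially the PowerPoint example Ralston described.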
But replication isn't just more efficient in R2. It's also more manageable, with a more granular scheduling facility. Administrators can now schedule DFS replication in 15-minute increments and can control how site-to-site failover works. And DFS configuration data is now stored in Active Directory (AD).
Overall, the changes to DFS are monumental in R2, and though they are largely aimed at remote servers, these features will benefit virtually anyone using DFS. Thus, companies that rely on DFS should consider migrating those servers to R2 as soon as possible.
R2 storage improvements
Microsoft's storage initiatives are varied and scattered across different server platforms. In the core Windows Server products, the company offers basic storage technologies such as the Volume Shadow Copy Service (VSS), the Virtual Disk Service (VDS), the Encrypting File System (EFS) and NTFS, Automated System Recovery (ASR), and so on. There's also basic support for Storage Area Networks (SANs). Additionally, Microsoft has created other storage-related server products, such as its Network Attached Storage (NAS) solution, Windows Storage Server 2003 (itself recently updated to an R2 version, which I'll review separately soon), and System Center Data Protection Manager 2006, a disk-based backup and recovery solution.
In R2, Microsoft has upgraded some of its core storage technologies. In addition to the DFS changes mentioned earlier in this review, R2 includes the new File Server Resource Manager (FSRM), which provides volume-based quota management, and the Storage Manager for SANs (SMfS), which finally makes provisioning SAN-based disk space simple. R2 also includes two of the seven components that made up Services for Unix; these components are together known as Services for NFS.
Why the sudden interest in storage? "The growth in storage is 60 to 80 percent per year," Claus Joergensen, a program manager in Microsoft's storage group, told me in a recent briefing. "But the cost of managing that storage is estimated to be upwards of ten times the actual cost of the storage itself. So there's a significant challenge--and opportunity--to improve the cost of owning this storage."
Storage management
The first major new storage feature in R2 is FSRM, which provides a front end to R2's new storage management functionality. FSRM is a set of tools designed to help administrators set and manage storage quotas on a per-volume or per-folder basis. Two snap-ins are loaded when you launch FSRM: Storage Resource Management, which is used to create quotas and file screens (for blocking the use of certain file types), and Scheduled Storage Tasks, which is used to schedule storage reports, both automatically and on demand.
Storage quotas (sometimes called disk quotas) are probably familiar to most Windows administrators. But prior to R2, Windows Server 2003 (and Windows 2000 Server) supported only a much simpler and more limited form of quota management based on a feature of the NTFS file system. NTFS quotas can only be established per user, per volume, whereas R2 allows quotas by folder or by volume, which are more easily extended. With NTFS quotas, disk usage is calculated by logical file size only, not actual disk usage, as it is in R2. And NTFS quotas don't include any reporting mechanism; instead, you have to write your own application or purchase a third-party application that can read the event logs. As I'll describe in a moment, R2's FSRM includes extensive quota reporting capabilities.
In R2, quotas can be hard or soft. With a hard quota limit, users can no longer store documents once the quota limit for that storage resource is reached. With a soft quota limit, the administrator gets a warning when the quota limit is reached, but the user is allowed to continue adding files to the resource. There are also settings that let administrators be notified automatically when quotas reach a certain percentage of their limit. That gives admins time to extend the storage area as needed, or to contact users about tightening their belts.
One nice feature is that you can set what's known as an auto quota. Here, you set a quota template on a master folder. Every folder that sits underneath that folder will inherit the quota used by the template. As you might imagine, this feature is particularly useful for home directories or other situations in which you're likely to create a large number of similarly configured quotas. "You only need to set the quota once on the master folder and every subfolder will inherit that limit," Joergensen said. "One setting can touch a lot of users."
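The hard/soft distinction and the percentage-based notifications boil down to a simple policy check on every write. Here's a rough sketch of that logic under my own assumptions; FSRM obviously does this inside the storage stack, not in script:

```python
# Rough sketch of hard vs. soft quota behavior with threshold warnings.
# Purely illustrative -- FSRM enforces this inside the storage stack.

class Quota:
    def __init__(self, limit_bytes, hard=True, warn_at=(0.85, 0.95)):
        self.limit = limit_bytes
        self.hard = hard              # hard: block writes; soft: warn only
        self.warn_at = warn_at        # notify admins at these fractions of the limit
        self.used = 0

    def try_write(self, size, notify):
        projected = self.used + size
        if self.hard and projected > self.limit:
            notify(f"BLOCKED: write of {size} bytes would exceed {self.limit}-byte quota")
            return False
        self.used = projected
        if not self.hard and self.used > self.limit:
            notify(f"WARNING: soft quota exceeded ({self.used}/{self.limit} bytes)")
        else:
            for frac in self.warn_at:
                # Fire the notice the first time usage crosses this threshold.
                if self.used >= frac * self.limit > self.used - size:
                    notify(f"NOTICE: quota {int(frac * 100)}% full")
        return True

q = Quota(limit_bytes=100 * 1024 * 1024, hard=True)   # 100 MB hard quota
q.try_write(90 * 1024 * 1024, print)                  # crosses the 85% threshold
q.try_write(20 * 1024 * 1024, print)                  # blocked by the hard limit
```

An auto quota is then just this same template stamped onto every subfolder as it's created, which is why one setting can cover hundreds of home directories.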
On a related note, R2's quotas support a number of reporting mechanisms, which is certainly better than the default scenario in previous versions, where disk quota problems only became obvious after disk space was used up. R2 can report disk quota issues to the administrator, as previously noted, but it can also warn users automatically via email when their quota is nearly full. R2 supports a wide range of report types, including a set of simple, predefined reports and configurable reports that corporations can fine-tune for their own environments. Predefined reports include largest files, most and least recently used files, files by owner, files by group, and so on. Configurable reports can be generated to span multiple volumes, folders, or shares. Reports can be produced in DHTML, HTML, XML, CSV (comma-separated values), or text format, and they can be run ad hoc (on the fly, when needed) or scheduled in various ways.
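To picture what one of these reports amounts to, here's a toy version of a "largest files" report written out as CSV. FSRM generates such reports natively; the share path below is just a placeholder of my own.

```python
import csv
import os

# Toy version of a "largest files" storage report in CSV format.
# FSRM produces these natively; this just illustrates the idea.

def largest_files_report(root, out_path, top=25):
    rows = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                rows.append((os.path.getsize(path), path))
            except OSError:
                continue  # skip files we can't stat
    rows.sort(reverse=True)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["size_bytes", "path"])
        writer.writerows(rows[:top])

largest_files_report(r"C:\Shares\Public", "largest_files.csv")
```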
R2's storage quotas also support file screening. This feature determines what file types users can store in a given folder and can be used to enforce corporate policy. With Windows Server systems today, businesses may elect to restrict certain file types, such as MP3 files, executable files, or whatever, or to allow users to save only certain file types (Word docs and so on), but they must manually scan for the file types they don't want and then act accordingly. With R2's new file screening functionality, administrators can configure templates that determine which file types are allowed, or not allowed.
R2 supports two types of file screening, active screening and passive screening. With active screening, users are prevented from saving blocked file types, and notifications are sent to specific administrators when such an event occurs. With passive screening, users are allowed to store blocked file types, but a notification of the event is still sent to administrators. File screening is nicely implemented, though the user experience--what the end user sees when they try to save a blocked file type--has not been dramatically overhauled, and by default just displays a standard file access denied dialog.
It's also worth noting that file screening doesn't work any particular magic, but rather relies on file extensions to determine whether files can or cannot be stored. This means that sophisticated information workers will be able to bypass file screening by simply renaming files, but as Ralston noted to me, "it's rather hard to imagine that many users are going to intentionally usurp company policy after being warned not to do so." Fair enough. Certainly, the overhead of screening every single file, on the fly, would be prohibitive from a performance standpoint.
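Since the screening decision really is just an extension check plus a notification, it's easy to model. This sketch of active versus passive screening is purely illustrative and assumes a blocked-extension list of my own choosing:

```python
import os

# Simplified model of extension-based file screening. "Active" screening
# blocks the save and notifies an administrator; "passive" screening
# allows the save but still sends the notification. Like FSRM, this looks
# only at the file extension, not the file's contents.

BLOCKED_EXTENSIONS = {".mp3", ".exe", ".avi"}

def screen_save(filename, mode, notify):
    ext = os.path.splitext(filename)[1].lower()
    if ext in BLOCKED_EXTENSIONS:
        notify(f"Blocked file type saved or attempted: {filename}")
        if mode == "active":
            return False          # refuse the save
    return True                   # allow the save

print(screen_save("mixtape.mp3", "active", print))    # notified, save refused
print(screen_save("mixtape.mp3", "passive", print))   # notified, save allowed
print(screen_save("report.doc", "active", print))     # allowed silently
```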
SAN management
You may think of storage area networks (SANs) as a high-end solution designed only for enterprises. But understand that the move from direct-attached storage (DAS) to networked storage is a revolution that is sweeping across all Windows Server customer segments, including small businesses. According to a recent IDC study, 35 percent of small businesses have already moved to network-based storage solutions, and another 40 percent are considering such a move.
With this in mind, R2 includes what Microsoft calls "SAN management for the IT generalist" or "simple SAN." Exposed through a new UI called Storage Manager for SANs (SMfS), this feature represents the first time that SAN provisioning could truly be called simple on the Windows platform. "Some time ago, we brought a number of Windows administrators here to the campus and set them the task of configuring a SAN within 6 hours," Joergensen told me. "At the end of the day, we discovered that none of them were capable of doing that."
Given this evidence, Microsoft set out to make SAN provisioning easier, and SMfS is the result. Aimed at the emerging market of businesses that are moving to, or considering moving to, network-based storage, SMfS makes it easier to set up a new SAN and provision SAN resources. "It's a user interface for provisioning storage," Joergensen said. "It leverages the Virtual Disk Service [VDS] infrastructure. Now, a Windows administrator, using typical Windows tools, can provision storage. The target audience is small-scale SANs built from simplified hardware." Bigger businesses with more complicated SAN needs will likely continue to use the vendor-specific tools that come with those solutions, managed by experienced administrators.
SMfS is an MMC snap-in that lets you discover Fibre Channel- or iSCSI-based SANs, including low-level SAN information called storage array properties. You can create, delete, and expand storage array LUNs (logical unit numbers), which identify storage allotments. You can also allocate LUNs to specific servers on the SAN, and monitor LUN status, health, and allocation. Much of the core SMfS functionality is available via simple wizards.
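To give a sense of the workflow those wizards walk you through, here's a purely conceptual model of LUN provisioning: carve a LUN out of an array's free space, allocate it to a server, and expand it later. None of these classes or names correspond to the real VDS interfaces; they're invented to illustrate the sequence of steps.

```python
# Purely conceptual model of the provisioning workflow SMfS wraps in wizards.
# These classes and names are invented for illustration; the real work goes
# through the Virtual Disk Service (VDS) and the array's hardware provider.

class StorageArray:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb
        self.luns = {}

    def create_lun(self, lun_id, size_gb):
        if size_gb > self.free_gb:
            raise ValueError("not enough free space on the array")
        self.free_gb -= size_gb
        self.luns[lun_id] = {"size_gb": size_gb, "assigned_to": None}

    def assign_lun(self, lun_id, server):
        self.luns[lun_id]["assigned_to"] = server   # make the LUN visible to that server

    def expand_lun(self, lun_id, extra_gb):
        if extra_gb > self.free_gb:
            raise ValueError("not enough free space to expand")
        self.free_gb -= extra_gb
        self.luns[lun_id]["size_gb"] += extra_gb

array = StorageArray("iscsi-array-01", capacity_gb=2000)
array.create_lun("LUN0", 500)           # carve out 500 GB
array.assign_lun("LUN0", "exchange01")  # allocate it to a specific server
array.expand_lun("LUN0", 250)           # grow it later without re-provisioning
print(array.luns, array.free_gb)
```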
On a side note, you may be wondering how SAN and network attached storage (NAS) solutions compare and, in Microsoft's world, compete. NAS is file-based storage, like a file server, and Microsoft's NAS solution, Windows Storage Server 2003 R2, fulfills this need. SANs are essentially block storage and are perceived by the system as a disk; this type of storage solution is ideal for such things as SQL Server and Exchange. On the flip side, NAS solutions tend to be simpler and easier to deploy, and are often made available in appliance-like form factors. There are even low-end NAS devices available today for consumers that are essentially external IDE hard drives that connect to home networks via Ethernet rather than to individual PCs via USB 2.0. Each system essentially solves a different storage problem.
UNIX integration
Thanks to the newly integrated Services for NFS, R2 servers can share files with both Windows- and Unix-based clients. "One goal with R2 was to extend the connectivity of storage into Unix systems," Joergensen said. "If you're familiar with the Services for Unix we had in the past, we now have an improved version of the Services for NFS from that product." Services for NFS (Network File System) is an NFS server and an NFS client, and it's now an integral part of Windows Server 2003 R2. Like other R2 features, it installs as an optional Windows component.
Services for NFS is designed primarily to help businesses migrate from expensive proprietary Unix solutions to more cost-effective systems based on Windows Server. It provides 64-bit support (on x64 R2 versions), better interoperability between Windows SMB (Server Message Block) and NFS systems, and other related functionality.
Active Directory Federation Services (ADFS)
Active Directory Federation Services (ADFS, previously codenamed Trustbridge) provides cross-company, federated identity management services, allowing large corporations to selectively open their infrastructures to trusted partners and customers. Essentially a way to provide identity management and secure collaboration between two corporations, ADFS supports a wide variety of organizational infrastructures using standardized, Web services-based WS-Federation technologies. ADFS will appeal to very large enterprises and governments, and Microsoft tells me that the auto industry, European governments, and insurance companies are among those now evaluating this solution. Because it is aimed at high-end scenarios, ADFS is available only in the Enterprise and Datacenter versions of Windows Server 2003 R2, and not in Standard Edition.
ADFS provides three capabilities: extranet authentication, Web single sign-on, and identity federation services for IIS-based Web applications. ADFS will typically be used between two organizations that have developed a trust relationship, such as a supplier and its customers, or two partners. In Active Directory (AD) parlance, ADFS allows a corporation to establish cross-forest trusts between the infrastructures of two different companies across the Internet.
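I won't pretend to cover the protocol plumbing (WS-Federation, security tokens, certificates), but the core idea is easy to sketch: the account partner's federation service authenticates the user against its own directory and issues a signed claim, and the resource partner trusts that signature instead of maintaining its own account for the user. The Python below is a deliberately stripped-down model of that trust, with a shared HMAC key standing in for the certificate-based token signing ADFS actually uses, and every name in it invented for illustration.

```python
import hashlib
import hmac
import json

# Deliberately stripped-down model of federated sign-on: the account
# partner authenticates the user against its own directory and issues a
# signed claim; the resource partner validates the signature and grants
# access without holding an account for that user.

TRUST_KEY = b"pre-shared-between-the-two-federation-services"

def issue_token(username, directory):
    """Account partner: authenticate locally, then issue a signed claim."""
    if username not in directory:
        raise PermissionError("unknown user")
    claim = json.dumps({"user": username, "org": "supplier.example"})
    signature = hmac.new(TRUST_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim, signature

def accept_token(claim, signature):
    """Resource partner: trust the partner's signature, not a local account."""
    expected = hmac.new(TRUST_KEY, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise PermissionError("token not issued by a trusted partner")
    return json.loads(claim)

token = issue_token("alice", directory={"alice", "bob"})
print(accept_token(*token))   # {'user': 'alice', 'org': 'supplier.example'}
```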
Because it is such a high-end and esoteric feature, I don't feel particularly qualified to give ADFS the write-up it no doubt deserves. However, Microsoft has a lot of information available about ADFS if you'd like to dig deeper, including a video demonstration and a Webcast that steps through ADFS features.
Conclusions
There's little doubt that the major new features in Windows Server 2003 R2--remote server management, ADFS, and storage improvements--are valuable, though not every company will find them equally valuable. But R2 includes far more functionality than these major new features. In part three of my R2 review, I'll examine the dozens of other features Microsoft has added to its next-generation server operating system. I think you'll be surprised by how much there really is.
Part 3 coming soon!