Designing for DC Failover

How to create the best AD site topology possible

Sean Deuby

November 24, 2003

Most administrators know that good Windows 2000 Active Directory (AD) site topology design is almost as important as good domain design. A well-thought-out site topology minimizes AD-related network traffic, ensures that users authenticate through a nearby domain controller (DC), and makes the time needed to replicate an object across the enterprise more predictable.

AD sites also serve another important, if not as well known or straightforward, role: They influence client DC failover, which is the process that a client follows to connect to another DC when the client's current DC fails. A good test of an AD site topology is whether you can pick any location on the network, mark a DC as unavailable, and know that clients in that site will reliably choose the next best available DC.

Why is designing for DC failover so important? A client's DC selection is a major factor in the user's logon time and perceived response time. For example, most companies run logon scripts, and the distance across the network between the authenticating DC and the client greatly influences the logon script's execution speed. Also, consider that both Microsoft Exchange 2000 Server and its clients are heavy users of the AD Global Catalog (GC). As a result, if you make a poor choice when selecting the DC that hosts the GC, your selection will have a noticeable effect on the client's email response time.

Before you begin to design for DC failover, you need to understand how a client selects its DC, known as the DC locator process. When you model DC failover (i.e., pretend the preferred DCs aren't available), you step through the DC locator process to determine what alternate DCs the client will choose. Ideally, when a Windows client can't contact a local (i.e., onsite) DC, it would use site link costs in the AD site topology to determine the next closest site and attempt to contact a DC there. If DCs in that site weren't available, the client would look to the next closest site and try again, looping until it found a DC. Unfortunately, the DC locator process hasn't reached that state yet. In Windows Server 2003 and Win2K, the client requests a list of DCs in its site and domain. If these DCs aren't available, the client requests a list of all DCs in its domain. For information about the Windows 2003 and Win2K DC locator process, see "Win2K Professional Domain-Controller Selection," http://www.winnetmag.com, InstantDoc ID 9180 and "Authentication Topology," March 2003, InstantDoc ID 37935.

Influencing the DC List
Among other records, DCs register site and domain SRV records in DNS. When a client goes through the process of locating a DC, it receives from DNS a list of DCs that the client should attempt to contact. To properly design DC failover, you need to be able to influence the order of the DCs on the list that the client receives from DNS. By influencing this list, you're telling the client what DC to select if the contacted DC isn't available. In almost all cases, DNS orders the list first by DCs in the client's local site and second by all DCs in the client's domain. To retrieve the list order information from a client, enter one of the following commands

nslookup -querytype=srv _ldap._tcp.sitename._sites.dc._msdcs.domain.name

nslookup -querytype=srv _ldap._tcp.dc._msdcs.domain.name

where sitename is the name of the client's site and domain.name is the Fully Qualified Domain Name (FQDN) of the client's domain. These commands emulate what kind of a DC list DNS will return to a client in domain domain.name and site sitename. The first command returns a list of DCs that are available in both the client's domain and site, and the second command returns a list of DCs in the entire domain.
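
For example, for a client in the Spoke1 site of a hypothetical domain named corp.example.com (both names are placeholders), the two queries would look like this:

nslookup -querytype=srv _ldap._tcp.Spoke1._sites.dc._msdcs.corp.example.com

nslookup -querytype=srv _ldap._tcp.dc._msdcs.corp.example.com

The first query returns the SRV records of the Spoke1 DCs only; the second returns the records of every DC in the domain.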

Figure 1 shows a common hub-and-spoke configuration, in which Hub is a company's main location (and WAN circuit center) and Spoke1, Spoke2, and Spoke3 are smaller remote locations. All locations share one domain. As a larger site, Hub contains several DCs; the smaller spokes each have only one or two DCs. Figure 1 also shows the DC list for Client1.

Understanding DC Failover
If a site has two or more DCs, you generally don't have to worry about failover because a client will always choose a DC in its site, as long as one is available. However, under certain circumstances, two or more DCs in a site can become unavailable while the clients are still functional (e.g., a blown data center circuit breaker or failed air conditioning). Because these situations are unlikely, designing for this type of failover isn't cost-effective.

So what happens in a typical site topology when DC failover occurs? If SPOKE1-DC1 is unavailable, the client attempts to query the next DC in the DC list. Remember that the list consists of sitewide DCs and domainwide DCs. Because the Spoke1 site contains only one DC and because that DC fails to respond to the client's Lightweight Directory Access Protocol (LDAP) over UDP pings, the client begins querying the domainwide DCs on the list. The remaining DCs on the list appear in random order, so the next DC that the client queries could be anywhere in the domain. The client will work through this list until a DC responds to its queries.

Querying the domainwide DCs on the list decreases the chance that the client will get the best possible DC because no DC is favored over any other, regardless of how close a particular DC might be to the client. The Windows 2003 and Win2K DC query interval—the interval between queries that the client waits before moving to the next DC on the list—compounds the difficulty of the situation. In Windows NT 4.0, the OS sent these queries immediately with no pause between them, which meant that the fastest-responding DC (presumably the closest) would win the session setup with the client. In Win2K, the client waits 100 milliseconds (ms) between DC queries. In Windows 2003, the client waits 400ms between queries for the first five DCs, then 200ms between the next five, then 100ms between the remaining DCs. In either Windows 2003 or Win2K, this interval lets the client easily pick an inappropriate DC.

Let's use Figure 2 to show how this behavior influences DC selection. Assume the network latency between the client and the DCs in Spoke3 is 150ms. The DCs in the closer Hub site are only 75ms away from the client. Because SPOKE3-DC2 is at the top of the DC list, the client pings it first. In a Win2K network, SPOKE3-DC2 can't respond within the 100ms interval, so the client moves to the next DC on the list and pings HUB-DC2, a more appropriate choice. Before HUB-DC2 can respond, however, SPOKE3-DC2's response, which has a 100ms head start, returns to the client, and the client establishes a session with that DC. In a Windows 2003 network, SPOKE3-DC2 has plenty of time to respond before the 400ms interval expires and the client attempts to contact the next DC in the list. In either configuration, if your client can't find a DC in its own site and you don't manually influence the domainwide DC list, you might not get the most appropriate DC.
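
To put numbers on the Win2K race, here's a rough timeline. It assumes, as the example's outcome implies, that the 150ms and 75ms figures are round-trip times:

t = 0ms: The client pings SPOKE3-DC2.
t = 100ms: The query interval expires with no response, so the client pings HUB-DC2.
t = 150ms: SPOKE3-DC2's response arrives, and the client sets up a session with it.
t = 175ms: HUB-DC2's response arrives, 25ms too late.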

Providing DC Failover Capability
A common misconception is that automatic site coverage (AutoSiteCoverage), which is an integral part of the Windows 2003 and Win2K directory service, will provide failover coverage if no DCs are available in the client's site. With AutoSiteCoverage, DCs in the site nearest to a DC-less site can automatically register themselves into that site. However, these DCs provide coverage only if no DCs are registered in the client's site. Because AutoSiteCoverage doesn't work if a DC exists in the client's site but isn't responding, AutoSiteCoverage doesn't help with DC failover. You can, however, manually force a DC to register itself to provide DC or GC services for another site. To do so, you must add the site names (separated by spaces) to the SiteCoverage registry value of type REG_SZ under the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters registry subkey and perform some additional steps, which I'll describe later, to make the registration work correctly.
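
For example, the following commands, a sketch that uses the reg.exe command-line tool (built into Windows 2003) and the site names from Figure 1, force a DC to cover the Spoke2 site in addition to its own (remember that you'll still need the additional steps described later):

rem Register this DC's site-specific SRV records in the Spoke2 site as well as its own
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v SiteCoverage /t REG_SZ /d "Spoke2"
rem Restart Netlogon so that the DC reregisters its records in DNS
net stop netlogon && net start netlogon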

You can use any of three major techniques to provide DC failover capability for your network. Depending on your needs, you can use these techniques individually or in combination.

Method 1: Selective SRV registration. As I mentioned previously, controlling the contents of the DC list controls the client's DC failover behavior. In our example, the domainwide section of the DC list contains DCs from the distant Spoke2 and Spoke3 sites as well as the closer Hub site. What if you could prevent DNS from adding the Spoke2 and Spoke3 DCs but still add the Hub DCs to the DC list? You can, by preventing the spoke-site DCs from registering their domainwide SRV records. In other words, if the spoke sites don't register their domainwide SRV records (e.g., _ldap._tcp.dc._msdcs.domain.name), DNS will exclude them from the domainwide section of the DC list. As a result, the domainwide section will contain only the Hub-site DCs, as Figure 3 shows. However, this technique doesn't prevent the spoke-site DCs from registering site-specific SRV records. Because the spoke-site DCs register their site-specific SRV records, when a spoke-site client requests a DC list, the spoke-site DCs appear in the sitewide DC section of the list.

For details regarding how to prevent DCs from registering certain SRV records in DNS, see the Microsoft Active Directory Branch Office Guide Series: Planning Guide, Chapter 2, "Structural Planning for Branch Office Environments" (http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/ad/windows2000/deploy/adguide/default.asp).
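
The guide's technique centers on the Netlogon DnsAvoidRegisterRecords registry value, a REG_MULTI_SZ list of mnemonics that name the records a DC should skip registering. As a sketch, on each spoke-site DC, a command such as the following suppresses the domainwide record mentioned above (Dc is the mnemonic for _ldap._tcp.dc._msdcs.domain.name; the guide lists the full set of mnemonics):

rem Prevent this DC from registering the domainwide _ldap._tcp.dc._msdcs SRV record
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v DnsAvoidRegisterRecords /t REG_MULTI_SZ /d "Dc"
rem Restart Netlogon, then delete the DC's existing domainwide record from the DNS zone
net stop netlogon && net start netlogon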

Method 2: DNS priority. Another way to ensure that spoke sites fail back to the hub rather than to other spokes is to manipulate the DNS priority of a DC's SRV record. A component of the SRV record, the priority is an arbitrary number, and a lower number equates to a higher priority. By default, a DC's DNS priority is zero. All other factors being equal, DCs with a lower priority number will appear higher on the DC list than DCs with a higher priority number. As a result, you can use an SRV record's priority field to influence the DC list.

Figure 4 shows our hub-and-spoke example with SRV priorities added. The Hub site DCs have a high priority of 10, while the Spoke1, Spoke2, and Spoke3 sites have a lower priority of 20. Because these priorities influence the DC list, the high-priority Hub site DCs always appear ahead of the low-priority Spoke2 and Spoke3 site DCs. Although the DC in the client's site (Spoke1) is also a low priority, the Spoke1 DC appears ahead of the high-priority Hub DCs in the DC list because the DCs in a client's site always appear before all other DCs in the list. To control the priority field of a DC's SRV record, you must add the LdapSrvPriority registry value of type REG_DWORD to the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters registry subkey. Note that all DCs in a site should have the same priority to ensure that they receive the same treatment in the DC list.
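
For example, to publish a spoke DC's SRV records with the priority of 20 that Figure 4 shows, you might run a command like this on that DC (a sketch; restarting Netlogon makes the DC reregister its records with the new priority):

rem Publish this DC's SRV records at priority 20 (a lower priority than the Hub DCs' 10)
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v LdapSrvPriority /t REG_DWORD /d 20
net stop netlogon && net start netlogon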

Method 3: Sister-site forced coverage. The sister-site method, which you can combine with the previous two methods, lets you create a two-stage DC failover for more complex topologies. For example, as Figure 5 shows, the Spoke1 and Spoke3 sites are physically close to each other and connected by a high-bandwidth WAN circuit. A low-bandwidth circuit connects the Hub site to the Spoke3 site. (For simplicity's sake, I've left the Spoke2 site out of the example, but I've left its DC in the DC list.) This topology is common for US companies operating in the Asia Pacific region, where two nearby locations will connect to each other over a high-bandwidth circuit but connect to North America over a relatively low-bandwidth circuit.

In this situation, failover needs to occur to a nearby sister site first and to the hub site in the United States only if the sister site is unavailable. To design this type of failover, use the SiteCoverage registry value to assign the Spoke3 DCs to the Spoke1 site in addition to their own. Before you begin, you must determine how you'll prevent clients from choosing a covering DC (which is at a different location and presumably slower to respond) as often as they choose the local DC; from the client's point of view, both are in the client's site and are therefore equally desirable. The answer is to use DNS priority to make the sister site's DCs slightly less desirable than the local DC: set the DNS priority of the Spoke1 DC to 15 and the DNS priority of the sister-site Spoke3 DCs to 20. This configuration ensures that the local DC will always appear first in the onsite section of the DC list.
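
Putting the pieces together, the sister-site design comes down to a few registry settings (a sketch, using the site names and priorities from the Figure 5 example):

rem On SPOKE1-DC1: the local DC gets the higher priority (lower value)
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v LdapSrvPriority /t REG_DWORD /d 15

rem On SPOKE3-DC1 and SPOKE3-DC2: cover the sister site, but at a lower priority
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v SiteCoverage /t REG_SZ /d "Spoke1"
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v LdapSrvPriority /t REG_DWORD /d 20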

Let's walk through a DC failover scenario to test the configuration. When all DCs are available, Client1 will always choose SPOKE1-DC1 because it's on site and has the highest priority. If SPOKE1-DC1 isn't available, Client1 will next choose either SPOKE3-DC1 or SPOKE3-DC2 because the client views them as being on site; their lower priority doesn't matter because the higher-priority SPOKE1-DC1 is unavailable. If both SPOKE1-DC1 and the Spoke3 DCs are unavailable, Client1 will choose one of the Hub site DCs because they have the lowest priority value, and therefore the highest priority, of the remaining domainwide DCs. And if the Hub site's DCs are unavailable as well, I think Client1 will have bigger problems to worry about.

This sister site method provides a robust and multilayered DC failover capability for many kinds of site topologies. The downside to this design, however, is that it's complex to configure, complicated to maintain, and difficult to explain.

Addressing Other Considerations
In addition to the methods I've described previously, you need to consider other factors that can influence DC selection when you're designing DC failover. The AutoSiteCoverage and GcSiteCoverage registry entries and the DNS weight of a DC's SRV record can all affect DC selection performance.

AutoSiteCoverage. If you create DC-less sites to support site-aware applications such as DFS, consider disabling AutoSiteCoverage, a registry entry of type REG_DWORD, by setting it to 0 on your spoke DCs. (This entry is located under the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters registry subkey.) This recommendation is based on the premise that authenticating from a spoke to the hub is almost always better than authenticating from one spoke to another spoke (the exception being a sister-site scenario). If you disable the AutoSiteCoverage setting in the spokes only, the hub's AutoSiteCoverage will cover the DC-less spoke and prevent another spoke from trying to cover it. If you've configured a sister-site network topology, also disable AutoSiteCoverage on the hub's DCs and manually configure site coverage where necessary.
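
A minimal sketch of the spoke-DC setting:

rem Stop this spoke DC from automatically covering nearby DC-less sites
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v AutoSiteCoverage /t REG_DWORD /d 0
net stop netlogon && net start netlogon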

GcSiteCoverage. Just as you can use the SiteCoverage registry entry to force a DC to cover a site, you can add site names to the GcSiteCoverage registry entry of type REG_SZ to ensure that clients fail over to the appropriate GC server. This entry is located under the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters registry subkey.
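
For example, to force a GC in the Hub site to also serve a GC-less Spoke2 site (a sketch, reusing the example's site names):

rem Register this GC's site-specific GC SRV records in the Spoke2 site as well
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v GcSiteCoverage /t REG_SZ /d "Spoke2"
net stop netlogon && net start netlogon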

DNS weight. Another field in a DC's SRV record that you can use to influence DC selection is the DNS weight. Weight orders the DC list within a given priority: a DC with a higher weight is more likely to appear ahead of DCs that have the same priority but a lower weight. The default weight is 100. For example, if HUB-DC2 were significantly more powerful than HUB-DC1 and HUB-DC3, you might give HUB-DC2 a weight of 150 so that it would usually appear at the top of the Hub DCs in the list. Be aware that unless you have widely varying types of hardware for your DCs, DNS weighting adds yet another layer of manual administration to an already complicated site topology, with little benefit. Just because you can doesn't mean you should.
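
As with priority, you set the weight through a Netlogon registry value, LdapSrvWeight (REG_DWORD), under the same Parameters subkey. For the HUB-DC2 example:

rem Weight this DC more heavily than its equal-priority peers (the default weight is 100)
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v LdapSrvWeight /t REG_DWORD /d 150
net stop netlogon && net start netlogon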

Taking Control of Your DCs
A basic AD site topology provides many benefits, but a consistent and predictable client choice of an alternate DC isn't one of them. Until the Windows server OS has a DC locator process that uses site costing to help with DC selection, you'll have to establish the DC failover order yourself. Once you understand how to use the techniques in this article to influence the list of DCs, you can be sure your users will continue to have the best available service during any DC outages.
