
Windows NT: A Key Component in IT Reorganization

CommNet Cellular Implements a Service-Based Architecture

CommNet Cellular, a regional cellular phone company based in Denver, Colorado, provides cellular phone service across an eight-state area in the Midwest and western United States. In April of 1994, the Information Systems department consisted of four permanent employees and 18 contractors. This situation came from the technological demands that accompany high growth in a fiercely competitive business, a poorly managed department, and the lack of a coherent Information Technology (IT) strategy. Today, this same department has a full-time staff of 20, a strong IT strategy, and standard interoperability from the desktop to the servers, and it is actively developing client/server applications to capitalize on a technology infrastructure that spans eight states.

On the technology side, the company's battle scars 18 months ago included outdated equipment, a tangled web of undocumented network wiring, a wild profusion of desktop systems and software, eight technology platforms, consistent interoperability problems, and production systems crashing on a regular basis. On the business side, the technology problems were manifest in many of the usual ways, including slow responses to customer calls, inaccurate reporting, a frustrated user community, and permanent crisis-mode operation for the IT staff.

What started out as a short-term project to correct production-system problems quickly evolved into a complete re-engineering of the corporate IT strategy. With a forward-thinking CIO and the blessing of the corporate officers, a solid IT plan emerged that focused on several major goals:

  • Physical network cleanup and upgrade
  • Identification of enterprise-wide IT services
  • Reduction of platform complexity and networking protocols
  • Platform standards for hardware, software, and administration
  • A strong commitment to training and personal productivity

A Service-Based Information Architecture

In a rare opportunity, the system engineers were given free rein to start from the beginning, with no mandate to accommodate decisions made by a previous regime. While there are many methods for creating an information architecture, in this project the design began by posing the following five questions:

  1. What are the corporate computing services?
  2. What are the service-provider platforms?
  3. Which platforms provide which services?
  4. How are the services accessed?
  5. What tools are needed to support access?

As the answers to these five questions emerged, a service-based architecture began to appear.

Several up-front decisions guided the design process. The design team, in which I participated, chose Ethernet as the communications medium--serial connections were history. CommNet Cellular would ultimately connect servers with fast Ethernet and install hubs and routers to segment traffic and ensure reliable wide-area network (WAN) connectivity. We selected TCP/IP as the standard networking protocol across all target platforms, along with Simple Mail Transfer Protocol (SMTP) for enterprise email delivery and Simple Network Management Protocol (SNMP) for network management. And we decided to reduce the number of supported platforms and standardize connectivity from the desktop to the servers.

When the design process began, file sharing, remote access, printing, and application access were inconvenient and unreliable at best. In an all-too-familiar scenario, large numbers of dumb terminals forced the whole company to rely on character-based email. Server logins and file transfer were managed by several terminal-emulation packages, some graphical and some character-based, using outdated software and incompatible protocols. A given printer could not be accessed from every corporate application and desktop, and the same printer had different names depending on the platform it was connected to.

Clearly, computing services had to be distributed in a consistent way across the network, rather than restricted by platform or application. Data needed to be easily transferable from the desktop to the servers and back again. The final architecture had to present a consistent, logical view of the network with a standard graphical interface to every workstation.

A strong commitment to standards guided the design process. In a parallel effort, a standards group composed of systems, applications, and support personnel worked hard for eight months to create standards for hardware configuration, software platforms, naming conventions, and platform administration. The resulting standards smoothed the implementation, from the assignment of subnetworks and IP addresses to usernames, computer and printer names, file and device sharing, and graphical connectivity.

A final goal was to achieve 95% user satisfaction with the resulting implementation. Non-conforming legacy systems, with their inherent restrictions and compromises, had to be accommodated until CommNet Cellular could implement an architecture-compliant replacement. Legacy systems were not allowed to negatively impact either the design or the implementation of the final service-based architecture.

Corporate Computing Services
Because CommNet Cellular is a service-based company, it was easy to begin identifying computing services. We defined a corporate computing service as a feature or function required by the majority of users in the company for corporate communications or platform-independent access to data.

The services making the final cut, shown in Figure 1, are typical for a distributed computing environment. They are meant to be available from every desktop, independent of the platform they are located on. With the corporate network expanding to an eight-state WAN and hundreds of remote business partners, the ability to move data easily and reliably across the network was paramount.

Service Provider Platforms
Next, the team evaluated the existing server platforms to see how well they would fulfill the service requirements. Three server platforms made the final cut: Digital Equipment's OpenVMS, IBM's AIX, and Microsoft's Windows NT. Existing VAX/VMS production databases and financial applications were migrated to OpenVMS, while AIX continued to support real-time telephone-switch monitoring and maintenance. Windows NT was selected as the desktop server.

Windows NT Server was chosen because of its internetworking capabilities, security features, and network monitoring and management tools. With its ability to function as a gateway for Novell's NetWare, NT provided access to NetWare resources during the migration. Via TCP/IP and Data Link Control (DLC) protocols, NT supported network printing services for all users in the company. Because CommNet Cellular has a large number of traveling employees, the Remote Access Service (RAS) running the Point-to-Point Protocol (PPP) was considered a bonus.

On the workstation side, all PCs were upgraded to Windows for Workgroups (WFW) 3.11 with the Microsoft 32-bit TCP/IP stack and the Reflection TCP/IP package. Microsoft Office Professional was selected as the application suite that best met the business needs of the majority of users. Custom desktop applications were strongly discouraged and were prohibited outright where a business task could be accomplished with standard desktop tools. After selecting the service providers, great care was taken to standardize service definition and graphical access for corporate users across all platforms.

Which Platforms Provide Which Services?
With corporate computing services and service-provider platforms identified, we needed a model for providing distributed services that met CommNet's requirements. At this point, we evaluated the advantages and disadvantages of three distributed models, as shown in Figures 2, 3, and 4.

Model 1 consists of a smart server and dumb workstations, which largely mimics mainframe-style computing (see Figure 2). In this model, the server provides all computing services, has high performance, reliability, and redundancy requirements, and supports many concurrent network connections. However, heavy reliance on the server introduces a single point of failure that directly impacts desktop productivity. On the workstation side, because applications run over the network, each workstation has high connectivity requirements, little access to local tools, minimal local storage demands, little or no sharing of local resources, and little local control over the desktop computing environment.

Model 2 is a mirror image of the first, a dumb server coupled with smart workstations, very much akin to peer-to-peer computing (see Figure 3). In this model, computing resources are located primarily on the desktop. Because the server provides few resources, the server has low performance and reliability requirements, few network connections, and minimal central control and administration tasks. On the workstation side, all applications are local, which gives users great control over their computing environment. This local flexibility translates into a need for high-performance desktop systems, high local sharing, and many local network connections. In this model, users bear most of the responsibility for creating and maintaining their network computing environment.

Model 3 is a compromise between Model 1 and Model 2, where computing resources are allocated to servers and workstations based on how often and the manner in which they are used (see Figure 4). In this scenario, the server provides performance-intensive or widely used applications and services that require carefully managed access controls. On the workstation side, each desktop has daily-use applications, the ability to share and control access to local resources, and moderate control over the computing environment. Compared to the first model, the server has more moderate performance requirements and less network traffic. This model permits distributed control, allows great flexibility in how services are managed, and carries moderate user-support demands.

The last model is the one chosen for CommNet Cellular's distributed service architecture. To complete the design, we created a set of guidelines for determining where to place resources and services. With three server platforms and a target of 350 clients, we knew guidelines would be important when it was time to implement the design.

What Goes Where and Why?
As in all distributed computing environments, there are no hard and fast rules about where specific resources should be located or how the workload should be divided. The server and workstation guidelines shown in Table 1 provide a metric against which to evaluate where services should be located on the network. When a service meets three or more criteria in either column, with no unusual parameters, that's where it normally belongs.

Production server decisions were easy. The applications running on OpenVMS and AIX met the server resource criteria, so no changes were planned. The more difficult choices came while deciding where the desktop applications should be located, because they run as easily from an NT Server as from an NT Workstation.

Server Resource Criteria
24-hour availability
High accessibility
High security requirements
Corporate use
Critical business function
High connectivity
Heavy resource demands
Permanent solution
Standard application/tool
Backup mandatory

Workstation Resource Criteria
Workday availability
Limited access
Limited or no security
Project or team use
Non-critical business function
Limited resource demands
Limited lifetime
Special application/tool
Limited usage

TABLE 1: Server vs. Workstation Resources
The desktop team argued that the virus checker and the office suite should run on the desktop servers, rather than on the workstations, and they had many valid reasons for their position: ease of installation, configuration, and upgrades; ease of troubleshooting; supporting only a few copies instead of hundreds; centralized software-license management; and a great deal of control over the software used daily by all users in the company.

Using the server and workstation guidelines, it was clear that the virus checker and the office suite met more criteria for the workstation than for the server. These applications matched three server criteria--corporate use, permanent solution, and standard application or tool--while they matched five workstation criteria--workday availability, limited or no security, project or team use, non-critical business function, and limited resource demands. Thus, the virus checker and the office suite ended up on the desktop.
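
For readers who prefer code to prose, the placement rule reduces to a simple tally. The Python sketch below is purely illustrative--the criteria names mirror Table 1, but the scoring function and the office-suite example are my rendering of the team's process, not a tool CommNet Cellular actually used.

    # Illustrative sketch of the Table 1 placement rule: tally how many
    # criteria a candidate service matches in each column, and place the
    # service in the column where it scores three or more (the higher
    # score wins when both columns qualify).

    SERVER_CRITERIA = {
        "24-hour availability", "high accessibility",
        "high security requirements", "corporate use",
        "critical business function", "high connectivity",
        "heavy resource demands", "permanent solution",
        "standard application/tool", "backup mandatory",
    }

    WORKSTATION_CRITERIA = {
        "workday availability", "limited access", "limited or no security",
        "project or team use", "non-critical business function",
        "limited resource demands", "limited lifetime",
        "special application/tool", "limited usage",
    }

    def place_service(name, matched):
        """Suggest a home for a service, given the criteria it matches."""
        server_score = len(matched & SERVER_CRITERIA)
        workstation_score = len(matched & WORKSTATION_CRITERIA)
        if server_score >= 3 and server_score > workstation_score:
            verdict = "server"
        elif workstation_score >= 3:
            verdict = "workstation"
        else:
            verdict = "judgment call"
        print(f"{name}: server={server_score}, "
              f"workstation={workstation_score} -> {verdict}")

    # The office-suite example from the text: three server matches versus
    # five workstation matches, so it lands on the desktop.
    place_service("office suite", {
        "corporate use", "permanent solution", "standard application/tool",
        "workday availability", "limited or no security",
        "project or team use", "non-critical business function",
        "limited resource demands",
    })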

Using this process repeatedly, we were able to allocate business-critical applications, printing, file sharing, remote access, software installation kits, email, and Internet access across the final four platforms--the three servers plus the Windows desktop.

Windows NT Domain and Security Model
As we began our service-based implementation, we had one final decision to make: What domain model should we use for the Windows NT environment? Because NT Server would provide services for all the client workstations in the company, a single domain model made the most sense. We installed two NT Servers, a Primary Domain Controller and a Backup Domain Controller, to provide redundancy and share the task of user authentication.

A global group was defined for each department in the company, and each group contained a user account for each member of the department. The department name was used as both the NT group name and the workgroup name on the client workstations, and each username doubled as the client host name. This resulted in a user-friendly, consistent interface for workstations, end users, and the administrator charged with managing the distributed environment (see Figure 5).
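
The convention makes account generation almost mechanical, as the short Python sketch below illustrates. The department and user names are hypothetical examples, not CommNet Cellular's actual accounts.

    # Sketch of the naming convention: the department name doubles as the
    # NT global group and the client workgroup, and each username doubles
    # as the client host name.

    departments = {
        "Billing": ["jsmith", "mjones"],
        "Engineering": ["wpalmer", "klee"],
    }

    for dept, users in departments.items():
        print(f"NT global group / client workgroup: {dept}")
        for user in users:
            # One domain account per department member; the client host
            # name matches the username.
            print(f"  user account: {user} -> client host name: {user}")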

On the client side, we used the Windows for Workgroups administration tool to disable the workgroup logon, force a validated logon to NT, and disable all client-side password caching. When password caching is disabled, users are prohibited from saving Microsoft Mail passwords. This precaution keeps email and server files much more secure.
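
An administrator could audit clients for conformance with a script along the lines of the Python sketch below. Everything about it is an assumption for illustration--the file name, the [network] section, and the key names are placeholders, since WFW 3.11 keeps its administrator-controlled security settings in its own control file (described next).

    # A hedged audit sketch: check that a client's security-related
    # settings match policy. The INI file name, section, and key names
    # are illustrative placeholders, not a verified WFW layout.

    from configparser import ConfigParser

    EXPECTED = {
        "passwordcaching": "no",  # no client-side password caching
        "autologon": "no",        # force an explicit, validated logon
    }

    def audit_client(path="SYSTEM.INI"):
        parser = ConfigParser(strict=False, allow_no_value=True)
        parser.read(path)
        # Section names are case-sensitive in configparser, so match loosely.
        section = next((s for s in parser.sections()
                        if s.lower() == "network"), None)
        settings = parser[section] if section else {}
        for key, want in EXPECTED.items():
            have = settings.get(key, "<missing>")
            status = "ok" if str(have).lower() == want else "NONCONFORMING"
            print(f"{key}: want {want}, have {have} -> {status}")

    audit_client()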

The security control file is stored on one of the NT Servers and is not accessible by the users. This central location ensures that all workstations conform to the security settings, and it also allows the network administrator to update a single file to control the security environment for all desktop systems.

Windows NT Networking
Several networking protocols were loaded on each server: NetBEUI for workstation communications and Windows for Workgroups browsing, TCP/IP for cross-platform communications, and IPX for NetWare Gateway functionality, along with the Windows Internet Naming Service (WINS) for NetBEUI-to-TCP/IP name-address translation. With WINS enabled, workstations can browse by workstation name across TCP/IP subnetworks. Because the existing server platforms all support TCP/IP, an administrator on Windows NT can Telnet and FTP to and from any server platform. A decision was made to assign fixed TCP/IP addresses to workstations, and testing is in progress to evaluate the use of Dynamic Host Configuration Protocol (DHCP) for portables in the field.

On the client side, the Microsoft TCP/IP stack was loaded on top of Windows for Workgroups, along with Reflection TCP/IP software from Walker Richer & Quinn. The Reflection package provides a friendly graphical interface for Telnet and FTP. From a client workstation, a user can connect to any server or workstation in the company, which allows information to flow from the desktop to any other desktop or server on the corporate network, including remote locations accessed over the WAN link.
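
The kind of desktop-to-server transfer this setup makes routine can be sketched in a few lines of Python using the standard ftplib module; the host name, credentials, and file name below are hypothetical.

    # Minimal FTP transfer sketch: push a local file to a corporate
    # server. Host, credentials, and file name are hypothetical.

    from ftplib import FTP

    def push_report(host="vms1", user="jsmith", password="secret",
                    filename="report.txt"):
        """Upload a local file to a corporate server over FTP."""
        with FTP(host) as ftp:
            ftp.login(user=user, passwd=password)
            with open(filename, "rb") as f:
                ftp.storbinary(f"STOR {filename}", f)
            ftp.retrlines("LIST")  # list the directory to confirm arrival

    push_report()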

The domain controllers provide WINS name resolution, RAS for dial-up administration, TCP/IP, and DLC network printing. In addition, the NT Servers provide departmental and individual user directories with secured access, file-sharing services, software installation kits for the desktop team, standard drag-and-drop functionality, and a standard workstation configuration for clients, including Windows for Workgroups, TCP/IP, Reflection, and Microsoft Office Professional.

Email and Scheduling
Migration to a new LAN environment is always complicated by the fact that current and new email packages must coexist until all users have the same tools on the desktop. A decision was made to make Microsoft Mail the corporate standard, partly because it is included in Windows for Workgroups and partly because of its friendly user interface.

To maintain uninterrupted email services, an SMTP gateway was installed to interface Microsoft Mail with SMTP mail on the UNIX platforms. A TCP/IP package on VMS allowed VMS mail to ride the same transport to UNIX or Windows systems. The benefits of uninterrupted email justified the additional work of maintaining multiple mailing lists and forwarding information on all server platforms until the migration to the new system was complete.
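
In outline, any client that can hand a message to the gateway can reach any mailbox in the company, regardless of platform. The Python sketch below shows such a hand-off; the gateway host and the addresses are hypothetical.

    # Minimal SMTP hand-off sketch: a message relayed through the
    # corporate gateway is deliverable to Microsoft Mail, UNIX, or VMS
    # recipients alike. Host and addresses are hypothetical.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "jsmith@commnet.example"
    msg["To"] = "mjones@commnet.example"
    msg["Subject"] = "Migration status"
    msg.set_content("Routed via the SMTP gateway, platform unseen.")

    with smtplib.SMTP("smtp-gateway.commnet.example") as server:
        server.send_message(msg)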

Microsoft Mail and the Schedule+ utilities were probably the biggest hit with end users, who had previously used VMS character-based mail. The corporate post office, stored on the Primary Domain Controller, currently supports 300 interactive users.

Progress Report
One year later, the CommNet Cellular IS team is putting the finishing touches on the new service-based network. Team members have successfully installed more than 300 client systems at corporate headquarters in Denver and in six of the states in the eight-state service area. The current remote sites are in Montana, Wyoming, Utah, North Dakota, South Dakota, and Iowa. With fewer than 30 clients remaining, the end is in sight.

There have been a few unexpected problems with the technology, but no showstoppers. The network team decided not to put a Windows NT system in the remote workgroups, which introduced a small but livable problem. As William Palmer, network administrator, comments, "Unfortunately, the Windows for Workgroups browser is not WAN-aware, so our remote users have to know the exact connections they need to make. They are unable to browse by name across our WAN connection." According to Microsoft, this is not a bug in the Workgroups browser, but rather a limitation of the browser implementation.

Palmer also points out a problem with NT Server lockups. "We have had server lockups that appear to be caused by memory leaks in Windows NT Server. These problems have been temporarily accommodated by increasing the size of the page files. Microsoft is aware of the problem and is supposed to be working on a fix." This is yet another flavor of the memory-leak problems Windows NT has suffered from since its initial release.

According to CommNet Cellular's CIO, Homer Hoe, the man who spearheaded the network architecture, the new technology "couldn't be better. It has exceeded our expectations in the first year in terms of productivity boosts and user expectations. Don't make the mistake of underestimating the power of resource sharing. For a company that used to go through cases of floppy disks, we have virtually ceased buying them. Our users now understand that if the data exists anywhere on the network, they can get to it, whether it is on a desktop system at corporate, a server, or a system at any of our remote sites. We don't print things, fax them, and print them [again] anymore. We either share the documents or [email] them."
