That Was Then ... NT's Early Days

David Thompson and Mark Lucovsky talk to Paul Thurrott about NT's origins.

Paul Thurrott

March 24, 2003

7 Min Read

David Thompson, Corporate Vice President of the Windows Server Product Group at Microsoft, joined the company in 1990 and led an advanced development group in the company's LAN Manager for OS/2 project before joining the Windows NT team later that year. There, Thompson guided the development of NT's networking subsystem, ensuring that the product would work not just with Microsoft products but with the outside world.

Mark Lucovsky, Distinguished Engineer and Windows Server Architect at Microsoft, joined the company with the original wave of former Digital Equipment Corporation (DEC) employees that accompanied NT architect Dave Cutler in 1988. Lucovsky is widely hailed for his technical acumen and his early efforts to change NT from an OS/2-based system to one that ran 32-bit Windows applications.

As the buzz about Windows Server 2003 heated up, I had a chance to talk with Thompson and Lucovsky about where it all began: NT. The two provided insight into the origins of the Windows architecture and its development process. (For additional comments from Thompson and Lucovsky, see "Windows Server 2003: The Road To Gold" at http://www.itprotoday.com/article/reviews/windows-server-2003-the-road-to-gold-part-one-the-early-years.aspx.)

Paul Thurrott (PT): What were NT's earliest days like?

Mark Lucovsky (ML): We came together as a group in November 1988. The first thing we did was get development machines. We bought top-of-the-line 386/25s with 110MB hard disks and 13MB of RAM. They were ungodly expensive. [Laughs] Then, we started writing the design documentation in [Microsoft] Word. Finally, after 2 weeks, it was time to start writing some code. Originally, we were targeting NT for the Intel i860, a RISC processor that was horribly behind schedule. Because we didn't have any i860 machines in-house to test on, we used an i860 simulator (code-named N-Ten). That's why we called it NT, because it worked on the N-Ten. We checked the first code pieces in around mid-December 1988, and by January had a very basic system booting on the simulator.

PT: The i860 was pretty short-lived.

ML: Yeah, we ditched the i860 and went to the MIPS R3000 chip. Our architecture really paid off. We had designed NT to be portable, and we proved it would work almost immediately when we moved to MIPS. We made the change without a lot of pain. Then Microsoft added a bunch more guys to the project, and we started up the Intel i386 version. We stayed away from the 386 for a while to avoid getting sucked into the architecture.

PT: When did the decision to make NT a Windows version occur?

ML: In May 1990, when Windows 3.0 was starting to look good, we started looking at it. We said, "What if instead of OS/2, we did a 32-bit version of Windows?" We looked at the 16-bit Windows APIs and figured out what it would take to stretch them to 32-bit. We spent a month and a half prepping the API set, and then presented it to a 100-person design-preview group to see what they thought. The key characteristic was that, though it was a new API, it looked and acted just like Win16. We made it possible to move 16-bit applications to NT very easily. They loved it when they saw how easy it would be. It was basically Windows on steroids, not OS/2.

David Thompson (DT): In September 1990, NT changed from being an internal project to being a real product. That's when I got involved. I was in the LAN Manager group, and when NT became Windows NT, we just threw the switch. The NT team went from 28 developers to about 300 people. We had our first real product plan.

PT: The stories about the change from OS/2 to Windows always suggest that Microsoft worked secretly to usurp OS/2, but the Microsoft programmers I've spoken to say this wasn't the case at all.

ML: We had to get Independent Software Vendor [ISV] approval and executive approval before we started the port. We did an ISV preview with IBM; I had this deck of about 20 slides, and we said, "Look, this is what we're going to do." At first, they thought Win32 was a fancy name for OS/2. Then you could just see it on their faces: "Wait a second, this isn't OS/2." I explained the issue of moving 16-bit applications to 32 bits and that it was a natural evolution. OS/2 was a completely different API set. I said, "Let's make it more similar to Win16. That way, the new APIs will be culturally compatible."

PT: How has NT's modular architecture paid off for Microsoft over the years?

DT: Our core architecture is so solid that we were able to take NT from 386/25s in 1990 to today's systems: embedded devices, 64-way, 64-bit multiprocessor machines, and $1,000 scale-out server blades. We've been able to deliver a whole array of services.

ML: With the move to Windows, because of NT's subsystem model with the kernel decoupled from application environments like POSIX and Win32, we didn't need to change the kernel or start a new programming effort. The deep guts of the scheduler didn't have to change. Architecturally, we had designed this system for multiprocessor use back in 1988. We had preemptive multitasking and processor active masks built into the NT kernel. But we didn't have multiprocessor hardware until maybe 1991 or 1992.

PT: Obviously, the Windows development process has matured over the years. What was it like in the early days?

DT: Originally, we had a certain time of day that we could check code in. After that, we threw the switch and built the new system. Eventually, we grew the team to 85 people and serialized the process for more control. Dave Cutler—whom we all worked for—ran the build lab for about a week and required people to personally write their check-in requests on a whiteboard in the lab. One day I accepted 85 check-ins, the most we had ever had to that point. Now we can take in over 1000 check-ins every day. It's a completely different scale; the whiteboard is electronic now (Web-based, actually).

PT: Tell me about dogfooding.

DT: One of the things we've always done is self-hosting, or dogfooding. In the early days, the key was to get the system to where we could use it on our desktops, and use it to build the NT system. When you're forced to use the system yourself, you see bugs and performance issues. You go and find the person responsible for the problem and ask him or her to fix it.

PT: How do you handle the complexity of maintaining all the Windows versions that need to be supported?

DT: The servicing infrastructure has really grown. The biggest change is that we've really extended the time that we service our products. When we ship a server product, people use it for 10 years, and our volume servicing lasts for 7 years. Separating the client and the server releases during the development of Windows XP was important. They aren't divergent product lines, just different products that build off each other. So the next client release will be based on Windows Server 2003's code base.

PT: What about hotfixes and service packs?

DT: Our work in rapidly addressing security vulnerabilities means that we now aggressively issue hotfixes when we can. Service packs used to be flexible, a way that we could deliver features as well as fixes. But customers made it clear that they wanted bug fixes only. That leads to an interesting question, though: What exactly is a bug? Is a missing feature a bug? Customers often have different views themselves. But NT 4.0 Service Pack 3 [SP3] was the end of offering major new features in service packs. After that, we went to Generally Deployable Releases [GDRs]. Service packs are called GDRs in-house. Everyone can and should install them, and they shouldn't break applications. We're careful about them and do the right kind of testing.

PT: Windows 2000's move away from the NT name was somewhat controversial. How did that come about?

DT: They pulled over a guy from the Windows marketing team to NT marketing, and he said we should use Windows everywhere. We were all uncomfortable with the name change at first because NT had such a solid reputation. But because of the reliability push with Windows 2000, people started talking about how much better Windows 2000 was than "that old NT stuff," even though it was the same architecture. But the most fortuitous thing was splitting the server and client. That let us focus on server customers, who want it rock solid rather than right now. Desktop software has to ship in sync with sales cycles, but there's no Christmas rush with servers.

About the Author

Paul Thurrott

Paul Thurrott is senior technical analyst for Windows IT Pro. He writes the SuperSite for Windows, a weekly editorial for Windows IT Pro UPDATE, and a daily Windows news and information newsletter called WinInfo Daily UPDATE.

