Microsoft Releases Hyper-V Server Virtualization Solution
Hyper-V is here, but before you run out and tell the CIO to virtualize her servers, pay attention to Microsoft's own experience.
July 10, 2008
This week, Microsoft announced the release-to-manufacturing (RTM) version of Hyper-V, a key feature of Windows Server 2008. Hyper-V in Server 2008 provides support for server virtualization, which offers organizations the potential to save a boatload on hardware, energy, and overall costs. (To learn more, see Windows Server 2008.)
Two of Windows IT Pro's heavy hitters recently offered their insights on Hyper-V:
"Hyper-V Hits RTM" by Paul Thurrott InstantDoc ID 99609
"A First Look at Windows Server 2008 Hyper V" by Michael Otey InstantDoc ID 97857
And we've got more about Server 2008, Hyper-V, and virtualization coming down the pipeline.
For now, though, it's sort of entertaining (and somewhat enlightening) to look at how Microsoft struggled with virtualizing its own environment. The Microsoft article "Virtualization Strategy Provides Tools, Processes, and Compliance Capabilities to Enhance Business Support and Drive Adoption," at http://technet.microsoft.com/en-us/library/cc713312(TechNet.10).aspx, describes Microsoft's in-house experience.
In a nutshell, Microsoft was running out of server space in its data centers. The article notes, "The goal for Microsoft IT was to have 25 percent of both production servers and Microsoft IT–managed lab servers running as virtual machines by June 2008."
So Microsoft created a "RightSizing" initiative, "identifying proposed server purchases that are candidates for virtualization, and redirecting those purchases to an offering of a virtual machine platform" and a "Compute Utility" strategy, "to remove the concept of server ownership for business groups and replace it with a concept of purchasing computing capacity."
Microsoft used benchmarks from the Standard Performance Evaluation Corporation (SPEC) to assign each server a compute-unit rating as a measure of its computing capacity. Of course, it's a little more involved than that, so check out the SPEC website at http://www.spec.org if you'd like to learn more.
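The article doesn't spell out the math, but the basic idea is to normalize each server's benchmark score against a reference machine. Here's a minimal sketch in Python; the baseline value and the compute_units helper are my own illustration, not Microsoft's actual formula.

```python
# Hypothetical sketch: express a server's SPEC benchmark score as
# "compute units" relative to a chosen baseline machine. The baseline
# score and function name are illustrative, not Microsoft's method.

BASELINE_SPEC_SCORE = 10.0  # assumed score of a reference server

def compute_units(spec_score: float) -> float:
    """Express a server's capacity as a multiple of the baseline."""
    return spec_score / BASELINE_SPEC_SCORE

# Example: a server scoring 45.0 would rate at 4.5 compute units.
print(compute_units(45.0))  # 4.5
```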
To identify the performance characteristics of the servers in the data center, Microsoft IT used Microsoft Operations Manager 2005, and later Microsoft System Center Operations Manager 2007, to collect information about each server. Microsoft IT configured the systems to capture CPU utilization data at 15-minute intervals, every day, for 12 months, and it collected daily Microsoft Operations Manager aggregation data over 18 months.
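To give you a feel for what that kind of aggregation involves, here's a rough Python sketch that rolls 15-minute CPU samples up into daily averages and peaks. The input layout is assumed for illustration; the article doesn't describe Operations Manager's actual data format.

```python
# Rough sketch of rolling 15-minute CPU samples into daily statistics.
# The (timestamp, percent-utilization) input format is assumed;
# Operations Manager's actual storage schema isn't described here.
from collections import defaultdict
from datetime import datetime
from statistics import mean

samples = [
    ("2007-06-01T00:00", 3.2), ("2007-06-01T00:15", 4.1),
    ("2007-06-02T09:30", 41.7), ("2007-06-02T09:45", 38.2),
]

daily = defaultdict(list)
for timestamp, cpu_pct in samples:
    day = datetime.fromisoformat(timestamp).date()
    daily[day].append(cpu_pct)

for day, readings in sorted(daily.items()):
    print(day, f"avg={mean(readings):.1f}%", f"peak={max(readings):.1f}%")
```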
With statistics in hand, the IT department created a table with terms even the technically challenged could understand. Using four "temperature" categories (Permafrost, Cold, Warm, and Hot), the table mapped each server's utilization onto a temperature scale, with Permafrost and Cold servers considered excellent candidates for virtualization. The team also developed a "RightSizing scorecard" showing performance data about server instances for each business group and provided reports showing the cost savings of virtualizing servers within each group.
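The article doesn't publish the utilization cutoffs behind those four categories, so the thresholds in this little Python sketch are invented for illustration, but it shows how simple the classification logic can be:

```python
# Illustrative only: the article doesn't publish Microsoft's actual
# utilization cutoffs, so these thresholds are invented for the sketch.
def temperature(avg_cpu_pct: float) -> str:
    if avg_cpu_pct < 5:
        return "Permafrost"   # near-idle: prime virtualization candidate
    elif avg_cpu_pct < 20:
        return "Cold"         # lightly used: strong candidate
    elif avg_cpu_pct < 60:
        return "Warm"         # moderate load: evaluate case by case
    else:
        return "Hot"          # busy: likely stays physical

for server, util in [("web01", 2.4), ("sql05", 72.9)]:
    print(server, temperature(util))
```

A real classifier would presumably weigh more than average CPU (memory, disk, and peak load, say), but the temperature analogy works because the output is something any business group can read at a glance.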
Here's what happened: "The Compute Utility strategy and RightSizing initiative greatly increased the visibility and deployment of virtualization at Microsoft. However, despite the efforts of the RightSizing team, the number of physical servers deployed in the Microsoft data centers continued to grow at a rate close to 2,000 servers per year during 2006 and 2007. Many business groups virtualized some servers but still showed a preference to purchase physical servers rather than use less expensive virtual servers.
"By May 2007, the issue became acute when the data centers in Redmond, Washington, essentially reached their capacity for space and power. Deploying a single server in the data center could have taken months, because the RightSizing team would have had to identify which server needed to be removed so that it could deploy the new server."
So the gloves came off. Upper management created a group with the authority not only to manage server purchases across business groups but also to enforce virtualization in all cases. Success at last.
Overall lessons learned sort of fall into the "duh" category, but are useful to know:
1. It's important to gather accurate information about the server environment to find the best candidates for virtualization.
2. Begin implementing virtualization while you still have data-center space and power availability, not after you run out of space.
3. To get buy-in across all business groups, develop consistent guidelines for capacity planning and virtualization and communicate the necessity of virtualization in ways that people can understand.
4. Based on reading between the lines of the report, I'm adding a fourth lesson: It appears some organizations might also need to add a little virtual muscle to their virtualization processes. That is, create a team with the power to enforce virtualization, and don't let any business group weasel out unless it can prove a true need for physical servers.