Microsoft Clustering and VMware High Availability
Discover the differences between these two technologies and learn which is the better solution for your situation.
January 7, 2008
Microsoft Clustering and VMware High Availability technologies both have their specific market segments, and both provide high availability. Both solutions typically require a SAN, either iSCSI or Fibre Channel, along with a heartbeat connection between the host nodes that resides on a network segregated from end-user traffic.
Microsoft Server Clusters are typically single-application clusters; they're often referred to as the “Exchange Cluster” or the “SQL Server Cluster.” Microsoft recommends an Active/Passive configuration, in which one or more active nodes are paired with one or more passive nodes waiting to take over for a failed active node. A “heartbeat” connection is set up between all of the nodes (active and passive) to monitor their status. If a passive node detects that an active node is down and the passive node is configured as the failover node, it attempts to take the active node's place. The failover process usually completes within five minutes, depending on the node configuration and the services that must be started on the node.

On a Microsoft Cluster, the nodes are “potential” Logical Unit Number (LUN) owners on the SAN, but only one node can access a LUN at a time. This means that an active node must “release” the LUN before a passive node can access it. Microsoft Server Clusters are a good solution when the load on the nodes is very high and each node requires a lot of memory and CPU performance. To run a Microsoft Cluster you typically have to purchase the Enterprise edition of Windows Server 2003 and the Enterprise edition of the application you plan to cluster. However, you can run a two-node cluster with the Standard edition of SQL Server 2005.
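To make the failover sequence concrete, here's a minimal Python sketch of the heartbeat-and-takeover pattern described above. It illustrates the general idea only, not Microsoft's actual implementation; the node names, probe interval, and miss threshold are all invented for illustration.

import time

HEARTBEAT_INTERVAL = 1.0  # seconds between probes (hypothetical value)
MISSED_LIMIT = 5          # consecutive misses before the active node is declared down

def heartbeat_ok(node):
    # Stand-in for the real heartbeat probe over the private network.
    return False  # simulate a failed active node

def acquire_lun(lun):
    # Only one node may own the LUN; the failed node must release it first.
    print("acquired ownership of", lun)

def start_services(node):
    print("starting clustered services on", node)

def monitor(active, passive, lun):
    missed = 0
    while True:
        if heartbeat_ok(active):
            missed = 0
        else:
            missed += 1
            if missed >= MISSED_LIMIT:
                acquire_lun(lun)         # take over the shared storage
                start_services(passive)  # bring the application online
                return
        time.sleep(HEARTBEAT_INTERVAL)

monitor("node-a", "node-b", "LUN-0")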
It's possible to run Virtual Server 2005 on a Microsoft Cluster, but there are some limitations. You can run only 32-bit guests, and assuming you store the virtual server guest files on the SAN, you must fail over all of the guests that reside on the same LUN at the same time. It isn't possible to fail over virtual servers that reside on the same LUN to different nodes, because only one node can access the LUN at a given time. If you require granular failover, you have to carve out numerous LUNs on the SAN, which often results in a lot of wasted space.
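This all-or-nothing behavior is easy to see in a sketch: because a LUN has a single owner, every guest whose files live on that LUN must move together. The guest names and LUN layout below are hypothetical, and this illustrates the constraint rather than Virtual Server's actual interface.

# Hypothetical guest-to-LUN placement; in practice this is your SAN layout.
guests = {
    "exch-01": "LUN-1",
    "sql-01": "LUN-1",
    "web-01": "LUN-2",
}

def failover_lun(lun, target_node):
    # Every guest on the LUN fails over together, to the same node.
    moved = [g for g, l in guests.items() if l == lun]
    print(target_node, "takes ownership of", lun, "and restarts", moved)
    return moved

# exch-01 and sql-01 cannot be split between nodes; they share LUN-1.
failover_lun("LUN-1", "node-b")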
VMware's Standard and Enterprise Infrastructure packages both include the High Availability module; it usually makes more sense to purchase an Infrastructure package than to buy the individual modules separately. In addition to the Infrastructure bundle, you'll need at least one copy of VirtualCenter Server, which manages the ESX cluster nodes, to use the High Availability feature. Unlike in a Microsoft Server Cluster, all nodes in an ESX cluster have simultaneous access to LUNs on the SAN, which gives you a lot more flexibility. Whereas Microsoft Clusters typically serve a single application, ESX Server Clusters usually host different applications on the same cluster, and you can fail over guests to different hosts even if they reside on the same LUN. As long as the remaining nodes have enough capacity to absorb the guests of a failed node, you don't need to design the cluster with an additional passive node.
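Whether you can safely skip a dedicated passive node comes down to capacity arithmetic: the surviving hosts must be able to absorb the failed host's guests. Here's a minimal sketch with made-up host names, capacities, and loads (in GB of memory).

# Hypothetical per-host memory capacity and current guest load, in GB.
capacity = {"esx-1": 32, "esx-2": 32, "esx-3": 32}
load = {"esx-1": 20, "esx-2": 18, "esx-3": 22}

def survives_failure(failed):
    # Can the surviving hosts absorb the failed host's guests?
    spare = sum(capacity[h] - load[h] for h in capacity if h != failed)
    return spare >= load[failed]

for h in capacity:
    print(h, "failure tolerated:", survives_failure(h))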
If you purchase the Enterprise version of the Infrastructure package, it includes VMotion and the Distributed Resource Scheduler (DRS). Using VirtualCenter Server, you can move virtual server guests to different hosts in real time. This lets you take down an active node in the middle of the day by using VMotion to move all of the virtual server guests on that node to other nodes; you can then add memory, replace a power supply, update the BIOS, or perform other maintenance tasks on the host node without disrupting users. DRS lets you dynamically allocate host node resources and make sure you're using existing hardware effectively. You can create a pool of virtual guest servers and configure them to run on a pool of host nodes; DRS automatically load balances the guests across the allocated resource pool so that no ESX host node gets overloaded while other hosts sit underutilized.
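DRS's placement logic is proprietary, but the basic idea of balancing a resource pool can be sketched as a greedy loop: keep moving a guest from the most-loaded host to the least-loaded host until the spread falls within tolerance. The hosts, guest loads, and threshold below are invented for illustration.

# Hypothetical guest memory loads (GB) per ESX host.
placement = {
    "esx-1": {"mail": 12, "sql": 10, "web": 4},
    "esx-2": {"file": 6},
    "esx-3": {"dns": 2},
}
TOLERANCE = 4  # acceptable load spread before a migration is triggered

def host_load(host):
    return sum(placement[host].values())

def rebalance():
    while True:
        busiest = max(placement, key=host_load)
        idlest = min(placement, key=host_load)
        gap = host_load(busiest) - host_load(idlest)
        if gap <= TOLERANCE:
            return
        guest = min(placement[busiest], key=placement[busiest].get)
        if placement[busiest][guest] >= gap:
            return  # the smallest candidate move would not improve the balance
        placement[idlest][guest] = placement[busiest].pop(guest)
        print("VMotion", guest, "from", busiest, "to", idlest)

rebalance()
print({h: host_load(h) for h in placement})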
VMware recently increased the memory limit for a guest server from 16GB to 64GB. Under the old limit, a significant number of servers, typically big Exchange and SQL Server clusters, couldn't be virtualized because they required more than 16GB of memory. Even with the increased guest memory limit, there are probably still a few servers that require more than 64GB, but even today 64GB is a lot of memory for a server.
Tip: Firmware Update for Palm 700w and 700wx
If you have a Palm 700w or 700wx, be aware that Verizon has released a new version of firmware for the phone. Dated 9/18/2007, version 1.22 lets you use your phone as a wireless modem for dial-up networking if your laptop supports Bluetooth. The update can take about an hour, so plan accordingly. In my experience, my phone is faster and more stable with this latest update. You can download the update from http://www.palm.com/us/support/downloads/treo700wupdater/verizon.html.