Q. How do Cluster Shared Volumes work in Windows Server 2008 R2?
November 18, 2008
NTFS is not a clustered file system: it's designed to be accessed by a single server and does not support concurrent access by multiple servers. This is not usually a problem. In a failover cluster, only one node is active for a given service or application, and the LUN containing the pertinent data is owned by that active node. If the service or application has to move to another node, the physical disk resource is dismounted and then remounted on the new active node, which means a few seconds of downtime while the LUN fails over between nodes.

Those few seconds of downtime are not acceptable with virtualization; you want zero downtime when moving a virtual machine (VM) between nodes, which means you need a way for multiple nodes to concurrently access the VHD files on a LUN. Server 2008 R2 introduces Cluster Shared Volumes (CSVs): shared disks from the cluster's Available Storage (such as LUNs on a SAN) that have been allocated to Cluster Shared Volumes and are visible to all cluster nodes. Each CSV appears as a subfolder of a common %SystemDrive%\ClusterStorage folder, named VolumeN (e.g., C:\ClusterStorage\Volume1 and C:\ClusterStorage\Volume2), and each VM is placed in its own subfolder of a VolumeN folder.

The great feature of CSV is that all nodes in the cluster can access the content of a CSV concurrently, so there is no delay if a different node needs to start accessing a VHD. Because each VM sits in its own subfolder of a volume, you no longer need one LUN per VM to get granular failover capabilities. This figure shows the ClusterStorage folder of a cluster with two volumes that are part of CSV.

For each CSV, one node in the cluster acts as the coordinator node for that volume, as this figure shows. The coordinator is the only node that can write NTFS metadata to the volume, so any other node that needs to update metadata on a CSV sends the update to that volume's coordinator node.
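The single-namespace layout above can be illustrated with a small sketch. This is not Windows code, just a hedged example of the path convention every node sees; the volume number and VM name used here are hypothetical:

```python
from pathlib import PureWindowsPath

# Every cluster node sees the same CSV namespace rooted at C:\ClusterStorage.
CSV_ROOT = PureWindowsPath(r"C:\ClusterStorage")

def vm_vhd_path(volume_number: int, vm_name: str) -> PureWindowsPath:
    """Build the path to a VM's VHD inside its own subfolder of a VolumeN folder."""
    return CSV_ROOT / f"Volume{volume_number}" / vm_name / f"{vm_name}.vhd"

print(vm_vhd_path(1, "WebServer01"))
# C:\ClusterStorage\Volume1\WebServer01\WebServer01.vhd
```

Because the path is identical on every node, a VM can start on any node with no LUN remount and therefore no failover delay.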
All nodes in the cluster can write normal block-level data to the volume directly, and that direct access accounts for the majority of write activity. CSV is implemented via the CSVFilter.sys file system mini-filter driver, which intercepts the NTFS metadata requests and, in the event a node loses communication with the target volume, all I/O requests: a node that can no longer reach the storage directly can ask the coordinator to perform all its I/O on its behalf. CSV is currently supported only for Hyper-V.

As an additional bonus, CSV removes the previous requirement that each VHD sit on its own LUN so the LUN could be moved independently between nodes; because LUN access was the smallest unit of failover, that was the only way for VMs to fail over independently of each other. With CSVs, a single LUN can hold multiple VHDs that are accessed by different nodes in the cluster at the same time, which reduces complexity and saves space by pooling free storage in one LUN instead of stranding it across potentially hundreds of small LUNs.
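The three I/O paths described above (direct block-level access, metadata routed to the coordinator, and redirected I/O after a lost storage path) can be summarized in a conceptual sketch. This is purely illustrative logic, not the actual driver behavior; the function and parameter names are invented for the example:

```python
def route_io(kind: str, node: str, coordinator: str, has_storage_path: bool) -> str:
    """Return which node performs an I/O request under the CSV model (conceptual only)."""
    if kind == "metadata":
        return coordinator   # only the coordinator node writes NTFS metadata
    if not has_storage_path:
        return coordinator   # redirected I/O: all requests forwarded to the coordinator
    return node              # normal case: direct block-level access to the LUN

# NodeB writes VM data directly, but its metadata updates go through NodeA.
print(route_io("data", "NodeB", "NodeA", True))       # NodeB
print(route_io("metadata", "NodeB", "NodeA", True))   # NodeA
```

The design choice this models is why most traffic stays fast: only the comparatively rare metadata operations, or a degraded node, involve the coordinator.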