What is Storage Spaces Direct?
Learn about Storage Spaces Direct
May 19, 2015
Q. What is Storage Spaces Direct?
A. Storage Spaces Direct is a new feature in Windows Server 2016. It's an extension of the existing software-defined storage stack for Windows Server, and it leverages the Storage Spaces technology across nodes in a cluster in one of two ways: by using the disks internal to the cluster nodes, or by using disks in enclosures directly attached to individual nodes.
Storage Spaces then aggregates the disks local to the cluster nodes, enabling a Storage Pool to be created from those disks. Virtual disks can then be created in the clustered Storage Pool and used as Cluster Shared Volumes (CSVs), which will initially house Hyper-V VMs running on ReFS-formatted volumes.
For example, the storage can be internal or JBOD attached via SAS. Note that both HDD and SSD storage can be used; the data will be tiered so that the most-used blocks sit on SSD, giving very high performance, while other data is stored on HDD, giving great capacity.
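To see what a node's disks look like before pooling, and how they split between SSD and HDD, a quick check with the standard Storage cmdlets works (nothing here is specific to Storage Spaces Direct):

#List physical disks by media type to see the SSD/HDD mix available for tiering
Get-PhysicalDisk | Sort-Object MediaType |
    Format-Table FriendlyName, MediaType, CanPool, Size -AutoSize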
The virtual disks created in the clustered Storage Pool are Cluster Shared Volumes, which can be used locally by the cluster or made available to other servers via a Scale-Out File Server (SoFS); a sketch of that option follows below. Data is stored in 1 GB extents, and those extents are distributed across the various nodes with resiliency based on the settings configured.
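As a sketch of the SoFS option, the following adds the Scale-Out File Server role to the cluster and shares a CSV path over SMB 3. The role name, share name, path, and the account granted access are all hypothetical placeholders, not values from this article:

#Sketch: add a Scale-Out File Server role and share a CSV over SMB 3
#The role name, path, and account below are placeholders
Add-ClusterScaleOutFileServerRole -Name SOFS
New-SmbShare -Name VMStore -Path C:\ClusterStorage\Volume1 -FullAccess "savilltech\Hyper-V Hosts"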
Returning to resiliency: if mirroring is used with a disk resiliency of 2 -- which means there are 3 copies of the data -- then for each 1 GB extent there are two additional copies of the data stored across two other nodes in the cluster. Data is replicated using Storage Spaces technology over the Software Storage Bus, which leverages SMB 3 and enables every node in the cluster to see the storage local to each node. The Storage Pool and virtual disk mechanics sit on top of the Software Storage Bus. Note that disks attached directly to a node rather than via a physical enclosure are placed into a "software" enclosure, enabling all disks to be managed in a consistent manner.
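A quick way to confirm how many copies a particular space keeps is to inspect the virtual disk; here testspace is the friendly name used in the script later in this article:

#Show the resiliency settings of a virtual disk, including the number of data copies
Get-VirtualDisk -FriendlyName testspace |
    Format-Table FriendlyName, ResiliencySettingName, NumberOfDataCopies, FaultDomainAwareness -AutoSize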
Creating a Storage Spaces Direct environment is actually quite simple. The only big difference from a regular cluster is enabling Storage Spaces Direct so the cluster uses the local storage:
(Get-Cluster).DASModeEnabled=1
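To confirm the cluster has picked up the setting, read the same property back:

#Verify Storage Spaces Direct (DAS mode) is enabled; should return 1
(Get-Cluster).DASModeEnabled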
Note: A cluster uses either the Storage Spaces Direct technology or actual shared storage, such as SAN storage or a shared enclosure. The two are mutually exclusive.
For more detail, see this Microsoft TechNet article, which includes step-by-step PowerShell to set up your first Storage Spaces Direct implementation.
Note: If you are configuring this in VMs, make sure all disks are connected to the SCSI controller. Also, in Windows Server 2016 you no longer create separate tiers for the HDD and SSD drives; Storage Spaces makes virtual hybrid drives out of all the storage and then automatically handles the tiering of data. This command in the TechNet article:
Get-StoragePool StorSpaceDirectPool | Get-PhysicalDisk |? MediaType -eq SSD | Set-PhysicalDisk -Usage Journal
simply dedicates the SSDs in the pool to the Storage Spaces log and journal data; it is not what causes data to be written to SSD first. Because of the new virtual hybrid drives, data is initially written to the SSDs anyway and is then moved to HDD as it is tiered.
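A quick way to confirm what the command did is to check the Usage property on the pool's physical disks; after running it, the SSDs should report Journal:

#Check disk usage in the pool; the SSDs should now show a Usage of Journal
Get-StoragePool StorSpaceDirectPool | Get-PhysicalDisk |
    Format-Table FriendlyName, MediaType, Usage -AutoSize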
The key point of this technology is that it enables locally accessible node storage to be used as shared storage by the cluster. Protection from disk or node failure is provided through the replication of the stored data. Below is an example of what you see after creating a Storage Spaces Direct instance. I created this on a four-node cluster, and each node had two local disks that were used in the pool.
Below is the PowerShell I used. Because I have 8 disks, I can use a column count of 2 for my virtual disks. Once this was done, I created a volume, formatted it with ReFS, and then made it a Cluster Shared Volume.
#Enable Storage Spaces Direct
(Get-Cluster).DASModeEnabled=1

#Check number of disks that can be made part of the pool
(Get-StorageSubSystem -Name savtstfcspcdir.savilltech.net | Get-PhysicalDisk).Count

#See if there are disks that cannot be pooled
Get-StorageSubSystem -Name savtstfcspcdir.savilltech.net | Get-PhysicalDisk |? CanPool -ne $true

#Create the pool
New-StoragePool -StorageSubSystemName savtstfcspcdir.savilltech.net -FriendlyName StorSpaceDirectPool -WriteCacheSizeDefault 0 `
-FaultDomainAwarenessDefault StorageScaleUnit -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror `
-PhysicalDisk (Get-StorageSubSystem -Name savtstfcspcdir.savilltech.net | Get-PhysicalDisk)

#Initially data goes to SSD then tiering sends to HDD. Gives a little amount of space on SSD for journal and log
#The Storage Spaces Direct layer will do inline tiering anyway so don't have to tell it to split anymore
Get-StoragePool StorSpaceDirectPool | Get-PhysicalDisk |? MediaType -eq SSD | Set-PhysicalDisk -Usage Journal

#Create virtual disks
New-VirtualDisk -StoragePoolFriendlyName stor* -Size 80GB -NumberOfDataCopies 3 -NumberOfColumns 2 -FriendlyName testspace `
-ResiliencySettingName Mirror -ProvisioningType Fixed -FaultDomainAwareness StorageScaleUnit -WriteCacheSize 0

Get-VirtualDisk -FriendlyName testspace | fl *
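The volume, ReFS format, and CSV steps I mentioned aren't part of the script above. As a rough sketch of how they might look in PowerShell -- the cluster disk resource name in particular is an assumption based on the default "Cluster Virtual Disk (<friendly name>)" naming, so check the actual name with Get-ClusterResource first:

#Rough sketch: initialize and format the new virtual disk with ReFS, then make it a CSV
#Run on the node that owns the disk; the resource name below is assumed, not guaranteed
Get-VirtualDisk -FriendlyName testspace | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel testspace
Add-ClusterSharedVolume -Name "Cluster Virtual Disk (testspace)"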