An Introduction to NVMe over Fabrics

Here's why NVMe over Fabrics, or NVMe-oF as it is sometimes called, has gained such rapid acceptance in the data center.

Brien Posey

August 17, 2018


A storage technology that is taking hold in the data center is NVMe over Fabrics, or NVMe-oF as it is sometimes called. The reason for its rapid acceptance: NVMe-oF's ability to provide low-latency, high-performance storage over a network.

So, before I jump right in and talk about NVMe over Fabrics, let me begin by discussing NVMe itself. Non-Volatile Memory Express was introduced as a way of overcoming one of the biggest problems with flash storage--namely, throughput. As you no doubt know, SSDs are far faster than their HDD counterparts because they do not contain any mechanical parts. Hence, SSD operations are never put on hold while a motor spins up or a drive head moves to the correct location. The problem with SSDs, however, is that their performance often exceeds the speed of the data bus, which was, of course, originally developed for use with HDDs. This means that an SSD that is plugged into a legacy interface such as SATA is limited by that interface's speed--SATA III, for instance, tops out at 6 Gbps, or roughly 600 MB/s of usable bandwidth.

The NVMe specification was created as a way of allowing SSDs to perform to their full potential by eliminating the dependency on legacy storage controllers. Instead of using an interface such as SATA, NVMe disks communicate with the CPU through a PCIe connection. In recent years, NVMe has gained popularity in high-end laptops because the drives--typically built in the compact M.2 form factor--combine small size and light weight with blazing performance.
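
To put the bottleneck in perspective, consider the theoretical ceilings of the two interfaces. The quick calculation below uses the published line rates for SATA III and PCIe 3.0; real-world throughput will land somewhat below these maximums:

```python
# Theoretical bandwidth ceilings: SATA III vs. the PCIe 3.0 x4 link that a
# typical NVMe SSD uses. Figures are published line rates, not benchmarks.

SATA3_LINE_RATE_GBPS = 6.0       # SATA III signals at 6 Gbps
SATA3_ENCODING = 8 / 10          # 8b/10b encoding: 80% of the line rate is data

PCIE3_RATE_GTPS_PER_LANE = 8.0   # PCIe 3.0 signals at 8 GT/s per lane
PCIE3_ENCODING = 128 / 130       # 128b/130b encoding: ~98.5% is data
NVME_LANES = 4                   # most NVMe SSDs use a x4 link

sata_mb_s = SATA3_LINE_RATE_GBPS * SATA3_ENCODING * 1000 / 8
pcie_mb_s = PCIE3_RATE_GTPS_PER_LANE * PCIE3_ENCODING * NVME_LANES * 1000 / 8

print(f"SATA III ceiling:    {sata_mb_s:,.0f} MB/s")    # ~600 MB/s
print(f"PCIe 3.0 x4 ceiling: {pcie_mb_s:,.0f} MB/s")    # ~3,940 MB/s
print(f"PCIe advantage:      {pcie_mb_s / sata_mb_s:.1f}x")
```

In other words, a fast SSD behind SATA leaves better than 80 percent of its potential bandwidth on the table, which is exactly the gap NVMe was designed to close.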

NVMe over Fabrics is an architecture that allows NVMe storage to be accessed remotely. In fact, NVMe-oF has many similarities to iSCSI, but it is much faster. Of course, this raises the question of what it is that makes NVMe-oF the faster of the two. Sure, the underlying NVMe storage is fast, but the storage alone does not account for the difference in speed. After all, there is no reason why NVMe storage could not be configured to act as an iSCSI target. Hence, the performance difference has to be in the NVMe-oF architecture.

Like iSCSI, NVMe over Fabrics is a protocol that enables communication between a host and storage over a network. The reason the NVMe-oF protocol is so much faster than the iSCSI protocol is that it is built on RDMA, and it supports RDMA transports such as InfiniBand, RoCE and iWARP.

RDMA is an acronym for Remote Direct Memory Access. As its name implies, RDMA provides a direct access method that allows one computer’s memory to be accessed from another. A simple way of thinking about this is that RDMA essentially takes the operating system out of the equation, by offloading copy operations to the network hardware. The end result is data transfers that perform better than would otherwise be possible because data can be transferred directly to or from application memory--without having to involve the normal, OS-level network stack.
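
For readers who like to see the moving parts, here is a deliberately simplified sketch of the RDMA model. This is not a working RDMA program--real implementations go through a verbs library such as libibverbs--and every name in it is invented for illustration. The point is simply to show what the OS sets up once (a registered memory region) and what the network hardware then does on its own:

```python
from dataclasses import dataclass

# Illustrative model only -- real RDMA programming uses a verbs library
# (e.g., libibverbs); these class and field names are invented for clarity.

@dataclass
class MemoryRegion:
    """A buffer the OS has registered (pinned) with the RDMA-capable NIC."""
    virtual_address: int   # where the buffer lives in application memory
    length: int            # size of the registered region, in bytes
    remote_key: int        # token a peer must present to access the region

def rdma_write(local_buffer: bytes, remote_region: MemoryRegion) -> None:
    """Conceptual one-sided RDMA write.

    The application posts a request like this to its NIC, and the NICs on
    both sides move the bytes directly between application buffers. Neither
    host's kernel network stack copies the data, and the remote CPU is not
    involved in the transfer -- which is where RDMA's speed comes from.
    """
    assert len(local_buffer) <= remote_region.length
    # In real life: build a work request naming the remote address and
    # remote_key, post it to a queue pair, then poll a completion queue.

# Hypothetical usage: write into a region the peer registered earlier.
region = MemoryRegion(virtual_address=0x7F3A_0000, length=4096, remote_key=0xBEEF)
rdma_write(b"hello, remote memory", region)
```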

Of course, the operating system cannot be bypassed entirely. The OS still needs to know what is going on so that it can coordinate data transfer operations. The key to making this work is that the OS must be RDMA-aware. At this point, however, most major operating systems have built-in support for RDMA. Microsoft, for example, has supported RDMA since the days of Windows Server 2012, through its SMB Direct feature. Similarly, VMware added RDMA support to its products in 2015, and RDMA is widely supported in various Linux distributions.
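
On Linux, one quick way to find out whether a system has an RDMA-capable adapter is to look under /sys/class/infiniband, which is where the kernel's RDMA subsystem exposes registered devices (the directory simply will not exist on a machine without RDMA hardware or drivers). A minimal check might look like this:

```python
import os

# RDMA-capable adapters registered with the Linux kernel show up here.
RDMA_SYSFS_DIR = "/sys/class/infiniband"

def list_rdma_devices() -> list:
    """Return the names of RDMA devices the kernel knows about."""
    try:
        return sorted(os.listdir(RDMA_SYSFS_DIR))
    except FileNotFoundError:
        return []  # no RDMA hardware or drivers on this machine

devices = list_rdma_devices()
if devices:
    print("RDMA devices found:", ", ".join(devices))
else:
    print("No RDMA devices detected.")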

So, how does RDMA benefit NVMe over Fabrics? Well, as previously mentioned, NVMe-oF has many similarities to iSCSI. The iSCSI protocol sends native SCSI commands over the network fabric. NVMe-oF does essentially the same thing: It sends native NVMe commands over the RDMA-based network fabric. Because RDMA eliminates almost all of the processing that occurs with normal network communications, communication can occur at a speed near that of native, local NVMe. While it is true that the network fabric incurs a latency penalty, the added latency is minimal. In some instances, the fabric adds only about 10 microseconds of latency over what would be expected if the NVMe storage resided locally.
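
To put that 10-microsecond figure in perspective, here is a quick back-of-the-envelope calculation. The local latency used below is an illustrative assumption--a flash read on a typical NVMe SSD is on the order of 100 microseconds, though actual figures vary widely by drive and workload:

```python
# How much does the fabric hop add, relative to the drive's own latency?
# The local figure is an illustrative assumption; real latencies vary.

LOCAL_NVME_READ_US = 100   # assumed local NVMe flash read latency (µs)
FABRIC_OVERHEAD_US = 10    # added latency cited for the NVMe-oF fabric (µs)

remote_latency = LOCAL_NVME_READ_US + FABRIC_OVERHEAD_US
overhead_pct = FABRIC_OVERHEAD_US / LOCAL_NVME_READ_US * 100

print(f"Local read:  {LOCAL_NVME_READ_US} µs")
print(f"Remote read: {remote_latency} µs ({overhead_pct:.0f}% overhead)")
```

Under those assumptions, moving the storage across the fabric costs roughly 10 percent in latency--a far smaller penalty than a software-heavy protocol stack would impose.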

As promising as NVMe over Fabrics may be, the technology is still maturing. The biggest thing to watch out for when deploying NVMe-oF is that each storage vendor has a slightly different way of doing things, so one vendor's NVMe-oF solution might not necessarily be compatible with those offered by another vendor.

About the Author

Brien Posey

Brien Posey is a bestselling technology author, a speaker, and a 20X Microsoft MVP. In addition to his ongoing work in IT, Posey has spent the last several years training as a commercial astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space.

https://brienposey.com/
