5 Questions to Ask Before You Enable Cache on a Storage Array

There are many benefits to deploying a flash cache, but here's what to consider before you enable cache on your storage array.

Brien Posey

August 6, 2021


Most NAS appliances and external storage arrays come with the option of using flash storage as a cache. The idea behind this is that the flash storage acts as a high-performance tier, keeping recent or frequently accessed data on high-speed storage. This allows the storage device to deliver flash-like performance, but at a fraction of the cost of all-flash storage. However, despite the many benefits associated with deploying a flash cache, there are some things that you need to consider before you enable cache on your storage array.

1. Where Will the Flash Storage Reside?

One of the first things to take into account when you are deciding whether to enable cache is where the cache storage will physically reside. In some cases, you may find that the flash storage has to be installed in a drive bay, thus reducing the number of drive bays that are available for data storage. If that’s the case, then it’s entirely possible that capacity will be more important than any performance gains that might be achieved by adding cache storage.

Of course, some arrays do not require sacrificing a drive bay to add a cache. Some storage appliances are equipped with dedicated NVMe slots that are reserved exclusively for caching. Similarly, there are storage appliances that are designed to use cache storage integrated into PCIe cards.

2. What Types of Caching Does the Storage Array Support?

Another thing to consider is the types of caching that the storage device supports. Lower-end storage appliances, for example, might allow only for read caching. Higher-end storage devices, however, typically support both read and write caching.

Most storage appliances that support both read and write caching require separate drives for each. After all, if a single flash drive were to be used for both read and write caching, then that drive could easily become a bottleneck. Using one drive for read caching and another for write caching keeps the two caches separate from one another, improving the overall throughput.
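The bottleneck argument can be illustrated with a back-of-the-envelope model: a single shared cache drive has to split its IOPS budget between read and write traffic, while dedicated drives give each stream the full budget. The numbers below are purely illustrative, not vendor specifications.

```python
# Toy model: one shared cache drive splits its IOPS budget between
# read and write streams; dedicated drives avoid the contention.
# DRIVE_IOPS is an assumed, illustrative figure.

DRIVE_IOPS = 100_000

def shared_cache(read_demand: int, write_demand: int) -> tuple[int, int]:
    """One drive serves both streams; throttle both if demand exceeds budget."""
    total = read_demand + write_demand
    if total <= DRIVE_IOPS:
        return read_demand, write_demand
    scale = DRIVE_IOPS / total
    return int(read_demand * scale), int(write_demand * scale)

def dedicated_caches(read_demand: int, write_demand: int) -> tuple[int, int]:
    """Each stream gets its own drive, and therefore its own full budget."""
    return min(read_demand, DRIVE_IOPS), min(write_demand, DRIVE_IOPS)

shared_cache(80_000, 60_000)      # both streams throttled below demand
dedicated_caches(80_000, 60_000)  # (80000, 60000) -- no contention
```

With 140,000 IOPS of combined demand against a single 100,000 IOPS drive, both streams get throttled; two dedicated drives serve the same demand in full.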

3. Are There Any Additional Hardware Requirements?

Another important consideration when deciding whether to enable cache on a storage array is hardware requirements. This might initially sound like an odd consideration, but keep in mind that there is a certain amount of overhead associated with the caching process. In the case of a read cache, for instance, the storage appliance has to keep track of which storage blocks are accessed the most frequently so that those storage blocks can be moved to the cache. As the hot data begins to cool off, the appliance must be able to move that data from the cache to the appliance’s primary storage.
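To make the promotion/demotion overhead concrete, here is a deliberately simplified sketch of how an appliance might track hot blocks: count accesses per block, promote a block into the cache once it crosses a threshold, and demote the coldest cached block when the cache is full. Real appliances use far more sophisticated (and proprietary) policies; the class name, threshold, and return strings here are all invented for illustration.

```python
from collections import Counter

class HotBlockTracker:
    """Toy model of read-cache admission: per-block access counters,
    threshold-based promotion, demotion of the coldest block when full.
    Illustrative only -- not any vendor's actual caching algorithm."""

    def __init__(self, cache_slots: int, promote_after: int = 3):
        self.cache_slots = cache_slots
        self.promote_after = promote_after
        self.access_counts = Counter()  # this bookkeeping is the RAM overhead
        self.cached = set()

    def access(self, block: int) -> str:
        self.access_counts[block] += 1
        if block in self.cached:
            return "cache hit"
        if self.access_counts[block] >= self.promote_after:
            if len(self.cached) >= self.cache_slots:
                # demote the least-accessed block back to primary storage
                coldest = min(self.cached, key=lambda b: self.access_counts[b])
                self.cached.discard(coldest)
            self.cached.add(block)
            return "promoted to cache"
        return "served from primary storage"

tracker = HotBlockTracker(cache_slots=2)
tracker.access(7)   # "served from primary storage"
tracker.access(7)   # "served from primary storage"
tracker.access(7)   # "promoted to cache"
tracker.access(7)   # "cache hit"
```

Notice that the tracker must keep a counter for every block it has ever seen, cached or not. That per-block metadata is exactly the kind of state that consumes the appliance's CPU and memory.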

The backend operations associated with tracking storage block usage and moving blocks in and out of the cache require CPU cycles and memory. In fact, storage appliances generally limit the size of the cache based on the available memory. This means that you can’t simply insert, for example, a 4TB flash drive and expect to get a 4TB cache. Even if the storage appliance allows the drive to be used, there is a good chance that memory, not drive capacity, will be the factor that ultimately determines the cache size. Fortunately, many storage appliances are designed to accommodate additional memory, thereby allowing for a larger cache.
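A rough sizing sketch shows how memory caps the usable cache. The ratio of in-RAM metadata to cache capacity below (1 MB of RAM per 1 GB of cache) is a hypothetical figure chosen for illustration; check your vendor's documentation for the real requirement.

```python
# Sizing sketch: if the appliance keeps in-RAM metadata for every cached
# block, free memory -- not flash capacity -- caps the usable cache size.
# The metadata ratio is purely illustrative, not a vendor specification.

def usable_cache_gb(installed_flash_gb: float, free_ram_gb: float,
                    ram_kb_per_cache_gb: float = 1024.0) -> float:
    """Return the cache capacity the appliance can actually use, in GB."""
    ram_limited_gb = (free_ram_gb * 1024 * 1024) / ram_kb_per_cache_gb
    return min(installed_flash_gb, ram_limited_gb)

# A 4 TB (4096 GB) flash drive, but only 2 GB of RAM spare for metadata:
usable_cache_gb(4096, 2)   # -> 2048.0 -- half the drive goes unused
```

Under these assumed numbers, half the 4TB drive sits idle until more memory is added.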

4. What about Redundancy?

One of the easier-to-overlook considerations when you enable cache is whether any redundancy is available for the cache. This matters because drives can fail regardless of whether they are used for data storage or for caching.

If the disk used by a read cache were to fail, it probably wouldn’t be a huge problem because the cache contains copies of storage blocks that exist elsewhere within the available storage. Redundancy is far more important, however, when it comes to the write cache. Remember, the write cache acts as a buffer, temporarily storing data until it can be written to the appliance’s primary storage. This means that a write cache failure could result in data loss unless the cache uses redundant storage to protect the cached data.
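The data-loss risk can be modeled in a few lines: a write is acknowledged once it lands in the cache, so if the cache device dies before the data is flushed, acknowledged data is gone. Mirroring the buffer across two devices is one common protection. This is a toy model, not any appliance's real design.

```python
# Toy model of write-cache redundancy: acknowledged-but-unflushed data
# survives a device failure only if the buffer is mirrored.

class WriteCache:
    def __init__(self, mirrored: bool):
        # one buffer, or two identical buffers on separate devices
        self.devices = [[], []] if mirrored else [[]]

    def write(self, data: str) -> None:
        # acknowledge only after the data is on every cache device
        for device in self.devices:
            device.append(data)

    def fail_device(self, index: int) -> None:
        self.devices[index] = None  # simulate a drive failure

    def recoverable(self):
        surviving = [d for d in self.devices if d is not None]
        return surviving[0] if surviving else None

unmirrored = WriteCache(mirrored=False)
unmirrored.write("block A")
unmirrored.fail_device(0)
unmirrored.recoverable()   # None -- the acknowledged write is lost

mirrored = WriteCache(mirrored=True)
mirrored.write("block A")
mirrored.fail_device(0)
mirrored.recoverable()     # ['block A'] -- the mirror still has it
```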

5. Will the Cache Benefit the Storage Performance?

Finally, it is worth considering whether adding a cache will actually improve storage performance. Caching tends to be most beneficial when users access a lot of small files (such as documents). However, caching tends to be far less effective if users access a lot of large files or if most of the read operations are sequential.

It’s also worth noting that caching may not make much of a difference if the storage appliance is servicing a light workload (one that requires fewer IOPS than what the appliance can handle) or if network access to the appliance is acting as a bottleneck.
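The workload effect is easy to demonstrate with a small simulation: replaying a trace of block accesses through a simple LRU cache shows a high hit rate for a small, repeatedly accessed hot set and a zero hit rate for a single sequential pass where no block is ever revisited. LRU is used here only as a stand-in policy; the trace sizes are arbitrary.

```python
from collections import OrderedDict
import random

def lru_hit_rate(accesses, cache_size: int) -> float:
    """Replay a block-access trace through a simple LRU cache."""
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)  # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(accesses)

random.seed(0)
# Repeated access to a small hot set (think: many small documents).
hot_set = [random.randrange(200) for _ in range(10_000)]
# One sequential pass over a large file: each block is read exactly once.
sequential = list(range(10_000))

lru_hit_rate(hot_set, cache_size=500)     # high -- the hot set fits in cache
lru_hit_rate(sequential, cache_size=500)  # 0.0 -- nothing is ever re-read
```

The sequential trace gets no benefit at all, which is why large-file and sequential workloads are poor candidates for a read cache.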


About the Author

Brien Posey

Brien Posey is a bestselling technology author, a speaker, and a 20X Microsoft MVP. In addition to his ongoing work in IT, Posey has spent the last several years training as a commercial astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space.

http://brienposey.com/
