Solid-State Disks Can Increase SAN and Application Performance

The need to increase the data-access speeds of Storage Area Networks (SANs) might mean that solid-state disks (SSDs) will finally emerge as a mainstream storage technology over the next several years. Learn how SSDs can help address those performance concerns.

Elliot King

April 21, 2002


The need to increase the data-access speeds of Storage Area Networks (SANs) might mean that solid-state disks (SSDs) will finally emerge as a mainstream storage technology over the next several years. At least, that's the view of some industry analysts and the hope of vendors such as Imperial Technology, which recently announced SANaccelerator, a data-acceleration device designed specifically to improve SAN application performance.

SANs have emerged as a preferred solution to connect large-capacity storage devices to collections of servers. The fast access to data that SANs provide has been one of the primary factors driving SAN growth. Fibre Channel, the current de facto standard protocol for the majority of SAN interconnects, supports line speeds of as much as 2Gbps, with low overhead and minimal latency. But in some high-transaction processing scenarios, that kind of speed still isn't good enough.
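For context, a 2Gbps Fibre Channel link delivers roughly 200MB/sec of usable bandwidth per direction once the protocol's 8b/10b encoding overhead is accounted for. The quick calculation below is an illustrative sketch with rounded numbers, not a vendor specification.

```python
# Illustrative: convert a 2Gbps Fibre Channel line rate into approximate
# payload bandwidth. FC links of this era use 8b/10b encoding, so only
# 8 of every 10 bits on the wire carry data.
line_rate_gbps = 2.125          # 2Gb Fibre Channel signaling rate (approximate)
encoding_efficiency = 8 / 10    # 8b/10b encoding
payload_bits_per_sec = line_rate_gbps * 1e9 * encoding_efficiency
payload_mb_per_sec = payload_bits_per_sec / 8 / 1e6
print(f"~{payload_mb_per_sec:.0f} MB/sec usable bandwidth per direction")
# => ~213 MB/sec, commonly quoted as roughly 200MB/sec
```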

When they face performance concerns, companies often try to resolve the problems by brute force, adding servers and disk drives to spread the workload. Michael Fisch, an analyst with The Clipper Group, argues that the bottleneck in a SAN often isn't the network but the disk drives themselves. (See the URL below for the research bulletin "Imperial Technology's Solid State Disk—Satisfying the Need for Speed in a San," published in early April.) For a server to access stored data, the disk platter must spin and a magnetic head must move to the correct position to complete the read or write. A single rotating disk drive's limit of roughly 150 I/O operations per second (IOPS) can pose a real problem.
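To see where a figure on the order of 150 IOPS comes from, consider the mechanical delays a random read incurs. The back-of-the-envelope calculation below uses assumed seek and rotation numbers for a 10,000rpm drive; exact figures vary by model.

```python
# Rough IOPS ceiling for a single rotating disk under random I/O.
# The drive parameters below are illustrative, not from a specific product.
rpm = 10_000
avg_seek_ms = 4.9                                   # assumed average seek time
avg_rotational_latency_ms = 0.5 * (60_000 / rpm)    # half a revolution, in ms
service_time_ms = avg_seek_ms + avg_rotational_latency_ms
iops = 1000 / service_time_ms
print(f"~{iops:.0f} random I/O operations per second")
# => ~127 IOPS, in the same ballpark as the roughly 150 IOPS cited above
```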

Using SSD technology might be the solution to the need for speed. You can define a pure SSD as DRAM backed up by a rotating hard disk drive with integrated battery backup; the primary storage medium is a solid-state semiconductor. Because this approach eliminates the need to synchronize a read/write head with a rotating disk while still permitting random data access, SSDs can offer significantly faster access times. At the same time, SSDs are much more resistant to physical shock, vibrations, and temperature changes than conventional disk drives. SSDs range in capacity from 134MB to 51GB.
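Conceptually, a DRAM-based SSD of this era pairs a volatile primary store with a battery and an internal drive that exists only to preserve data across power loss. The sketch below is a simplified, hypothetical model of that arrangement, not any vendor's firmware.

```python
# Conceptual model of a DRAM-based SSD: reads and writes are served from
# memory; the battery-backed controller destages to an internal disk only
# when external power fails. Purely illustrative.
class DramSsd:
    def __init__(self):
        self.dram = {}            # primary storage medium (volatile)
        self.backup_disk = {}     # rotating disk used only for backup

    def write(self, block, data):
        self.dram[block] = data   # no seek, no rotational delay

    def read(self, block):
        return self.dram[block]   # served at memory speed

    def on_power_failure(self):
        # The battery keeps DRAM and the controller alive long enough
        # to copy the contents to the internal hard disk.
        self.backup_disk.update(self.dram)

    def on_power_restore(self):
        self.dram.update(self.backup_disk)
```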

SSD technology isn't new. Over the past 20 years, SSDs have carved out a distinct, slow-growing niche. You typically find SSDs in large-scale enterprises or utility-grade storage environments and as components of vertical solutions. The primary barriers to more widespread SSD use have been cost, storage density (which is less than conventional disks in a comparable form factor), and the need for battery backup and a controller.

But a combination of improved technology and changing circumstances has reshaped the cost/benefit ratio for SSDs. Importantly, even as SSD technology improves, its cost is dropping rapidly. For example, Curtis, Inc., which claims to be the price-performance leader in the SSD area, contends that the current cost for its SSD technology has fallen to $3 per megabyte.
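Even at $3 per megabyte, the capacities mentioned earlier still represent a substantial outlay. The quick arithmetic below simply applies that quoted price to the stated capacity range.

```python
# Rough cost at the quoted $3/MB price point, applied to the capacity
# range cited above (134MB to 51GB).
price_per_mb = 3.00
for capacity_mb in (134, 51 * 1024):
    print(f"{capacity_mb} MB costs about ${capacity_mb * price_per_mb:,.0f}")
# 134 MB costs about $402
# 52224 MB costs about $156,672
```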

Internet and digital communications continue to fuel the need for higher-speed transaction-processing systems, which keeps widening the gap between disk I/O performance and CPU MIPS performance. Because SSDs have close to zero seek latency, systems can use SSDs to recapture CPU cycles lost during I/O operations. In I/O-intensive applications, SSD vendors claim that application performance can improve from 200 percent to 500 percent. Finally, the emergence of a virtualization layer in SANs will make it easier to incorporate SSDs.
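A simple way to sanity-check the 200 to 500 percent claim is an Amdahl's Law-style model: if an application spends most of its elapsed time waiting on disk I/O, and an SSD cuts that wait dramatically, the overall speedup follows directly. The model below is a sketch with assumed numbers, not a benchmark.

```python
# Amdahl's Law-style estimate of application speedup when SSDs replace
# rotating disks for the I/O-bound portion of the workload.
def speedup(io_fraction, io_speedup_factor):
    """io_fraction: share of elapsed time spent waiting on disk I/O (assumed).
    io_speedup_factor: how much faster the SSD services that I/O (assumed)."""
    return 1 / ((1 - io_fraction) + io_fraction / io_speedup_factor)

# An application that waits on disk 80% of the time, with SSD I/O ~100x faster:
print(f"{speedup(0.80, 100):.1f}x")   # => ~4.8x, roughly a 480% improvement
# A less I/O-bound application that waits on disk 50% of the time:
print(f"{speedup(0.50, 100):.1f}x")   # => ~2.0x, roughly a 200% improvement
```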

The changes in technology, cost, and need have led International Data Corporation (IDC) analysts Massaki Moriyama and Robert Gray to conclude that end users with transaction-intensive applications should consider the benefits and leverage of incorporating SSDs into their storage infrastructure. Moriyama and Gray argue that SSD technology can help unclog sluggish applications.

As Moriyama and Gray see it, companies can best use SSDs as a file-caching technology to improve application performance rather than disk performance. Users should install SSDs as disk volumes in the storage network and use them to store frequently accessed files. Moriyama and Gray contend that SSDs provide the most value when users direct a high percentage of the disk-I/O requests to a relatively small number of files.
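As a concrete illustration of that deployment model, the sketch below greedily places the most frequently accessed files onto a fixed-capacity SSD volume until it fills. The file names, access counts, and capacity are hypothetical.

```python
# Illustrative: choose which files to place on an SSD volume, favoring the
# files that receive the most I/O requests. All inputs are hypothetical.
def place_hot_files(files, ssd_capacity_mb):
    """files: list of (name, size_mb, accesses_per_day) tuples."""
    placed, used = [], 0
    for name, size_mb, _ in sorted(files, key=lambda f: f[2], reverse=True):
        if used + size_mb <= ssd_capacity_mb:
            placed.append(name)
            used += size_mb
    return placed

files = [
    ("orders.idx",   400, 90_000),   # hot database index
    ("sessions.db",  250, 60_000),   # frequently hit session store
    ("archive.dat", 8_000,    500),  # large but rarely touched
]
print(place_hot_files(files, ssd_capacity_mb=1_000))
# => ['orders.idx', 'sessions.db'] -- the small, hot files go to the SSD
```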

Deployed in this way, SSDs can improve the performance of a wide range of applications, including Internet-based applications (e.g., email, Web hosting), relational database and data-warehousing applications, online transaction processing (OLTP) and networked systems, video processing, high-speed data acquisition, and high-performance swap files in multitasking systems.

The greatest obstacle to adopting SSDs might be the lack of end-user awareness, but that should change soon. The investment community has latched onto SSD technology, and at least a half-dozen corporations currently use it.
