Over the course of numerous customer meetings, I started noticing a pattern in comments on how their storage environments fared at delivering the performance their applications and users need. (Note: I am referring to mainstream Enterprise applications, not high-performance computing applications.) Many customers chose Nimble Storage for how well we converge primary storage and data protection on the same array, but what really excited them was that their applications became visibly faster and they could support many more demanding workloads, despite our use of inline compression and low-cost, high-capacity SATA drives.

Virtualization causes I/O to become more random

Server virtualization has thrown storage performance into sharp relief. IT professionals who have implemented server virtualization initiatives understand well that networked storage is a significant part of the capital spend and the complexity. As they start measuring success by how many virtual servers can be consolidated on a single host, customers realize that the BIG storage bottleneck is not capacity but performance. As multiple applications are consolidated onto fewer physical servers, the I/O pattern that travels from server to storage becomes a blend of the individual applications' patterns, and it increasingly taxes the storage system on delivering IOPS rather than serving up GBs of data.

Lack of disk drive performance gains compounds this problem

Against this backdrop of increasing demand for random I/O performance from storage, disk drives have done a very poor job of keeping pace with advances in the rest of the infrastructure. The table below shows the pace of evolution for mainstream (not high-end) environments:

|                                       | 2001              | 2011                  | 10-year improvement  |
|---------------------------------------|-------------------|-----------------------|----------------------|
| Compute: CPU                          | 1.3 GHz x 2 cores | 3.8 GHz x 16-24 cores | ~20-30 times better  |
| Compute: Memory                       | 0.25-0.5 GB       | 24-48 GB              | ~50-100 times better |
| Network                               | 0.1-1 GbE         | 1-10 GbE              | ~10 times better     |
| Disk drive density                    | 36-137 GB         | 600-3,000 GB          | ~20 times better     |
| Disk drive access time (performance)  | ~6 ms             | ~3 ms                 | ONLY 2 TIMES BETTER! |

Therefore, when an application needs good performance, customers are unable to take advantage of low-cost, high-capacity SATA drives. The typical approach to delivering IOPS has been to deploy as many high-RPM drives as necessary to reach the needed performance, even when the capacity of those drives goes largely unused.
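
To put rough numbers on that (all of them illustrative assumptions, not figures from any particular deployment), here is a quick back-of-the-envelope sizing in Python:

```python
import math

# Assumed per-drive figures for a 15K RPM SAS drive (illustrative only)
DISK_IOPS = 180          # random IOPS one 15K spindle can sustain
DISK_CAPACITY_GB = 600   # usable capacity per drive

# Hypothetical application requirements
target_iops = 10_000
target_gb = 10_000       # 10 TB

drives_for_capacity = math.ceil(target_gb / DISK_CAPACITY_GB)   # 17
drives_for_iops = math.ceil(target_iops / DISK_IOPS)            # 56

print(f"Drives needed for capacity: {drives_for_capacity}")
print(f"Drives needed for IOPS:     {drives_for_iops}")
# The spindle count is dictated by IOPS, not capacity: roughly 70%
# of the purchased capacity (56 x 600 GB = 33.6 TB vs. 10 TB needed)
# sits idle.
```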

Flash can solve the problem, but presents a cost barrier

Flash SSDs are ideally suited to address this issue. Flash delivers 50-100 times the I/O performance of the fastest disk drive, so a single flash drive can replace 50-100 high-RPM SAS hard disk drives. Using one or a few flash drives is indeed the right answer for applications that need extreme performance but only a few GBs of storage.
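
As a quick sanity check on that ratio, take the ~3 ms access time from the table above and an assumed random-IOPS figure for a single SSD of that era:

```python
# Rough check of the 50-100x claim; the SSD figure is an assumption.
disk_access_ms = 3.0                  # from the table above
disk_iops = 1000 / disk_access_ms     # ~333 random IOPS per fast disk
ssd_iops = 25_000                     # assumed random IOPS for one SSD

print(f"One SSD ~= {ssd_iops / disk_iops:.0f} fast disk drives")  # ~75
```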

However, mainstream applications such as Exchange, SQL databases running business applications, virtual servers, and virtual desktops require adequate performance as well as a significant amount of storage capacity. For such applications, at roughly 25 times the $/GB of multi-TB drives, storage solutions that rely solely on flash over-deliver on IOPS yet become far too expensive once terabytes of capacity are involved.
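
A quick illustration of those capacity economics; the prices are placeholders, with only the 25x premium taken from the discussion above:

```python
# Illustrative prices only; the 25x flash premium is from the text.
disk_cost_per_gb = 0.10                       # assumed multi-TB SATA price
flash_cost_per_gb = 25 * disk_cost_per_gb     # $2.50/GB

capacity_gb = 20 * 1000                       # a 20 TB environment
print(f"All-disk capacity cost:  ${capacity_gb * disk_cost_per_gb:>9,.0f}")   # $2,000
print(f"All-flash capacity cost: ${capacity_gb * flash_cost_per_gb:>9,.0f}")  # $50,000
```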

A blended model yields the best outcome, but efficient blending is key!

If using multi-TB drives alone will not yield adequate performance, and using flash SSDs alone is too expensive for mainstream Enterprise applications, how then should we go about addressing the need for adequate performance AND cost-effective capacity?

Given that flash SSDs deliver the best performance and low-cost, high-density multi-TB drives deliver the best cost of capacity, the ideal system would be one that can flexibly blend the right proportion of each to optimize cost of performance and cost of capacity. Industry participants have recognized this and have introduced tiering solutions as a way to mix SSDs and disk drives.
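
To see why a blend wins, consider a toy sizing model where flash absorbs the random IOPS and low-cost disk holds the bulk capacity. Every per-drive figure below is a hypothetical assumption (kept roughly consistent with the 25x $/GB premium above), and the blend presumes the active working set fits in flash:

```python
import math

# Hypothetical per-drive specs (illustrative, ~25x $/GB premium for flash)
SSD = dict(iops=25_000, gb=200,   cost=400)    # $2.00/GB
HDD = dict(iops=180,    gb=2_000, cost=160)    # $0.08/GB

target_iops, target_gb = 50_000, 40_000        # example workload

def single_tier_cost(drive):
    # A single-tier design must satisfy whichever requirement is harder.
    n = max(math.ceil(target_iops / drive["iops"]),
            math.ceil(target_gb / drive["gb"]))
    return n * drive["cost"]

# Blend: flash sized for IOPS, disk sized for capacity
# (assumes the flash tier catches virtually all random I/O).
blend_cost = (math.ceil(target_iops / SSD["iops"]) * SSD["cost"] +
              math.ceil(target_gb / HDD["gb"]) * HDD["cost"])

print(f"All-SSD: ${single_tier_cost(SSD):,}")  # $80,000 (200 SSDs for capacity)
print(f"All-HDD: ${single_tier_cost(HDD):,}")  # $44,480 (278 spindles for IOPS)
print(f"Blended: ${blend_cost:,}")             # $4,000  (2 SSDs + 20 HDDs)
```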

What Nimble has been able to do far more effectively, by designing a file system from the ground up to optimally leverage SSDs and low-cost disk drives, is deliver the best blended system: our use of flash SSDs is over 10 times more cost-effective than competing approaches and delivers high performance, while at the same time we use low-cost multi-TB drives with inline compression to optimize cost/GB. I will discuss the uniqueness of our “blending” approach in an upcoming blog post.

Implications for customers: Evaluate solutions on $/GB AND on $/IOPS

Many RFPs and evaluation processes focus solely on $/GB. The risk with this approach is that the decision to upgrade or augment your storage system may be forced on you because you have run out of performance headroom, even though you have plenty of capacity headroom. Customers should therefore weigh $/IOPS just as carefully when comparing alternatives.
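
Concretely, that means computing both metrics side by side for every bid. A minimal sketch, with placeholder quotes standing in for real vendor pricing:

```python
# Placeholder vendor quotes for illustration
quotes = {
    "Array A": dict(price=100_000, gb=50_000, iops=20_000),
    "Array B": dict(price=120_000, gb=40_000, iops=60_000),
}
for name, q in quotes.items():
    print(f"{name}: ${q['price'] / q['gb']:.2f}/GB, "
          f"${q['price'] / q['iops']:.2f}/IOPS")
# Array A: $2.00/GB, $5.00/IOPS
# Array B: $3.00/GB, $2.00/IOPS
# Judged on $/GB alone, A wins; judged on both, B delivers each IOPS
# at less than half A's cost and may avoid a forced early upgrade.
```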
