Case Study: Vision Critical

For Vision Critical, High-Performance, Capacity-Optimized Storage Is the Key to Managing Software Development

Business Intelligence application provider combines primary storage compression and efficient snapshots to manage the hundreds of code iterations it produces daily.

For Vision Critical, producing multiple iterations of software code for hundreds of business intelligence applications is all in a day's work. Vision Critical serves clients as diverse as Kodak, JetBlue, NASCAR, and Marriott Corporation with a rapidly growing list of service offerings and multiple versions of its BI applications. As a result, the company's software group develops, revises, and tests scores of code builds across hundreds of VMware instances each day, and its performance is only as good as its ability to manage an extensive code base. Data under management by the company is growing by approximately 15 terabytes, or 50 percent, per year.

Today, software developers at Vision Critical are more productive than ever, thanks in part to a storage architecture that delivers both high performance and efficient use of capacity for primary and backup data. Vision Critical recently adopted a Nimble Storage CS220 array, which converges primary and backup storage into one solution. Developers rely on this platform to manage, store, and protect the terabytes of data that make up hundreds of build iterations.

Top Priority: Efficient, High-Performance Storage for a Demanding Development Environment

"Where Nimble stands out, and why I made the investment in Nimble, was its ability to provide high-performance primary storage with realtime capacity optimization for our virtualized, mission-critical development environment," said Diraj Goel, Director of Information Technology at Vision Critical. "Nimble's approach is the first I've seen that manages data intelligently by leveraging high-performance SSD (solid state drives) in concert with high-capacity hard drives."

Today, says Goel, Vision Critical serves multiple departmental groups with a single Nimble CS220. "For example, we can serve our software environment and our QA lab 'off the same pipe.' Storage is no longer the bottleneck that it used to be. With other systems, you have to schedule backups and make sure you don't ever fall into a 'lag timeframe' between the time you dump the data on the appliance and the time you actually dedupe it. But with Nimble, you set it and forget it."

The Nimble CS220 has delivered peak performance of over 20,000 IOPS, allowing Vision Critical to run more simultaneous development and test jobs and to complete those jobs in less than half the time they would otherwise require. And real-time compression has delivered primary capacity savings of over 50 percent, helping the company use its storage capacity more effectively while keeping the IT budget in check by postponing new capital expenditures for storage.

Goel points out another differentiator in Nimble's approach: All primary and backup data resides in the same array, reducing management complexity significantly. When combined with efficient replicas on another Nimble array for disaster recovery, the result is a powerful combination of robust data availability and simple management.

Foundation for Efficient Offsite Replication

An effective storage and backup solution further benefits Vision Critical where replication must serve multiple sites, or where application dependencies call for application-level deduplication. "With Nimble, it's an easy play, not a rip-and-replace," says Goel. "If I were looking to expand our legacy storage platform, I would have to purchase a significant amount of new storage just to replicate my data. In addition, I would expect to incur significant overhead or a performance penalty from compression or dedupe."

With Nimble, he says, he can introduce replication quickly and cost-effectively. "I appreciate that Nimble can serve as a multi-utility storage appliance: as primary storage for replicated sites, for DR environments, and for our current test/dev environment." Goel adds that his team leverages Nimble's performance profiles, which allow them to tune each storage volume for its particular use, whether Microsoft Exchange, VMware, or other applications. This practice also helps establish repeatable processes and reduces human error.

"What we found was that no matter how many times we tried to stress the appliance to force a crash, it held its own and performed. In a sense, Nimble is in the perfect sweet spot in terms of providing the storage that I need. I don't have to revisit or rethink my entire storage strategy."

Zero-Copy Cloning

In test development at Vision Critical, developers frequently create a set of virtual machines for which new software iterations need to be available. In these cases, Goel's immediate need is to clone a range of software instances and bring them online quickly. That's where Nimble's zero-copy cloning comes into play.

Cloning has traditionally involved copying blocks of data to create a new volume. On some systems, cloning results in lengthy copy times, additional overhead, and the need for significantly greater storage capacity. With Nimble, the new clone shares data with its parent volume, so no additional storage space is required when the clone is created. If a programmer then writes to the clone and it begins to diverge from its parent, Nimble stores only the changed data for the clone, leaving the parent volume untouched.
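To make the idea concrete, the following is a minimal Python sketch of copy-on-write cloning in general; the class and method names are hypothetical and do not represent Nimble's actual implementation. A clone is just a reference to its parent plus a small map of divergent blocks:

# Conceptual sketch of copy-on-write cloning (illustration only;
# names are hypothetical, not Nimble APIs).

class Volume:
    """A parent volume: a mapping of block numbers to block data."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)

    def read(self, block_no):
        return self.blocks.get(block_no)


class ZeroCopyClone:
    """A clone that shares every block with its parent until written to."""
    def __init__(self, parent):
        self.parent = parent   # nothing is copied when the clone is created
        self.delta = {}        # only blocks that diverge are stored here

    def read(self, block_no):
        # Serve the clone's own changes first; otherwise use the shared parent block.
        if block_no in self.delta:
            return self.delta[block_no]
        return self.parent.read(block_no)

    def write(self, block_no, data):
        # A write stores only the changed block; the parent stays untouched.
        self.delta[block_no] = data


# Cloning is effectively instant and consumes no additional block storage
# until the clone begins to diverge from its parent.
golden_image = Volume({0: b"boot", 1: b"app build 101"})
test_vm = ZeroCopyClone(golden_image)
test_vm.write(1, b"app build 102")          # only the changed block is stored
assert golden_image.read(1) == b"app build 101"
assert test_vm.read(1) == b"app build 102"
assert test_vm.read(0) == b"boot"           # still served from the parent

The key property is that creating the clone touches no data blocks at all; capacity is consumed only as the clone diverges from its parent.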

Vision Critical uses zero-copy cloning to eliminate the need for copying huge volumes of VMware data, and Goel notes the added benefit of reduced capacity demand. "When multi-tier applications and multiple systems are part of a build—and, for example, when 100 gigabytes of data or more is involved—a basic limitation is how quickly your storage engine can move or copy that 100 GB of data. With zero-copy cloning on the Nimble system, we use the storage layer to do a 'clone drop' and expose that environment in seconds. You turn around and it's done. Nimble's zero-copy clones have saved us over 80 percent of our test development storage capacity."

Snapshots Like Clockwork

Goel further appreciates the Nimble CS220's implementation of snapshot technology. "The ability to take multiple snapshots, hourly throughout the day, every day, just in case a developer nukes a block of code or a data set becomes corrupted, is absolutely critical," he says. "In such a case, we can look back to the snapshot from the previous hour, as we did recently with a corrupted VMware file, and the developer is up and running again in 15 minutes."

But Goel demands more. While an effective snapshot regimen is mandatory, the key differentiator in Nimble's favor is the efficiency of its snapshots. "What that means for me is that I can take one snapshot a day or 100 snapshots a day; it doesn't matter. Regardless of the number, my storage consumption for those snapshots is going to be virtually the same. The clincher is that I can provide almost continuous data availability to my developers."

In Vision Critical's continuous integration environment, where code builds can occur every 15 to 30 minutes, Goel and his team now have the flexibility to revert to code instances from any point in the day, or from several days earlier, without incurring significant storage costs. "When you're developing scores of iterations each day, in an environment that requires regression testing and an understanding of how incremental changes in the code can introduce bugs or issues in the product, that's huge for me," says Goel. "Even more important is being able to revert to a previous point in the product's development without having to reconstruct a new build. I'll just say that the Nimble CS220 is more extensible than I had imagined a storage platform could be."
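The economics Goel describes follow from snapshots that record block pointers rather than block copies. The following Python sketch is a generic illustration with hypothetical names, not Nimble's on-disk format; it shows why taking many snapshots costs very little and why reverting to an earlier point in time is nearly instantaneous:

# Conceptual sketch of metadata-only snapshots (illustration only;
# names are hypothetical and do not reflect Nimble's implementation).

class SnapshottingVolume:
    """A volume whose snapshots record block pointers, not block copies."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # live block map: block number -> data
        self.snapshots = {}          # snapshot name -> frozen block map

    def write(self, block_no, data):
        self.blocks[block_no] = data

    def snapshot(self, name):
        # Taking a snapshot copies only the pointer map, so 1 snapshot or
        # 100 snapshots consume roughly the same small amount of space.
        self.snapshots[name] = dict(self.blocks)

    def revert(self, name):
        # Reverting restores the frozen pointer map from that point in time.
        self.blocks = dict(self.snapshots[name])


vol = SnapshottingVolume({0: b"build 101"})
vol.snapshot("09:00")                    # hourly snapshot, near-zero cost
vol.write(0, b"build 102 (corrupted)")
vol.revert("09:00")                      # developer is back on build 101
assert vol.blocks[0] == b"build 101"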