NimbleStorage Blog » Technology
http://www.nimblestorage.com/blog
Accelerate Apps, Store and Protect More Data, and Empower IT with Flash-Optimized Nimble Storage

Welcome to the (Multi-Core) CPU Era
Wed, 04 Sep 2013 | Ajay Singh, Vice President, Product Management
http://www.nimblestorage.com/blog/technology/welcome-to-the-multi-core-cpu-era/

Hello Multi-Core CPUs (Finally)

Battling the laws of physics and Moore’s “law”, chip manufacturers like Intel and AMD have over the past decade shifted their focus from speeding up CPU clocks to increasing CPU core density. This transition has been painful for storage industry incumbents, whose software stacks had been developed in the 1990s to be “single threaded”, and therefore have needed years of painstaking effort to adapt to multi-core CPUs. Architectures developed after about 2005 have typically been multithreaded from the beginning, to take advantage of growing core densities.
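The distinction is easy to picture with a small sketch (illustrative Python only, not code from any storage stack): a single-threaded design does all its work on one core no matter how many are present, while a multithreaded design spreads the same work across a worker pool and produces identical results with far better core utilization.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

# 64 data blocks to checksum, standing in for storage work items.
DATA = [bytes([i % 256]) * 4096 for i in range(64)]

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

# Single-threaded: one core does everything; extra cores sit idle.
serial = [checksum(b) for b in DATA]

# Multithreaded: the same blocks are spread across a pool, so added
# cores translate into added throughput. (In CPython this works here
# because hashlib releases the GIL while hashing large buffers.)
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(checksum, DATA))

assert serial == parallel  # identical results, different core usage
```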

One of the more interesting aspects of recent vendor announcements is just how long it has taken storage industry behemoths to upgrade their products to accommodate multi-core CPUs – judging by all the hoopla, these are big engineering feats. Even a fully multi-threaded architecture like Nimble Storage’s CASL (Cache Accelerated Sequential Layout) needs software optimization whenever there is a big jump in CPU core density to take full advantage of the added horsepower. The difference is that for Nimble these are maintenance releases (not announcement-worthy major or minor releases). We routinely deliver new hardware with the needed resiliency levels, and then add a series of software optimizations over subsequent maintenance releases to squeeze out significant performance gains.

Here’s an example of what’s been accomplished in the 1.4.x release train over the last several months. Nimble customers have particularly appreciated that these performance improvements were made available as non-disruptive firmware upgrades (no downtime), and did not require any additional disk shelves or SSDs (more on how we do this below).

Beyond Multi-Threading

Multi-core readiness is nice, but there is an even more fundamental distinction between architectures. Using CPU cores more efficiently in a “spindle bound” architecture still leaves it a spindle bound architecture. In other words, if your performance was directly proportional to the number of high RPM disks (or expensive SSDs), improved CPU utilization and more CPU cores may raise the limits of the system, but will still leave you needing the exact same number of high RPM disks or SSDs to achieve the same performance. So meeting high random IO performance requirements still takes ungodly amounts of racks full of disk shelves, even with a smattering of SSDs.
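The spindle-bound arithmetic is easy to make concrete (the per-drive figure below is a common rule of thumb for 15K RPM drives, not a number from this post): when performance scales only with spindle count, the drives needed for a random-I/O target are simply the target divided by per-drive IOPS, regardless of controller CPU.

```python
import math

def spindles_needed(target_iops: int, iops_per_drive: int = 180) -> int:
    """Drives required when performance scales only with spindle count."""
    return math.ceil(target_iops / iops_per_drive)

# A 100K random-IOPS requirement at ~180 IOPS per 15K RPM drive:
print(spindles_needed(100_000))  # → 556 drives, i.e. racks of shelves
```

More CPU cores may raise the controller's ceiling, but in this model the drive count for a given IOPS target never changes.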

The Nimble Storage architecture actually does something very different – it takes advantage of the plentiful CPU cores (and flash) to transform storage performance (IOPS) from a “spindle count” problem into a CPU core problem. This allows us to deliver extremely high performance levels with very few low-RPM disks and commodity flash, achieving big gains in price/performance and density to boot (in many cases needing 1/10th the hardware of competing architectures).

Similarly, the availability of more CPU cores does little to fix other fundamental limitations in some older architectures, such as compression algorithms that are constrained by disk IO performance, or heavy-duty snapshots that carry big performance (and in some cases capacity) penalties.

Thinking Outside the (Commodity) Box

It’s always good to raise the performance limits of the storage controllers, because this allows you to deliver more performance from a single storage node (improving manageability for large scale IT environments). Here again though, many (though not all) of the older architectures have a fundamental limitation – they can only use a limited number of CPU cores in a system if they want to leverage commodity hardware designs (e.g. standard Intel motherboards). A solution to this problem has been known for a long time – a horizontal clustering (scale-out) architecture. This approach allows a single workload to span the CPU resources of multiple commodity hardware enclosures, while maintaining the simplicity of managing a single storage entity or system. In the long run, this is the capability that allows an architecture to scale performance cost-efficiently without expensive, custom-built refrigerator-like hardware. This can also allow multiple generations of hardware to co-exist in one pool, with seamless load balancing and non-disruptive data migration, thus eliminating the horrendous “forklift” upgrades one of these recent vendor announcements promises to put their customers through.

So, while congratulations are in order to the engineering teams at our industry peers who worked so hard on making aging architectures multi-core ready, this is just a small step forward while the industry has moved ahead by leaps and bounds.

Cisco and Nimble Storage – Proven, Efficient, Scalable VDI
Wed, 15 May 2013 | Radhika Krishnan, Head of Solutions and Alliances
http://www.nimblestorage.com/blog/technology/leveraging-cisco-and-nimble-for-simplified-scalable-vdi/

Most customers looking to deploy Virtual Desktop Infrastructure (VDI) recognize the need for storage performance up front. But the infrastructure choices become more challenging when you also take into account needs around storage optimization, scaling, and simplicity.

What makes it even harder is the need for IT organizations to select various infrastructure components such as servers, storage, hypervisors, and VDI software (aka brokers) to construct a complete solution that truly works and adapts to growing needs – all without breaking the bank.

Cisco’s architecture for desktop virtualization leveraging Nimble Storage hybrid arrays.

Validated blueprints such as the Nimble Storage SmartStack, leveraging Nimble Storage hybrid arrays and Cisco UCS with VMware View and Citrix XenDesktop, eliminate the guesswork in creating a comprehensive solution for VDI. Customers across various verticals including financial services, federal, healthcare, education, legal, and retail have deployed VDI solutions based on this validated reference architecture.

The combination of Cisco UCS and Nimble with VMware View and Citrix XenDesktop wins with customers because:

  • Dramatically Higher Performance and Price/Performance: By far the biggest reason customers select the combination of Nimble and Cisco UCS is its excellent performance and leading price/performance. To accommodate VDI’s bursty I/O, many storage vendors leverage flash—but while this works really well to accelerate reads, write performance continues to be constrained. Nimble’s CASL file system delivers the superior write performance that VDI needs without throwing expensive flash or spindles into the mix. This, combined with Cisco’s superior blade architecture with a large memory footprint, makes for winning cost economics.
  • Ease of Scaling: Most VDI deployments, big or small, inevitably grow due to continued adoption of desktop virtualization and the explosion in end clients, including laptops and mobile devices. With both Cisco UCS and the Nimble Storage solution, scaling is very simple. With Nimble scale-to-fit, customers can add more flash, upgrade their controllers, or add capacity, all non-disruptively. This complements Cisco’s simplified and scalable architectures for desktop virtualization, leveraging the value of UCS to easily add servers and scale to thousands of virtual desktops.
  • Operational Simplicity: There are many tasks in the lifecycle of a virtual desktop environment, ranging from provisioning and monitoring to protecting the desktops. Just as important is the ability to keep the underlying infrastructure running at all times. Protection is simplified with Nimble’s instant snapshots. Most of all, Nimble’s proactive support capabilities help customers keep their systems healthy and running: in one case, the array detected a power outage right away and alerted the customer, even though the outage itself was unrelated to Nimble.

Here are a few joint Nimble Storage and Cisco UCS customers that have benefited from a high performance and efficient VDI solution:

University of Colorado at Boulder’s Housing and Dining Services has scaled its Cisco UCS, Nimble Storage, and VMware View based VDI implementation to now cover hundreds of staff (including mobile users), after starting small and adding infrastructure as it grew. This has helped the department meet its users’ needs, cut costs, and advance its environmental sustainability goals.

Oaks Christian School (OCS) in Southern California refreshed the VDI infrastructure for its Citrix environment with Cisco UCS and Nimble Storage. Its simplified experience began with the Nimble Storage purchase process, where all software features and functionality are included with the base system. OCS leverages compression in its provisioning, yielding significant capacity savings, and takes advantage of the scalable architecture of UCS and Nimble Storage to support its administrators, teachers, and students, including a growing online school presence.

Cisco recently conducted a webcast around their desktop virtualization solutions, including the simplified and scalable architectures leveraging Nimble Storage hybrid arrays for storing and serving virtual desktops and end-user data. You can view the Cisco webcast here: “Customer Insights: Desktop Virtualization On Your Terms”.

You can learn more about Cisco Desktop Virtualization solutions by clicking on the links below:

Cisco Blog: Jim McHugh
Desktop Virtualization On Your Terms – Flexibility and Choice with Architectures That Fit

Cisco Blog: Rick Snyder
Accelerating Your Success with Cisco Desktop Virtualization Solutions

You can also learn more about the proven Nimble Storage SmartStack solutions for VDI with Cisco by visiting the solution portal pages for Citrix XenDesktop and VMware View.

Clearing Bottlenecks with InfoSight: 1-Million Oracle TPM and Counting
Mon, 29 Apr 2013 | Larry Lancaster, Chief Data Scientist
http://www.nimblestorage.com/blog/technology/clearing-bottlenecks-with-infosight-1-million-oracle-tpm-and-counting/

Our in-house Oracle guru, Tom Dau, has achieved over 1M TPM (one million transactions per minute) using the Hammerora TPC-C workload against a single Nimble Storage CS440G … and he’s just getting warmed up.

One cool thing about this result is that our cost-effective, high-performance hybrid was used for both redo logs and data. This allows DBAs to snapshot and replicate consistent database images for DR (disaster recovery) and rollback.

Here’s Tom, demonstrating mind-boggling techniques for DBAs everywhere.

For this test run, Tom used a single two-socket, sixteen-core blade server and a single CS440G over two 10GigE links, using DM multipath for Oracle ASM. He’s already lined up a bunch more configs to test, and the array still has plenty of headroom to play with. So we’ll see what Tom comes up with in the coming weeks.

Tom’s a big fan of real-world benchmarks – that’s why he insisted on maintaining great latencies (3ms log file sync through AWR) and following Nimble best practices during these test runs. As a result, we feel comfortable Oracle DBAs can trust Nimble to deliver amazing performance in their production environments.

We used some cool new InfoSight-based performance optimization tools to help us identify and clear out a few bottlenecks. These tools will be available directly to customers in an InfoSight release due out later in the year; in the meantime, we are already using them in support to help our customers get the most from their Nimble gear.

We will continue to update this blog with more details and results from the Oracle side, as well as more information on what we’re cooking up for performance junkies in upcoming versions of InfoSight.

Until then, here’s some Hammerora screen candy from the middle of one of the test runs, courtesy of Tom – the 15-minute average for the full test run was ~1.03M TPM.


Hammerora Settings:

  • No think & key time
  • 200,000 transactions per virtual user
  • 105 virtual users
  • 15-minute test run with 2 min ramp up time
  • 100 warehouses

Results:

Vuser 1:Rampup 1 minutes complete …
Vuser 1:Rampup 2 minutes complete …
Vuser 1:Rampup complete, Taking start AWR snapshot.
Vuser 1:Start Snapshot 83 taken at 27 APR 2013 13:56 of instance PERFASMDB (1) of database PERFASMD (1272977337)
Vuser 1:Timing test period of 15 in minutes
Vuser 1:1 …,
Vuser 1:2 …,
Vuser 1:3 …,
Vuser 1:4 …,
Vuser 1:5 …,
Vuser 1:6 …,
Vuser 1:7 …,
Vuser 1:8 …,
Vuser 1:9 …,
Vuser 1:10 …,
Vuser 1:11 …,
Vuser 1:12 …,
Vuser 1:13 …,
Vuser 1:14 …,
Vuser 1:15 …,
Vuser 1:Test complete, Taking end AWR snapshot.
Vuser 1:End Snapshot 84 taken at 27 APR 2013 14:11 of instance PERFASMDB (1) of database PERFASMD (1272977337)
Vuser 1:Test complete: view report from SNAPID 83 to 84
Vuser 1:105 Virtual Users configured
Vuser 1:TEST RESULT : System achieved 1030899 Oracle TPM at 343103 NOPM
Vuser 1:Checkpoint
Vuser 1:Checkpoint Complete
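As a quick sanity check on the reported figures (assuming the reported TPM is the average over the 15-minute timed window, which is how Hammerora reports it):

```python
# Reported: 1,030,899 Oracle TPM sustained over a 15-minute timed run.
tpm = 1_030_899
minutes = 15
total_txns = tpm * minutes
print(f"{total_txns:,} transactions in {minutes} minutes")

# The workload was capped at 105 virtual users x 200,000 transactions
# each, so the run stayed well inside its configured transaction budget:
budget = 105 * 200_000
assert total_txns < budget  # ~15.46M completed vs. a 21M cap
```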

This result shows that our purpose-built hybrid systems can perform as well as or better than flash-only solutions at a small fraction of the price. With high performance and high capacity together, you really can have your cake and eat it too!

Making Scalable Midsized VDI Deployments a Simple, Affordable Reality
Tue, 26 Feb 2013 | Sachin Chheda, Product Marketing
http://www.nimblestorage.com/blog/technology/making-scalable-midsized-vdi-deployments-a-simple-affordable-reality/

This is a joint blog post with Cisco, Nimble Storage, and VMware.

The proliferation of new applications and devices in the workplace has added complexity to IT environments of all sizes. Companies are being forced to re-evaluate how they manage their end-users, secure data and enable workplace mobility. This has led many organizations to turn to Virtual Desktop Infrastructure (VDI) as a solution.

This year alone, more than half of midsized organizations will turn to virtual desktops to streamline desktop management, ensure greater security and compliance, and provide anytime/anywhere access to their users (source: ESG 2012). The 2012 Morgan Stanley CIO Survey found the percentage of virtual desktops was poised to nearly double in a year’s time. If it is not already in production, desktop virtualization is bound to be one of the projects on IT’s list for 2013 – including for resource-constrained midsized IT organizations.

Successfully tackling desktop virtualization requires understanding the roadblocks that might otherwise keep a VDI project from succeeding. An informal survey of IT personnel and value added resellers (VARs) identified three main areas:

  • Understanding the costs of deploying VDI – comprehending how capital expenditures (CapEx) shift from the end-users’ desktops to the data center is key. This includes the upfront implementation costs, as well as the opportunities for management efficiency and savings that come from selecting the right server, storage infrastructure, and software for desktop virtualization.
  • Assuming end-user needs—researching and implementing to the exact requirements of end-users, rather than assuming them, is essential to success. These requirements include not only the performance and capacity needs of different end-users, but also the locations from which they access their desktops and applications.
  • Planning and infrastructure—choosing the right scalable server, networking, and storage infrastructure, along with the right desktop virtualization software, is as important to VDI deployments as proper planning. Leveraging a pretested and validated reference architecture built from efficient, easy-to-manage, and scalable components is an easy way to tackle this challenge.

Since most VDI deployments start small and scale as usage grows, Cisco has put together a SmartPlay bundle for VDI that allows IT shops to get started. This “SmartPlay with VMware Horizon View” is built on the UCS B200 M3 blades, which offer excellent VM density, allowing for greater consolidation in a smaller footprint. The bundle provides the compute infrastructure for a 300-VDI user deployment. This can scale to thousands of users over multiple blade chassis using the scalable UCS (Unified Computing Systems) management framework.

Also included in the Cisco UCS SmartPlay bundle is VMware Horizon View. View has long been a standard VDI platform for midsized and large deployments thanks to its rich functionality, efficiency using linked clones, performance, and, most importantly, simplicity in deployment and day-to-day management. View also simplifies infrastructure needs when dealing with events like boot storms, using View Storage Accelerator.

VMware announced View 5.2 at VMware Partner Exchange this week. The new version of View offers a variety of enhancements in efficiency and performance. Running on vSphere 5.1, Horizon View 5.2 uses Space-Efficient Sparse Virtual Disks, benefiting from space reclamation and higher performance. You can read more about this here.

Storage is a key component of the overall VDI stack. Choosing the right storage solution can often mean the difference between successful deployments and very unhappy end-users. Storage needs to not only deliver the performance and capacity VDI needs, but also do so efficiently.

The Nimble Storage CS200 Series is the ideal complement to the Cisco UCS SmartPlay with VMware Horizon View for 300 users and up. Built on the flash-optimized, hybrid CASL architecture, the Nimble Storage CS-Series delivers the adaptive performance VDI needs – such as for read-intensive boot storms and write-intensive patching – in an efficient 3U form factor. Scaling is simple with the CS200 Series: storage controllers and flash can be non-disruptively upgraded to scale performance, and capacity can be non-disruptively scaled by adding storage shelves to the CS-Series (no manual configuration of RAID levels is needed).

To simplify the path to VDI, Nimble Storage has published a pre-tested and validated reference architecture leveraging the UCS B-Series Blade Servers and VMware Horizon View. The CS-Series used in this reference architecture needs a mere 3U of rack space for 1,000 users. This reference architecture, called Nimble Storage SmartStack for VDI with Cisco and VMware, is the foundation of the design for an end-to-end stack for 300 VDI users—using the UCS B-Series Blade Servers, Nimble Storage CS200 Series, and VMware Horizon View.

To make the journey to VDI smooth and easy, our joint value added resellers along with Nimble, VMware and Cisco are rolling out a series of VDI boot camps worldwide. These half-day workshops cover everything from building a business case and understanding the ROI, to assessing end-users needs and understanding the sizing process for servers and storage, to learning the best practices from successful VDI deployments. These value added resellers can deliver a cost effective end-to-end VDI solution using Nimble Storage CS200 Series and Cisco UCS SmartPlay with VMware Horizon View worldwide – making it easy to start small and scale as needed.

To summarize, tackling VDI requires carefully navigating the different roadblocks that may arise.

  • Cisco UCS SmartPlay with VMware Horizon View built using the UCS B200 M3 starts small and can scale as an organization’s VDI needs grow.
  • Nimble Storage CS200 Series is an ideal complement to the UCS SmartPlay, offering storage for hundreds of users.
  • To help tackle the challenges around planning, our joint VARs are conducting a series of VDI boot camps and are leveraging the Nimble Storage SmartStack reference architecture for VDI with Cisco and VMware [video].

Start smart with your VDI projects by checking out details about the UCS SmartPlay with VMware Horizon View here and read more about the Nimble Storage SmartStack for VDI with Cisco and VMware at http://bit.ly/YbOf77.

London Blog: Show Report – Cisco Live London 2013
Tue, 12 Feb 2013 | Michael McLaughlin, Technical Marketing
http://www.nimblestorage.com/blog/technology/show-report-cisco-live-london-2013/

As the leading provider of flash-optimized hybrid data storage systems, Nimble Storage is a frequent participant at major industry events, such as last week’s Cisco Live show in London, England (http://www.ciscolive.com/london/).

We were especially pleased to see the turnout for Charlie Whitfield’s talk on “Successful VDI Using Pre-validated Reference Architectures”. It’s clear that this is a hot topic for many IT managers and storage admins, which is no surprise given the way VDI enables them to provide their users with improved performance and capabilities without increasing their costs.

You can learn more about the reference architecture from the document here: http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns836/ns978/guide_c07-719522.pdf

This document describes the reference architecture for desktop virtualization with storage for up to 1,000 virtual desktops based on VMware View 5.1 and VMware vSphere 5, built on Cisco UCS B-Series Blade Servers and the Nimble Storage CS-Series storage appliance.

In addition to providing demonstrations of the latest Nimble Storage hybrid arrays to existing customers, prospects, and resellers, I had a first-hand opportunity to learn about a number of interesting developments and trends. One big shift was the discussion about the Internet of Things, which is critical to many of the technology solutions we are coming to expect and depend on today. Cisco is definitely in the middle of this domain and there were many interesting sessions on these topics.

From a storage perspective, it’s now accepted that solid state flash technology is a fundamental requirement to meet the performance needs of the applications that will support the infrastructures behind this connect-anywhere market. VDI (Virtual Desktop Infrastructure) and BYOD (Bring Your Own Device) are still key drivers for the users as well. This seems to be in almost every conversation somewhere.

As I stated in an earlier post, I was really interested in the UCS of things too. While we have been mainly focused on the B-series solutions with Nimble Storage as the foundation, there is also a keen interest in the C-series rack systems and the new E-series, ISR-based solutions. (More information can be found here: http://www.cisco.com/en/US/products/ps12629/index.html)

In a future post we’ll look at Nimble Storage connectivity for these other compute/network solutions, testing whether they’re also as simple and straightforward as the B-series has been for us in our environment.

From One Thousand to Fifty Billion Devices
Based on conversations with other Cisco Live London participants, a variety of issues are increasingly important to enterprise customers today:

  • Mobile access is becoming increasingly important, as evidenced by a statistic from Padmasree Warrior, Cisco’s chief technology officer: There were only about 1,000 devices connected to the Internet in 1984, a number that has now surpassed 50 billion! No wonder CIOs are busy updating their network infrastructure.
  • To support this trend, Cisco took the wraps off its Unified Access One Network strategy during the opening keynote of the show, highlighting enterprises’ need to manage complexity and cost. They also showed new high-speed LAN and wireless controllers.
  • Despite the emergence of the Open Compute project (supported by Facebook, Intel, Rackspace, and others), the vast majority of large companies will continue to rely on unified data centers powered by Cisco, VMware, Nimble Storage, and other tried-and-trusted vendors.
Cloud Service Providers: Opportunity or Threat?
Tue, 22 Jan 2013 | Suresh Vasudevan, CEO
http://www.nimblestorage.com/blog/technology/cloud-service-providers-opportunity-or-threat/

Over the last few months, I have had conversations with numerous financial analysts and industry analysts, and a question that comes up frequently is whether or not cloud service providers pose a threat to product manufacturers. They are surprised when I describe how deployments by cloud service providers are one of the fastest growing segments of our business. Not only are we acquiring dozens of customers in this segment, including Virtacore (see today’s press release), but some of them are also among our largest customers! I find myself offering this explanation fairly frequently. Hence this blog post.

Let us start by parsing cloud delivered solutions based on a couple of key dimensions:

  • What is being deployed in the cloud?
  • Who is providing the IT service from the cloud?

The Opportunity is Larger than the Threat

The image conjured up by the term “cloud” is that of companies like Amazon, Google, and Microsoft delivering large-scale computing infrastructure that eliminates the need for on-premises IT equipment and software. To the extent that Enterprises port their applications to leverage Amazon EC2 or the Google Cloud or Microsoft Azure, these Enterprises are indeed displacing on-premises infrastructure. Further, since these cloud providers build their own infrastructure rather than buying from product manufacturers, they are indeed shrinking the available market for product manufacturers.

However, there are other categories of service providers that, on balance, more than offset the reduced spend and create an opportunity.

  • Consumer SaaS providers aggregate consumer spend into Centralized Shared Enterprise Infrastructure. Consumer applications that traditionally consumed end-user equipment (PCs, PC applications, etc.) are moving to the cloud, triggering a need for Enterprise infrastructure. While some of these service providers build their own infrastructure (e.g., Gmail), others leverage product manufacturers. Example: Yahoo Mail
  • Enterprise SaaS providers aggregate SMB requirements into Centralized Shared Enterprise Infrastructure. A key customer segment that SaaS providers target is small businesses that would otherwise never have invested in Enterprise IT infrastructure. Consequently, SaaS companies end up aggregating SMB spend into Centralized Shared Enterprise Infrastructure, and while some SaaS companies may end up building it on their own, most are focused on their core areas of expertise, choosing instead to deploy best-of-breed infrastructure from product manufacturers. Examples: ServiceNow, Salesforce
  • Modern hosting companies aggregate SMB spend into Centralized Shared Enterprise Infrastructure. Traditional hosting companies thrived on renting real estate, power, and cooling. Now these same companies are evolving into renting hypervisors (and all the servers, networking, storage, and other infrastructure that comes with a hypervisor), renting virtual desktops, or renting DR infrastructure. Examples: Virtacore, Desktone

The key to leveraging the opportunity lies in recognizing that the requirements posed by such service providers are different.

Centralized Shared Infrastructure Requires Rethinking Traditional Approaches

While we believe that cloud delivered IT solutions and applications create an opportunity, we also believe that the requirements posed by cloud providers are different. Specifically, as it relates to storage infrastructure, some of the dimensions that matter more are as follows:

  1. Efficiency. For end customers, making IT efficient is important as a way to manage costs for the Enterprise. For service providers, on the other hand, efficient IT infrastructure is the very essence of their business success. Lowering the cost of infrastructure allows them to price their services more competitively to gain market share and improve profitability.
  2. Flexibility in Scaling. The storage industry has traditionally deployed incremental storage performance and storage capacity in big, “capital intensive” chunks. What is needed is an approach that allows service providers to scale performance and capacity in small, low-cost increments – thus making IT equipment costs more variable and aligned with revenues. Perhaps an even more critical requirement is to avoid fork-lift hardware upgrades and to be able to perform “rolling hardware upgrades” that are non-disruptive.
  3. Rapid Recoverability. For service providers that operate against stringent SLAs, the notion of nightly backups and backup windows is simply not good enough. What is needed is the ability to provide hundreds of recovery points and rapid recovery from any such point for applications.
  4. Dramatic Simplicity. Lowering the administrative cost of operating a large-scale data center with hundreds of tenants is key to profitability for service providers. To this end, most service providers rely on extensive in-house automation, which requires extensive APIs and deep integration with hypervisor vendors. Similarly, the traditional support model of responding to customer calls with a layered, hierarchical organization of level-1 engineers, level-2 engineers, and escalation engineers will not suffice. In a connected world, vendors will need sophisticated remote monitoring tools that recognize problems as they occur at customer sites, and remote diagnostic tools that allow for rapid problem diagnosis and resolution.
  5. Integrated Security. Gaining the trust of Enterprises through robust security becomes a foundation for persuading them to move their data and applications to a third party. This requires a much deeper investment in integrated security (as opposed to security as a separate layer) on the part of product manufacturers, even if they are not building security products.
  6. Multi-Tenancy. Going back to the theme of driving efficiency, the ability to share infrastructure across multiple users while simultaneously maintaining good quality of service drives superior economics.

We have seen a rapid increase in the number of deployments of our products by service providers. The key drivers of this have been our unparalleled efficiency in simultaneously lowering the cost of capacity and cost of performance, our ability to scale in low-cost increments and scale non-disruptively, our ability to deliver superior data recoverability, and our simplicity and remote support automation. I believe that we are just at the very early stages of leveraging the opportunity, and am excited about the partnerships we are building with our service provider customers.

O Snapshot: What Art Thou? http://www.nimblestorage.com/blog/technology/o-snapshot-what-art-thou/ http://www.nimblestorage.com/blog/technology/o-snapshot-what-art-thou/#comments Fri, 16 Nov 2012 19:39:51 +0000 Sachin Chheda - Product Marketing http://www.nimblestorage.com/blog/?p=5250 This is part one of a two-part blog on snapshots.

In the storage world, snapshots are a point-in-time copy of data. They have been around for some time and are increasingly being used by IT to protect stored data. This post recalls some of the popular Nimble Storage blog posts on snapshots. But before I jump into the list of blogs, let me quickly walk you through the different types of storage-based snapshots, including their attributes.

A snapshot copies the metadata (think index) of the data instead of copying the data itself. This means taking a snapshot is almost always instantaneous. This is one of the primary advantages of storage-based snapshots—they eliminate backup windows. In traditional backup deployments, applications either have to be taken off-line or suffer from degraded performance during backups (which is why traditional backups typically happen during off-peak hours).
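To make the metadata-copy idea concrete, here is a toy sketch in Python (all names are invented for illustration; this is not any vendor's actual code). Taking a snapshot duplicates only the block map, so its cost is proportional to the size of the index, not the data, which is why it is near-instantaneous:

```python
class Volume:
    """Toy volume: a block map points logical block numbers at data blocks."""
    def __init__(self):
        self.blocks = {}     # physical store: block_id -> data
        self.block_map = {}  # logical index: lbn -> block_id
        self.next_id = 0

    def write(self, lbn, data):
        # New data always lands in a fresh block (redirect-on-write style).
        self.blocks[self.next_id] = data
        self.block_map[lbn] = self.next_id
        self.next_id += 1

    def snapshot(self):
        # Copy only the index (metadata), never the data blocks, so the
        # cost does not depend on how much data the volume holds.
        return dict(self.block_map)

vol = Volume()
vol.write(0, "original")
snap = vol.snapshot()
vol.write(0, "modified")
print(vol.blocks[snap[0]])           # -> original (the snapshot's view)
print(vol.blocks[vol.block_map[0]])  # -> modified (the live volume's view)
```

Because the overwrite landed in a fresh block, the snapshot's map still resolves to the old data without any extra copying.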

This means snapshot-based backups can be taken more frequently, improving recovery point objectives. However, not all snapshot implementations are created equal; they pose different requirements and restrictions on their use (for example: reserve space required, how frequently snapshots can be taken, and how many snapshots can be retained).

In the ‘Copy-on-Write’ (COW) implementation, the address map related metadata is copied whenever a snapshot is taken. None of the actual data is copied at that time—resulting in an instant snapshot. In almost all implementations this copy is taken to a ‘pre-designated’ space on storage (aka a snapshot reserve). When the data is modified through writes, the original data is copied over to the reserve area. The snapshot’s metadata is then updated to point to the copied data. Because of this, ‘COW’ implementation requires two writes and a read when any of the original data is modified for the first time after a snapshot is taken—causing a performance hit when the original data is updated. This gets progressively worse with frequent snapshots. Vendors such as EMC, IBM, and HP have used COW implementations on their traditional storage.

The other major implementation of snapshots is ‘Redirect on Write’ (ROW). Like COW, only the metadata is copied when a snapshot is taken. Unlike COW, whenever original data is being modified after a snapshot, the write is redirected to a new free location on disk. This means ROW snapshots do not suffer the performance impact of COW snapshots as none of the original data is copied.
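The write-cost difference between the two schemes can be shown by simply tallying the IOs each performs the first time a block is overwritten after a snapshot (a hypothetical illustration in Python):

```python
def cow_first_overwrite():
    """IOs a Copy-on-Write snapshot incurs on the first overwrite."""
    return [
        ("read", "original block"),      # fetch the data to be preserved
        ("write", "copy into reserve"),  # preserve it for the snapshot
        ("write", "new data in place"),  # finally apply the update
    ]

def row_first_overwrite():
    """IOs a Redirect-on-Write snapshot incurs on the first overwrite."""
    # The snapshot keeps pointing at the old block; the new write is
    # simply redirected to free space.
    return [("write", "new data at a free location")]

print(len(cow_first_overwrite()))  # -> 3 (one read plus two writes)
print(len(row_first_overwrite()))  # -> 1
```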

Nimble snapshots are based on the ROW implementation as the write-optimized data layout in the CASL architecture always redirects writes to new free space. A lightweight, background sweeping process in CASL ensures the continued availability of free space and assures consistent performance, addressing a shortcoming of some older ROW implementations. This efficiency allows IT to think of snapshot + replication in a new light—store weeks/months of history versus mere days of backups with traditional, inefficient implementations.  This allows virtually all of the operational recoveries to come from snapshots and dramatically improves RTOs. (Umesh’s blog ‘A Comparison of File System Architectures’ linked below covers this in detail.)

Nimble Storage snapshots are stored (compressed) alongside the primary data on high-capacity disk drives. This allows thousands of snapshots to be taken and retained on the same system as the primary data. A measurement of our install base shows that over 50% of our customers retain their snapshots for over a month.

Beyond retention, a few other aspects of the architecture make these snapshots especially efficient. First is the support of universal inline compression for storing data. This ensures data takes up less space on disk, which, as discussed earlier, makes replication more efficient and allows more snapshots to be retained in a given storage space. Across Nimble’s install base, measured compression rates range from 30% to 75% for a variety of workloads.

Second is the support of cloning: a clone is a fully functional, read/writable copy of the original. Cloning is useful for VDI and test/development environments, where many copies (clones) of a master data set are needed. In ROW implementations, clones take up no additional space until they are modified.

Last but not least is the granularity of the snapshot. This determines how small a snapshot can be for a volume. This is relevant when the data volume being protected has a small rate of daily change. When the extent of a data write is smaller than the snapshot granularity, the snapshot wastes considerable space storing a duplicate copy of unchanged data. Snapshots in Nimble’s CASL architecture can be as granular as a single 4K block.
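A back-of-envelope calculation shows why granularity matters. The helper below (hypothetical, for illustration) rounds a change up to the snapshot granularity; an 8 KB update retained at a coarse 1 MB granularity consumes 128 times the space of the same update at 4 KB granularity:

```python
import math

def snapshot_space(change_bytes, granularity_bytes):
    """Space a snapshot must retain for a change, rounded up to granularity."""
    chunks = math.ceil(change_bytes / granularity_bytes)
    return chunks * granularity_bytes

change = 8 * 1024                      # an 8 KB record update
for gran in (4 * 1024, 1024 * 1024):   # 4 KB vs a coarse 1 MB granularity
    print(f"granularity {gran // 1024} KB -> "
          f"{snapshot_space(change, gran) // 1024} KB retained")
# granularity 4 KB -> 8 KB retained
# granularity 1024 KB -> 1024 KB retained
```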

Before going onto the blogs, I wanted to share that Nimble Storage (@NimbleStorage) and CommVault (@CommVault) recently did a joint Twitter Chat on the Nimble Storage integrations through CommVault IntelliSnap Connect Program. The chat featured experts from Nimble Storage (@wensteryu, @schoonycom & @scnmb… me) and CommVault (@gregorydwhite & @liemnguyen). Here is the edited transcript for your reading pleasure.

Blogs:

Leveraging Snapshots for Backup: An Expert View (http://www.nimblestorage.com/blog/technology/leveraging-snapshots-for-backup/): Radhika Krishnan interviews Jason Buffington (@JBuff), ESG’s Senior Analyst covering Data Protection. According to ESG’s research, 55% of IT shops are looking at snapshots to augment their backups.

Snapshots + Backup Management = the Best of Both Worlds (http://www.nimblestorage.com/blog/technology/snapshots-backup-management-the-best-of-both-worlds/): Another blog on the need to integrate storage systems’ native snapshot capabilities with backup management software that delivers rich data management functionality.

How Snappy and Skinny Are Your Snapshots? (http://www.nimblestorage.com/blog/technology/how-snappy-and-skinny-are-your-snapshots/): Umesh Maheshwari (our CTO) talks about the concepts of COW versus ROW and discusses the benefits of variable block support.

A Comparison of File System Architectures (http://www.nimblestorage.com/blog/technology/a-comparison-of-filesystem-architectures/): Another blog by Umesh. This one talks about the concept of keeping data optimized on disk, especially applicable if you want to know how storage should handle deleted snapshots. The comments at the bottom are worth reading.

Extended Snapshots and Replication As Backup (http://www.nimblestorage.com/blog/technology/2160/): Ajay Singh discusses using snapshots and replication for deploying Disaster Recovery. 

Can you have a backup system based solely on snapshots and replication? (http://www.backupcentral.com/mr-backup-blog-mainmenu-47/13-mr-backup-blog/299-snapshots-replication-backups.html/): A W. Curtis Preston special calling it as he sees it. 

The Nightmare of Incremental Backup is Over (http://www.nimblestorage.com/blog/technology/the-nightmare-of-incremental-backup-is-over/): Nicholas Schoonover discusses concepts of RPO and RTO with incremental backups.

Better Than Dedupe: Unduped! (http://www.nimblestorage.com/blog/technology/better-than-dedupe-unduped/): Umesh shows a mathematical comparison of total storage space between different types of storage, making the case for optimizing your entire storage environment. Be sure to skim through the comments at the bottom.

This is part one of a two-part series. In the second post, we’ll cover the concept of data integrity, discuss application integration, and review the different demos covering data protection and high availability.

We would love to hear from you. Follow us on Twitter (@NimbleStorage), send us a tweet (#nimblestorage #hybridstorage) or leave a note below.

Announcing the Nimble Storage SmartStack http://www.nimblestorage.com/blog/technology/smartstack-vdi-architecture-from-cisco-nimble-storage-and-vmware/ http://www.nimblestorage.com/blog/technology/smartstack-vdi-architecture-from-cisco-nimble-storage-and-vmware/#comments Thu, 18 Oct 2012 15:06:40 +0000 Suresh Vasudevan, CEO http://www.nimblestorage.com/blog/?p=5232

No man is an island, said the famous English poet John Donne, and no storage product is an island either. Over the last couple of years, there has been growing interest in pre-validated vertical stacks of application, server, hypervisor, networking and storage. Industry analysts like Wikibon have in fact predicted that over half of the entire infrastructure consumed would move towards pre-validated reference architectures, rather than as independently procured servers, storage and networking products.

Companies such as HP, Dell and IBM have responded to this by essentially repositioning their portfolios of existing products as converged infrastructure. I fundamentally do not believe that customers will be willing to sacrifice best-of-breed technologies and embrace inferior products just because they are packaged into integrated infrastructure offerings that come from a single vendor. There is a reason why EMC and NetApp have gained share in storage, and why Cisco has maintained a dominant share in networking.

Having said that, I have encountered numerous customers over the last few years who have felt palpable pain stemming from having to integrate best-of-breed servers, hypervisor, networking and storage. Solutions such as FlexPod and VCE have been more successful than single-vendor solutions because they are true reference architectures that address both problems: they deliver best-of-breed products while also relieving the customer of having to integrate products from multiple vendors.

Today, we are proud and excited to announce the Nimble SmartStack. The first SmartStack offering is targeted at VDI workloads, working closely with VMware and Cisco to produce an integrated solution that has been tested for interoperability, performance and fast deployment, and has been endorsed by all three companies.  Several months ago, we announced the certification of our storage platform as part of the VMware rapid desktop program.  Today, we are proud to be among a small number of storage platforms that are UCS certified.  There are three major benefits from the SmartStack solution:

  1. Economically-Viable, Converged Infrastructure Solution for Mid-Sized Enterprises. Nimble was able to bring dramatic economic benefits, through substantially higher performance, lower cost of capacity, dramatically better business continuity and operational simplicity, to mid-sized Enterprises: customers that were otherwise unable to deploy VDI projects because the cost of a pre-integrated converged infrastructure stack was prohibitively expensive.
  2. Best-of-Breed Flash-Optimized Storage, Available as Converged Infrastructure. The Nimble architecture is unparalleled in its ability to leverage flash for high performance along with low-cost HDDs for low cost of capacity. What further distinguishes our approach and makes us particularly well suited for workloads such as VDI is that our scale-to-fit architecture (see white paper ») allows us to be extremely flexible – we can increase the ratio of flash within our system when workloads such as VDI need higher performance or lower the ratio of flash when workloads such as file services need greater capacity – thus spanning an extremely broad range of workloads.
  3. Channel Leverage. Over the last two years, we have built a rapidly growing base of over 300 channel partners, most of whom were already Cisco partners and VMware partners. Our SmartStack approach allows us to leverage that synergy and rapidly bring the solution to market.

Our partnerships with the ecosystem vendors continue to flourish, which means more SmartStack solutions, fewer headaches, and more value for end customers. Stay tuned for more on this front.

Storage for Your Branch Office VDI http://www.nimblestorage.com/blog/technology/storage-for-your-branch-office-vdi/ http://www.nimblestorage.com/blog/technology/storage-for-your-branch-office-vdi/#comments Wed, 10 Oct 2012 16:02:46 +0000 Radhika Krishnan, Head of Solutions and Alliances http://www.nimblestorage.com/blog/?p=5207 “One size fits all” is not a tenet applicable for branch office deployments. Branch office deployments vary based on number of users, type of applications, available WAN bandwidth, and IT infrastructure.

A recent ESG survey shows that 92 percent of customers are interested in replacing the laptops/PCs of their remote office employees with virtual desktops running in a central location. VMware, working with partners such as Nimble Storage, provides a variety of approaches to tackle VDI with their Branch Office Desktop solution. One approach is a centrally-hosted desktop environment; the other is a locally-hosted desktop solution kept in sync using VMware Mirage. Either way, you want to make sure the storage you choose is optimized for your specific deployment model.

With the centralized desktop hosting model, the storage not only needs to be cost-effective, but also deliver high performance. This is where Nimble shines with optimizations for both read and write operations. In addition, built-in snapshot-based data protection allows you to dramatically improve business continuity.

In the case of the locally-hosted desktop environment, the primary storage factors in play are cost, ease of deployment and use, as well as supportability.  Nimble’s CS200 series offers cost-effective, dense performance for these environments in a 3U form factor. Systems come tuned out of the box with only a few easy steps to go from plug-in to operational.  Finally, proactive wellness features enable the relay of comprehensive system telemetry data to Nimble HQ that is analyzed in real-time to identify potential problems at the branch location. In case of a support incident, the system can be serviced remotely the majority of the time.

Additionally, in the locally-deployed VDI solution, Nimble Storage used for branch office desktops can serve as a replication target for the primary datacenter. That way critical data can be replicated and protected in case of a disaster at the primary site. Similarly, critical data such as local persona and user files can be replicated back to the primary datacenter.

To learn more about how Nimble and VMware are working together to simplify branch office deployments, please refer to the following resources:

Best Practices Guide for VDI »

Solution Brief »

Whiteboard Video »

Nimble Storage Solutions for VDI Environments »

Follow us on Twitter @NimbleStorage.

Are All Hybrid Storage Arrays Created Equal? http://www.nimblestorage.com/blog/technology/are-all-hybrid-storage-arrays-created-equal/ http://www.nimblestorage.com/blog/technology/are-all-hybrid-storage-arrays-created-equal/#comments Tue, 09 Oct 2012 18:30:13 +0000 Ajay Singh Vice President, Product Management http://www.nimblestorage.com/blog/?p=5194 Nimble Storage was founded in early 2008 on the premise that hybrid storage arrays would be the dominant networked storage architecture over the next decade – a premise that is now widely accepted. The interesting question today is, “Are all hybrid storage arrays created equal?” After all, SSDs and HDDs are commodities, so the only factor setting them apart is the effectiveness of the array software.

How does one compare hybrid storage arrays? Here are some key factors:

  1. How cost-effectively does the hybrid storage array use SSDs to minimize costs while maximizing performance?
  2. How cost-effectively does the hybrid storage array use HDDs to minimize costs while maximizing useable capacity?
  3. How responsive and flexible is the hybrid array at handling multiple workloads and workload changes?
  4. Aside from price/performance and price/capacity, how efficient is the array data management functionality (such as snapshots, clones, and replication)?

This blog will cover the first three. The fourth dimension of efficient data management is a very important factor in evaluating storage arrays, and a topic we’ll cover in detail in a future blog post.

How cost-effectively does the hybrid storage array use SSDs?

Most hybrid storage array architectures stage all writes to SSDs first in order to accelerate write performance, allowing data that is deemed less “hot” to be moved to HDDs at a later point. However as explained below, this is an expensive approach. Nimble storage arrays employ a unique architecture in that only data that is deemed to be cache-worthy for subsequent read access is written to SSDs, while all data is written to low-cost HDDs. Nimble’s unique architecture achieves very high write performance despite writing all data to HDDs by converting random write IOs issued by applications into sequential IOs on the fly, leveraging the fact that HDDs are very good at handling sequential IO.

  1. Write endurance issues demand the use of expensive SSDs.  When SSDs receive random writes directly, the actual write activity within the physical SSD itself is higher than the number of logical writes issued to the SSD (a phenomenon called write amplification). This eats into the SSD lifespan, i.e. the number of write cycles that the SSD can endure. Consequently, many storage systems are forced to use higher endurance eMLC or SLC SSDs, which are far more expensive. In addition to the selective writing capability mentioned above, the Nimble architecture also optimizes the written data layout on SSDs so as to minimize write amplification. This allows the use of lower cost commodity MLC SSDs, while still delivering a 5 year lifespan.
  2. Overheads reduce useable capacity relative to raw capacity of SSDs. Hybrid arrays that can leverage data reduction techniques such as compression and de-duplication can significantly increase useable capacity. On the flip side, RAID parity overheads can significantly reduce useable capacity. Nimble’s architecture eliminates the need for RAID overheads on SSD entirely and further increases useable capacity by using inline compression.
  3. Infrequent decision-making about what data to place on SSDs and moving large-sized data chunks wastes SSD capacity. Most hybrid storage arrays determine what data gets placed on SSDs vs. HDDs by analyzing access patterns for (and eventually migrating) large “data chunks”, sometimes called pages or extents. This allows “hot” or more frequently requested data chunks to be promoted into SSDs, while keeping the “cold” or less frequently requested data on HDDs.
  • Infrequent decisions on data placement cause SSD over-provisioning. Many storage systems analyze what data is “hot” on an infrequent basis (every several hours) and move that data into SSDs with no ability to react to workload changes between periods. Consequently, they have to over-provision SSD capacity to optimize performance between periods. Nimble’s architecture optimizes data placement real-time, with every IO operation.
  • Optimizing data placement in large data chunks (many MB or even GB) causes SSD over-provisioning. The amount of meta-data needed to manage placement of data chunks gets larger as the data chunks get smaller. Most storage systems are not designed to manage a large amount of meta-data and they consequently use large-sized data chunks, which wastes SSD capacity. For example, if a storage array were to use data chunks that are 1GB in size, frequent access of a database record that is 8KB in size results in an entire 1GB chunk of data being treated as “hot” and getting moved into SSDs. Nimble’s architecture manages data placement in very small chunks (~4KB), thus avoiding SSD wastage.
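The SSD wastage from coarse chunks is easy to quantify. The sketch below (hypothetical numbers, for illustration only) assumes a working set of 1,000 hot 8 KB records, each living in a distinct chunk, so promoting a record drags its entire chunk into flash:

```python
import math

def flash_consumed_kb(n_records, record_kb, chunk_kb):
    # Worst case: each hot record lives in its own chunk(s), and the whole
    # chunk must be treated as "hot" and promoted to SSD along with it.
    chunks_per_record = math.ceil(record_kb / chunk_kb)
    return n_records * chunks_per_record * chunk_kb

for chunk_kb in (4, 1024 * 1024):   # 4 KB chunks vs 1 GB chunks
    gb = flash_consumed_kb(1000, 8, chunk_kb) / (1024 * 1024)
    print(f"{chunk_kb} KB chunks -> {gb:.4f} GB of SSD consumed")
# 4 KB chunks -> 0.0076 GB of SSD consumed
# 1048576 KB chunks -> 1000.0000 GB of SSD consumed
```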

How cost-effectively does the hybrid storage array use HDDs?

This means assessing the ratio of usable to raw HDD capacity, as well as the cost per GB of capacity. Three main areas drive this:

  1. Type of HDDs. Many hybrid arrays are forced to use high-RPM (10K or 15K) HDDs to handle performance needs for data that is not on SSDs, because of their (higher) random IO performance. Unfortunately high RPM HDD capacity is about 5x costlier ($/GB) vs. low RPM HDDs. As mentioned earlier, Nimble’s write-optimized architecture coalesces thousands of random writes into a small number of sequential writes. Since low-cost, high-density HDDs are good at handling sequential IO, this allows Nimble storage arrays to deliver very high random write performance with low-cost HDDs. In fact a single shelf of low RPM HDDs with the Nimble layout handily outperforms the random write performance of multiple shelves of high RPM drives.
  2. Data Reduction. Most hybrid arrays are unable to compress or de-duplicate data that is resident on HDDs (some may be able to compress or de-duplicate data resident on SSDs). Even among those that do, many recommend that data reduction approaches not be deployed for transactional applications (e.g., databases, mail applications, etc.). The Nimble architecture is able to compress data inline, even for high-performance applications.
  3. RAID and Other System Overheads. Arrays can differ significantly in how much capacity is lost due to RAID protection and other system overheads. For example many architectures force the use of mirroring (RAID-10) for performance intensive workloads. Nimble on the other hand uses a very fast version of dual parity RAID that delivers resiliency in the event of dual disk failure, allows high performance, and yet consumes low capacity overhead. This can be assessed by comparing useable capacity relative to raw capacity, while using the vendor’s RAID best practices for your application.
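The random-to-sequential conversion described in point 1 above can be sketched as a tiny log-structured layout (illustrative Python with invented names, not CASL itself): random logical writes are buffered, flushed to disk as one contiguous segment, and an index remembers where each logical block landed.

```python
class SequentialLayout:
    """Toy log-structured layout: random logical writes become one
    sequential append to disk, tracked by an index (illustration only)."""
    def __init__(self, segment_size=4):
        self.disk = []        # append-only "disk"
        self.index = {}       # logical block number -> disk position
        self.buffer = []
        self.segment_size = segment_size

    def write(self, lbn, data):
        self.buffer.append((lbn, data))
        if len(self.buffer) >= self.segment_size:
            self.flush()

    def flush(self):
        # One sequential write covers the whole segment, no matter how
        # scattered the logical addresses were.
        start = len(self.disk)
        for i, (lbn, data) in enumerate(self.buffer):
            self.disk.append(data)
            self.index[lbn] = start + i
        self.buffer = []

layout = SequentialLayout()
for lbn in (907, 12, 533, 4):   # scattered logical addresses
    layout.write(lbn, f"data@{lbn}")
print(layout.disk)              # all four writes laid out contiguously
print(layout.index[533])        # -> 2
```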

How responsive and flexible is the hybrid array at handling multiple workloads?

One of the main purposes of a hybrid array is to deliver responsive, high performance at a lower cost than traditional arrays. There are a couple of keys to delivering on the performance promise:

  1. Responsiveness to workload changes based on timeliness and granularity of data placement. As discussed earlier, hybrid arrays deliver high performance by ensuring that “hot”, randomly accessed data is served out of SSDs. However, many hybrid arrays manage this migration process only on a periodic basis (on the order of hours), which results in poor responsiveness if workloads change between intervals. And in most cases hybrid arrays can only manage very large data chunks for SSD migration, on the order of many MB or even GB. Unfortunately, when such large chunks are promoted into SSDs, large fractions of them can be “cold” data that is forced to be promoted because of design limitations. Then, because some of the SSD capacity is used up by this cold data, not all of the “hot” data that would have been SSD-worthy makes it into SSDs. Nimble’s architecture optimizes data placement in real time, for every IO, which can be as small as 4 KB in size.
  2. The IO penalty of promoting “hot” data and demoting “cold” data. Hybrid arrays that rely on a migration process often find that the very process of migration can hurt performance exactly when performance is needed most. In a migration-based approach, promoting “hot” data into SSDs requires not just that data be read from HDDs and written to SSDs, but also that, to make room for that hot data, some colder data be read from SSDs and written to HDDs – which we already know are slow at handling writes. The Nimble architecture is much more efficient in that promoting hot data only requires that data be read from HDDs and written into SSDs – the reverse process is not necessary since a copy of all data is already stored on HDDs.
  3. Flexibly scaling the ratio of SSD to HDD on the fly. Hybrid arrays need to be flexible in that as the attributes of SSDs and HDDs change over time (performance, $/GB, sequential bandwidth, etc.), or as the workloads being consolidated on the array evolve over time, you can vary the ratio of SSD to HDD capacity within the array. A measure of this would be whether a hybrid array can change the SSD capacity on the fly without requiring application disruption, so that you can adapt the flash/disk ratio if and when needed, in the most cost effective manner.
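Tallying the IOs behind point 2 above makes the penalty concrete (a hypothetical illustration): a migration-based design pays four IOs per promotion because it must also demote cold data, while a cache-style design that keeps a full copy on HDD pays only two.

```python
def migration_promotion_ios():
    """IOs when promotion also requires demoting cold data to make room."""
    return [
        ("read", "HDD", "hot chunk"),
        ("write", "SSD", "hot chunk"),
        ("read", "SSD", "cold chunk"),   # make room for the hot chunk...
        ("write", "HDD", "cold chunk"),  # ...by pushing cold data to disk
    ]

def cache_promotion_ios():
    """IOs when HDD already holds a copy of everything (cache model)."""
    # Eviction is free: the cold cache entry can simply be overwritten.
    return [
        ("read", "HDD", "hot block"),
        ("write", "SSD", "hot block"),
    ]

print(len(migration_promotion_ios()))  # -> 4, two of them on slow paths
print(len(cache_promotion_ios()))      # -> 2
```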

We truly believe that storage infrastructure is going through the most significant transformation in over a decade, and that efficient hybrid storage arrays will displace modular storage over that time frame. Every storage vendor will deploy a combination of SSDs and HDDs within their arrays, and argue that they have already embraced hybrid storage architectures. The real winners of this transformation will be those who have truly engineered their product architectures to maximize the best that SSDs and HDDs together can bring to bear for Enterprise applications.

 
