Why Robust Mixed Workload Support Matters

Much like a large highway carrying vehicles of all kinds and sizes, your storage array has to manage traffic from applications that issue I/O in many different block sizes. Don't let your storage become the point of congestion because of rigid block size limitations. Robust mixed workload processing matters! Read on to learn why.

At Kaminario we are heartened that enterprises are deploying our technology, now on its 6th generation of hardware and Operating Environment. We've always known that solid-state storage is a perfect way to massively boost application responsiveness and remove the storage-layer Input/Output (I/O) bottlenecks that have been the bane of applications for decades. But we also know that to truly gain all the benefits of flash, enterprises need to consolidate workloads and move away from the myriad point solutions that overcomplicate today's data centers. Merely adding more flash isn't the answer; the solution requires more intelligence than simply adding more or faster flash. It requires an architecture that can automatically support *all* kinds of workload profiles without the need to tune, divide, partition, break out, or otherwise quarantine workloads that in the past could create 'noisy neighbors' and tip storage arrays over, even all-flash storage that isn't built to inherently and automatically support variable workloads.


Blocking and Tackling “Variable Block Sizes”

Briefly, in storage there are a few different performance dimensions to consider: random vs. sequential access and read vs. write. Add to this the fact that every application has its own I/O request profile and will issue read and write requests in different block sizes, which can vary anywhere from 512 bytes up to 128 KB or even larger depending on the application. These variables of random/sequential, read/write, and block size have a dramatic effect on all-flash storage platforms and can cause array performance to swing wildly from great one minute to nearly unusable the next as applications time out waiting for I/O. This is the present-day problem with consolidating Tier 1 and Tier 2 workloads onto a single all-flash platform.
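To make the block size effect concrete, here is a rough back-of-the-envelope sketch in Python. It is illustrative only, not Kaminario code, and the assumed I/O rate is an arbitrary constant; it simply shows how the same I/O rate turns into very different bandwidth as the block size changes.

```python
# Illustrative sketch only: the assumed I/O rate below is an arbitrary
# example, not a measured K2 figure. Throughput = I/O rate x block size,
# so the same IOPS number means very different bandwidth at 512 B than
# at 128 KB.

def throughput_gb_per_s(iops: float, block_size_bytes: int) -> float:
    """Bandwidth in GB/s for a given I/O rate and block size."""
    return iops * block_size_bytes / 1e9

ASSUMED_IOPS = 200_000  # arbitrary constant I/O rate, used only for comparison

for label, size in [("512 B", 512), ("4 KB", 4_096), ("8 KB", 8_192),
                    ("64 KB", 65_536), ("128 KB", 131_072)]:
    print(f"{label:>6} blocks at {ASSUMED_IOPS:,} IOPS -> "
          f"{throughput_gb_per_s(ASSUMED_IOPS, size):5.2f} GB/s")
```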

For example, let's look at three of the most common application types today: online retail databases, online analytical processing (OLAP), and virtualized servers. A rough sketch of their I/O profiles follows the list.


  • Online retail sales databases drive business and sales engines for customers in the form of e-commerce, inventory management, and sales forecast reporting. As more people move online for daily tasks and purchases, the demand for more responsive application performance at greater scale becomes critical. These workloads are typically highly random, make heavy use of indexes, and issue small-block transactions with a good mix of reads and writes.
  • OLAP workloads, which encompass data warehousing and business intelligence, turn enormous amounts of raw data points into meaningful business intelligence reporting, in as close to real time as possible, to help businesses drive process improvements and create more efficient business models. These jobs usually include large sequential scans, are less random, and use large-block transactions that can be 100% write (when loading data) or 100% read (when generating reports).
  • Virtualized servers run on a hypervisor that controls multiple 'virtual' server environments, or VMs. Today's IT shops can run dozens or even hundreds of VMs on a single physical server, making it possible to support the entire enterprise, with its wide variety of workload requirements. Manually tuning storage platforms to deliver consistently great performance across this nearly infinite variety of I/O profiles is an impossibly difficult task today, when every IT department needs to do more with less and simplifying operations is crucial.
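To pull the list above together, here is a rough characterization of the three I/O profiles as plain Python data. The block sizes and read fractions are illustrative assumptions drawn from the descriptions above, not measurements.

```python
# Rough summary of the three workload profiles described above.
# All block sizes and read fractions are illustrative assumptions.

WORKLOAD_PROFILES = {
    "online_retail_oltp": {
        "access_pattern": "random",
        "block_sizes_bytes": [4_096, 8_192],             # small-block transactions
        "read_fraction": 0.6,                            # assumed read/write mix
    },
    "olap_reporting": {
        "access_pattern": "sequential",
        "block_sizes_bytes": [65_536, 131_072],          # large scans and bulk loads
        "read_fraction": 1.0,                            # drops to 0.0 during data loads
    },
    "virtualized_servers": {
        "access_pattern": "mixed",
        "block_sizes_bytes": [4_096, 8_192, 65_536, 131_072],
        "read_fraction": 0.7,                            # varies widely per VM
    },
}

for name, profile in WORKLOAD_PROFILES.items():
    sizes = ", ".join(f"{s // 1024} KB" for s in profile["block_sizes_bytes"])
    print(f"{name}: {profile['access_pattern']} access, "
          f"{profile['read_fraction']:.0%} reads, block sizes {sizes}")
```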

Kaminario clearly understands the challenges each of these applications brings to enterprise primary storage, and has built an all-flash platform that can handle all of these radically mixed workloads in a simple, automated fashion. The K2 all-flash array, with its automatic, adaptive variable block architecture, can handle any kind of workload natively, whether it is small, random 4K online transaction processing, large 128K analytical processing, or any mix in between, along with virtual server support for hundreds of different applications at the same time. No manual tuning of any kind is needed; the K2 does it all automatically. No complex setup, configuration, or maintenance is required.

On top of this simplicity and the 'fire and forget' nature of the array, the K2 consistently ranks at the top for performance among analysts, most recently being ranked #1 for performance by Gartner in its 2016 evaluation of all-flash arrays.


See It to Believe It

The chart below shows a real mixed-workload example of how the K2 does this. Here you can see the K2 supporting a mix of 4K, 8K, 64K, and 128K block sizes in a 75/25 read/write mix. The K2 is easily pushing almost 5 GB/s of mixed throughput at around 100,000 IOPS, with a response time of 1 millisecond.
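As a quick sanity check on those numbers (simple arithmetic, not vendor data), the throughput and IOPS figures imply an average I/O size of roughly 50 KB, which is consistent with the stated blend of 4K through 128K blocks.

```python
# Back-of-the-envelope check of the figures quoted above.
throughput_bytes_per_s = 5e9   # ~5 GB/s of mixed throughput
iops = 100_000                 # ~100,000 I/Os per second

avg_io_size_kb = throughput_bytes_per_s / iops / 1024
print(f"Average I/O size: ~{avg_io_size_kb:.0f} KB")  # ~49 KB

# An average near 50 KB is what you would expect when small 4 KB / 8 KB
# transactional I/O is blended with large 64 KB / 128 KB analytical I/O.
```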


K2 GUI showing performance characteristics of a real mixed workload example

Put us to the challenge and let us run a 10-minute demo showing you performance in a mixed workload environment.




