Fundamental Storage Design Principles – Overengineering versus Flexibility

When designing a storage system (or any system, for that matter), you make a few big decisions at the outset, and those are often the decisions that later control your destiny. The design principles you start with determine whether you end up with an overengineered system that can go only so far, or a flexible framework that lets you deliver increased value repeatedly.


At Kaminario, we opted for flexibility. We designed a software-defined, shared-storage, all-flash array architecture from the ground up.

It disaggregates the storage controllers from the storage capacity, making it the only modern all-flash array to support both scaling up and scaling out. We also introduced adaptive block size processing on a scale-out cluster to support all types of mixed workloads, along with industry-leading data reduction technologies (which, by the way, are not all created equal).


Let’s break that down a little and see how the above supports the customer values we focus on – scalability, performance and cost efficiency – the core needs of modern cloud-scale and consumer internet applications.

Start with our software-defined approach, a central part of maintaining our flexibility – a simple concept with a lot of benefits. Kaminario does not design hardware, plain and simple. We focus on working with the best technology out there to support the values above. That means having a strong hardware virtualization layer, excelling at qualifying and testing new hardware technologies as they come around, and quickly integrating them into our product. This allows the K2 to run the most cost-efficient and performant CPUs, the latest networking (16/32Gb FC and 25Gb Ethernet), and dense, low-cost SSDs from multiple vendors, and to leverage hardware offloading for enhanced compression capabilities.
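The idea of a hardware virtualization layer can be sketched in a few lines. This is an illustrative toy, not Kaminario's actual interface: the storage software codes against an abstract device contract, so qualifying a new SSD or NIC means adding an adapter, not changing the core stack. All class and method names here are hypothetical.

```python
from abc import ABC, abstractmethod

class BlockDevice(ABC):
    """Abstract device contract the storage stack codes against."""
    @abstractmethod
    def read(self, offset: int, length: int) -> bytes: ...
    @abstractmethod
    def write(self, offset: int, data: bytes) -> None: ...

class RamDevice(BlockDevice):
    """Toy in-memory device standing in for any qualified SSD model."""
    def __init__(self, size: int):
        self.buf = bytearray(size)

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self.buf[offset:offset + length])

    def write(self, offset: int, data: bytes) -> None:
        self.buf[offset:offset + len(data)] = data

# The stack only sees BlockDevice, so new hardware slots in freely.
dev: BlockDevice = RamDevice(1024)
dev.write(0, b"hello")
print(dev.read(0, 5))  # b'hello'
```

Swapping `RamDevice` for a driver wrapping a new SSD generation leaves everything above the interface untouched, which is the point of the approach.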

Next is our disaggregated scalability model, in which storage capacity and storage compute can be added to and removed from the system independently, so we can always match our customers' capacity and performance requirements. Combined with Kaminario Foresight and the guarantee to mix and match future technologies into our current storage clusters, we can grow and adapt with our customers as their needs change.
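A minimal model shows why disaggregation matters: controllers and capacity shelves are tracked, and grown, as independent pools. This is an illustrative sketch with made-up numbers, not Kaminario's management API.

```python
class Cluster:
    """Toy disaggregated cluster: compute and capacity scale separately."""
    def __init__(self):
        self.controllers = []  # each entry: IOPS a controller contributes
        self.shelves = []      # each entry: TB a capacity shelf contributes

    def add_controller(self, iops: int) -> None:
        self.controllers.append(iops)  # scale out: performance only

    def add_shelf(self, tb: int) -> None:
        self.shelves.append(tb)        # scale up: capacity only

    @property
    def iops(self) -> int:
        return sum(self.controllers)

    @property
    def capacity_tb(self) -> int:
        return sum(self.shelves)

c = Cluster()
c.add_controller(300_000)  # hypothetical per-controller IOPS
c.add_shelf(92)            # hypothetical shelf size in TB
c.add_shelf(92)            # capacity doubled; performance untouched
print(c.iops, c.capacity_tb)  # 300000 184
```

In a converged design the two `add_*` calls would be a single operation, forcing you to buy performance when you only needed capacity, or vice versa.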

When it comes to performance, it must be addressed in every layer of the storage system. The challenge is taking hundreds of SSDs and letting customers enjoy performance benefits for any workload while avoiding the pitfalls of SSD wear. With Kaminario's adaptive block size algorithms, the K2 handles each IO at the size at which it arrives at the system. This ensures, for example, that analytics and transactional processing can run at the same time.
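The benefit of handling each IO at its arrival size can be shown with simple arithmetic. The 32 KB fixed block size below is an assumption for illustration, not a figure from any vendor: a fixed-block design rounds every IO up to whole blocks, while an adaptive design moves only the bytes that arrived.

```python
def fixed_block_cost(io_size: int, block: int = 32 * 1024) -> int:
    """Bytes moved when every IO is rounded up to whole fixed blocks."""
    blocks = -(-io_size // block)  # ceiling division
    return blocks * block

def adaptive_cost(io_size: int) -> int:
    """Bytes moved when the IO is handled at its native size."""
    return io_size

oltp = 4 * 1024          # small transactional write
analytics = 1024 * 1024  # large analytics read

print(fixed_block_cost(oltp), adaptive_cost(oltp))            # 32768 4096
print(fixed_block_cost(analytics), adaptive_cost(analytics))  # 1048576 1048576
```

Under these assumed sizes, the small OLTP write suffers 8x amplification in the fixed-block case while the large analytics IO is unaffected, which is why mixing the two workloads punishes fixed-block designs and not adaptive ones.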

Lastly, we have to keep the K2 cost-efficient, which relies on efficient use of the media (87.5% utilization), low metadata overhead (thanks to our adaptive block size), best-in-class data reduction (byte-aligned, hardware-accelerated compression and global selective deduplication) and the most cost-efficient SSDs (low-endurance 3D TLC).
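As a back-of-envelope illustration of how these factors compound, effective capacity is raw capacity times media utilization times the data reduction ratio. The 87.5% figure is from the text above; the raw capacity and 3:1 reduction ratio are assumptions for the example, not published numbers.

```python
raw_tb = 100.0          # hypothetical raw flash in the cluster
utilization = 0.875     # 87.5% media utilization, as cited above
reduction_ratio = 3.0   # assumed combined compression + dedupe ratio

effective_tb = raw_tb * utilization * reduction_ratio
print(round(effective_tb, 1))  # 262.5
```

The same raw flash at 3:1 reduction yields over 2.6x its raw capacity in usable terms, which is why reduction quality matters as much as SSD price per GB.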


This allows us to be the best storage solution for cloud-scale SaaS applications that require these levels of flexibility, performance and cost efficiency at scale. We do so with a mature, high-performing, scalable, resilient and data-service-rich storage software stack – our Vision OS.

Now the real test is what happens next. We have a new set of challenges in the industry – the continued massive shift to everything “as a Service” – and a new set of technologies – NVMe and NVMe over Fabrics. Sadly, vendors who didn’t opt for flexibility are on the path to overengineering: across the board, existing all-flash vendors are ripping the SATA SSDs out of their arrays, plugging in NVMe SSDs and hoping for the best. Hoping that their architecture will get some performance boost, and especially hoping that their customers won’t notice it’s a costly proposition for the amount of performance gained.

I know there is a better way: the Kaminario Vision OS way. It lies in having a fully software-defined architecture that was designed from day one to fully disaggregate storage controllers from storage capacity. That is where our insistence on flexibility will truly shine.

An architecture such as ours gives us a natural transition to adopting low-overhead, low-latency interconnects. But why stop there? The myriad advantages that a low-latency interconnect provides will allow Kaminario to create a world of possibilities in the storage arena that didn’t exist before: the capability to dynamically manage these disaggregated resources as a highly performing, extremely flexible, cost-efficient and scalable storage system.

A system geared to support all the requirements of the cloud scale SaaS world.

It actually allows so much more.

Stay tuned.
