Technology Trends and What’s Next: Part II

 

A couple of weeks ago, I published the first of a three-part series of blogs on technology trends and what the future holds. In this second part of the series, I share my thoughts on software-defined data centers (SDDC) and, more specifically, on software-defined storage (SDS).

 

Part II: Experimentation and Discovery

 

For large enterprises to successfully achieve their advanced multi-cloud strategies, and for service providers to offer a public cloud alternative, a new approach to storage was essential: something different from the old model of pre-integrated, monolithic storage arrays. The market needed agility, ease of use, and cost efficiency, which in turn pointed to a software-first approach.

As public cloud adoption grew, organizations started to recognize the agility and ease of use of virtualized data center resources. Soon IT vendors began promoting the concept of the SDDC in an effort to overcome some of the hurdles of migrating to public cloud. While adoption of the SDDC has grown, it is far from the norm at this point, and one of the main reasons is the need for the right hardware to run SDDC resources on. This is where, at first, the idea of software-defined storage (SDS) generated great enthusiasm.

The enthusiasm faded, however, with the realization of how difficult it would be to create software-defined storage that could fulfill the needs of tier-one applications. All of the required services and robustness of an enterprise-level product had to be built into an SDS solution: all of the performance, features, functions, and flexibility expected of an enterprise environment. Most importantly, SDS needed to be more cost-efficient and reliable in order to warrant a move away from public cloud or legacy approaches.

Today, some less robust software-defined storage solutions are finding a home in the SMB space or in tier-two backup solutions, where the demands on performance and cost optimization are lower. But they're not running in top-tier, mission-critical environments, and they don't have the cloud-like capabilities to provision easily and to ramp up or down as business requirements change.

 

Virtualized Storage and Composability

 

The biggest challenge in delivering true enterprise-level capabilities in the form of virtualized storage is achieving hardware and software disaggregation. How do you grow an environment in a cloud-like fashion? How do you build in the flexibility to scale up and out, add capacity and performance as business needs grow or change, and allow for elasticity and composability? Unlike the old model, you don't want to pre-provision for future needs. You right-size the environment for what you need now and then grow or shrink it over time as needed.

That is how cost-efficient, scalable, high-performance solutions can meet the requirements of enterprise cloud-like environments. It is what addresses the needs of Software as a Service (SaaS) companies that want the flexibility of the public cloud within their own managed practices, and it is what allows them to compete at the economic level, because they can manage infrastructure in a cost-effective manner.

 

Disaggregation and Composability

 

How do you take separate storage and compute resources and dynamically group them together to address a business need and create a cost-efficient, scalable solution? First, you need to be able to disaggregate. If the resources are rigid, you can't achieve composability. A monolithic storage array, by contrast, has no composability. It's just there. You can maybe add some disks to it, but there is no dynamic capability, no flexible restructuring.
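As a rough illustration only (the names below are hypothetical and don't reflect any particular vendor's API), disaggregation and composition can be pictured as free pools of resources that get grouped logically on demand and returned to the pools when no longer needed:

```python
# Disaggregated resources sit in free pools and are grouped logically on demand.
free_compute = ["c-node-1", "c-node-2", "c-node-3"]    # performance resources
free_capacity = ["k-node-1", "k-node-2", "k-node-3"]   # capacity resources

def compose(name, n_compute, n_capacity):
    """Group disaggregated resources into one logical storage system."""
    return {
        "name": name,
        "compute": [free_compute.pop() for _ in range(n_compute)],
        "capacity": [free_capacity.pop() for _ in range(n_capacity)],
    }

def decompose(system):
    """Return the resources to the free pools so they can be regrouped elsewhere."""
    free_compute.extend(system["compute"])
    free_capacity.extend(system["capacity"])

# Right-size now, regroup later as business needs change.
crm_storage = compose("crm", n_compute=1, n_capacity=2)
print(crm_storage)
```

A monolithic array has no equivalent of the free pools above; its compute and capacity are fixed at purchase time.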

There are two types of disaggregation. The first is available today from Kaminario: you buy the storage software license from Kaminario, then go to a distributor, such as Tech Data, to buy our prescribed commodity hardware stack.

The other type of disaggregation is within the infrastructure itself. It's tied to an architecture that disaggregates compute resources from capacity resources, providing the ability to add or remove resources and scale them independently to right-size the environment for capacity, performance, and cost.
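As a back-of-the-envelope sketch of what independent scaling buys you (the per-node figures are purely illustrative, not real product specifications), right-sizing each dimension on its own might look like this:

```python
import math

# Illustrative per-node figures only; real values depend on the hardware generation.
IOPS_PER_COMPUTE_NODE = 150_000
TB_PER_CAPACITY_NODE = 92

def right_size(required_iops, required_tb):
    """Size compute and capacity independently instead of overbuying both."""
    compute_nodes = math.ceil(required_iops / IOPS_PER_COMPUTE_NODE)
    capacity_nodes = math.ceil(required_tb / TB_PER_CAPACITY_NODE)
    return compute_nodes, capacity_nodes

# A capacity-heavy workload needs many capacity nodes but few compute nodes...
print(right_size(required_iops=100_000, required_tb=800))   # -> (1, 9)
# ...while a performance-heavy workload is the opposite.
print(right_size(required_iops=900_000, required_tb=100))   # -> (6, 2)
```

In a pre-integrated array, both workloads above would force you to buy the larger of the two configurations across the board.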

“A composable infrastructure is a framework whose physical compute, storage and network fabric resources are treated as services. In a composable infrastructure, resources are logically pooled so that administrators don’t have to physically configure hardware to support a specific software application.”

— TechTarget

One form of disaggregation is tied to the software architecture and doesn't rely on any non-standard, non-commodity hardware components. The other is very much related to the product architecture: it can add and remove capacity and compute resources independently. This leads to a new level of composability that orchestrates resources on a fully converged NVMe and NVMe over Fabrics (NVMe-oF) architecture. By leveraging this technology, the compute and storage hardware components can be completely separated from one another, providing as much performance or capacity as you need, independently.
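To make that separation concrete, here is a hypothetical orchestration sketch (not an actual NVMe-oF management API): because any compute node can reach any capacity node's namespaces over the fabric, either side can grow without touching the other.

```python
# Hypothetical sketch: the fabric map records which remote NVMe namespaces
# each compute node has attached. Compute and capacity scale independently.
fabric_map = {}   # compute node -> list of remote namespaces it can access

def attach(compute_node, capacity_node, namespace_id):
    """Record that a compute node now accesses a remote namespace over the fabric."""
    target = f"{capacity_node}/ns{namespace_id}"
    fabric_map.setdefault(compute_node, []).append(target)

# Adding capacity: expose the new node's namespace to the existing compute nodes.
attach("c-node-1", "k-node-4", namespace_id=1)
attach("c-node-2", "k-node-4", namespace_id=1)

# Adding performance: a new compute node simply attaches to existing namespaces.
attach("c-node-3", "k-node-1", namespace_id=1)
print(fabric_map)
```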

 

In Part III: Storage Composability Now and Next, a look at the future of storage.

 

 
