As discussed in my last blog post, cloud storage is commonly heralded as the future of data storage. This is due, in large part, to the ability to procure and pay for IT infrastructure resources on demand. The problem is that cloud storage doesn’t provide the levels of control or performance required by mission-critical applications. In honor of the U.S. Independence Day holiday, I’ll explore on-premises alternatives for when you declare independence from public cloud storage.
Traditional Scale-Up Storage
Scale-up storage architectures are the longest-standing. These arrays are configured with a fixed number of controllers (typically two), and additional resources, such as disk drives, may be added as capacity requirements grow. Scale-up architectures are typically ideal for serving point workloads with fairly predictable requirements, and they generally deliver the fastest per-volume performance on fixed workloads. However, workloads are becoming more dynamic and unpredictable, while controller compute resources are not upgradeable and become overwhelmed with I/O. The result is that performance degrades as more capacity is added.
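The scale-up trade-off can be sketched with a toy model (the numbers and names here are hypothetical, not from the article): the controller pair imposes a fixed I/O ceiling, so as added drives host more volumes, each volume's share of that ceiling shrinks.

```python
# Toy model of a scale-up array: a fixed two-controller I/O ceiling
# shared by however many volumes the added drives can host.
# All figures are illustrative assumptions.

CONTROLLER_IOPS = 200_000  # fixed ceiling of the controller pair


def iops_per_volume(num_volumes: int) -> float:
    """Each volume's share of the fixed controller ceiling."""
    return CONTROLLER_IOPS / num_volumes


# Adding drives lets the array host more volumes, but the ceiling never moves:
for volumes in (4, 16, 64):
    print(f"{volumes:>3} volumes -> {iops_per_volume(volumes):,.0f} IOPS each")
```

The point of the sketch is simply that capacity scales while controller compute does not, which is the degradation described above.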
Scale-up storage systems are limited by the maximum number of drives and the performance that their controllers can handle. If a system’s limits are exceeded, the controllers must be upgraded or a new system must be added for additional compute and storage. This creates silos of data, and multiple systems must be purchased, racked, stacked and managed. Additionally, the balance of compute and storage that was once ideal for a specific workload can quickly fall out of alignment, leaving resources over- or under-provisioned.
Scale-Out Storage
Scale-out architectures emerged to address some of these pain points. In a scale-out approach, resources (including storage capacity) across multiple systems are aggregated into a centralized, shared pool. The pool is no longer bound by the constraints of a single system; a new system may simply be deployed to expand capacity or performance. This creates the opportunity for more flexibility, better system utilization, simplified management, and a stronger ability to run multiple workloads in parallel.
In a scale-out approach, servers are clustered together so that compute, storage capacity and networking may be pooled; the individual systems in the cluster are typically called nodes. The problem is that a minimum configuration of three nodes is typically required, which may be more than a particular project needs for a long time (if ever). Additionally, performance is impacted by the inter-node network communication required to keep systems in sync. Scale-out storage systems also require parallelism to deliver the performance they promise, either from many workloads using the system concurrently or from workloads that are innately parallel.
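The pooling idea, and the three-node minimum, can be illustrated with a minimal sketch. The class names, node sizes and minimum-node check below are invented for illustration, not drawn from any particular product.

```python
# Hypothetical sketch of scale-out pooling: node resources are aggregated
# into one shared pool, and expansion means adding a node rather than
# upgrading controllers. Numbers are illustrative assumptions.

from dataclasses import dataclass

MIN_NODES = 3  # typical minimum configuration for a scale-out cluster


@dataclass
class Node:
    capacity_tb: float
    iops: int


def pool(nodes: list[Node]) -> dict:
    """Aggregate the cluster's capacity and performance into one pool."""
    if len(nodes) < MIN_NODES:
        raise ValueError(f"cluster needs at least {MIN_NODES} nodes")
    return {
        "capacity_tb": sum(n.capacity_tb for n in nodes),
        "iops": sum(n.iops for n in nodes),
    }


cluster = [Node(100, 50_000) for _ in range(3)]
print(pool(cluster))             # aggregated capacity and performance
cluster.append(Node(100, 50_000))  # expansion = add a node
print(pool(cluster))
```

Note what the sketch omits: the synchronization traffic between nodes, which is exactly the overhead the paragraph above flags.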
Composable Storage
The concept of composable storage is developing, boosting infrastructure agility and utilization through even more granular and fluid resource allocation. Composable storage virtualizes storage resources into a shared pool, from which resources may be created on demand and automatically, according to application or workload requirements. Composable storage stands to bring the enterprise closer to the resource agility that is inherent in public cloud storage.
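The on-demand allocation described above can be sketched as follows. The `ComposablePool` class and its `compose`/`release` methods are hypothetical names invented for this illustration; real composable platforms expose their own APIs.

```python
# Hypothetical sketch of the composable idea: carve volumes out of a shared,
# virtualized pool on demand, sized to each workload's requirement, and
# return the resources to the pool when the workload is done.

class ComposablePool:
    def __init__(self, capacity_tb: float):
        self.free_tb = capacity_tb
        self.volumes: dict[str, float] = {}

    def compose(self, workload: str, size_tb: float) -> None:
        """Allocate a volume sized to the workload's requirement."""
        if size_tb > self.free_tb:
            raise RuntimeError("pool exhausted")
        self.free_tb -= size_tb
        self.volumes[workload] = size_tb

    def release(self, workload: str) -> None:
        """Return the workload's resources to the shared pool."""
        self.free_tb += self.volumes.pop(workload)


shared = ComposablePool(500)
shared.compose("analytics", 120)
shared.compose("oltp", 40)
shared.release("analytics")  # capacity flows back for the next workload
print(shared.free_tb)        # 460.0
```

The design point is that allocation follows the workload rather than the hardware, which is the cloud-like agility the paragraph describes.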
The problem with scale-up, scale-out and composable storage, when it comes to facilitating a cloud-storage-like model on premises, is that infrastructure still needs to be procured upfront. Composable storage brings the enterprise closer to a highly utilized and agile infrastructure, but storage professionals should look further, for the opportunity to pay on a consumption basis as well.
Discover how composable storage can give you the same flexibility as cloud storage with the dedication of on-premises infrastructure. Download the Storage Switzerland ebook, “Developing a Cloud-Like Experience for Mission-Critical Data”.