Why Scale-Out vs. Scale-Up Architecture is Hardly a Dilemma with Kaminario

No matter where you look in IT news outlets or storage forums, you can’t avoid heated conversations around choosing scale-up vs. scale-out architectures for primary storage. And for good reason.

The architectural approach used by all-flash array vendors almost always makes a significant difference in the performance, scalability and total cost of the selected solution. Cost and performance are usually the main criteria for selecting a next-generation storage solution, so it’s important for companies to choose an architecture that’s flexible enough to meet their performance requirements while also balancing that need with a cost structure that suits both the initial purchase and future growth.

Here’s a primer on the technical differences between scale-up and scale-out – and key considerations to keep in mind when talking with all-flash vendors about which solution is best for your business.

The Architecture ABCs

First things first. Let’s define the scalability terms so we’re on the same page:

  • “Scale-up” refers to architecture that uses a fixed controller resource for all processing. Capacity is scaled by adding storage shelves, up to the maximum number permitted for that controller. To maintain high availability, such architectures typically use dual controllers. However, most of them operate in an “active-passive” mode, which limits the array’s performance to that of a single controller. This wastes 50 percent of the controller resources under normal operation, and the industry has largely moved away from it.

Scaling Up a Single K-Block With an Expansion Shelf

  • “Scale-out” refers to architecture that doesn’t rely on a single controller and scales by adding processing power coupled with additional storage. It’s also important to remember that not all scale-out architectures are created equal. (Some vendors layer unified management over multiple, independent arrays and call it scale-out, which is, technically speaking, not scale-out at all.) In true scale-out architecture, the data and the metadata are distributed across all nodes and all SSDs in the system, and modern scale-out architectures also implement global deduplication across the entire data set. (A simple sketch of this distribution follows the figure below.)

Scaling Out from a Single K-Block to a Dual K-Block Array
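To make the “true scale-out” distinction concrete, here is a minimal Python sketch – not Kaminario code, and the hash-based placement, node count and block size are purely illustrative assumptions – of how such a design spreads both data blocks and their mapping metadata across every node, with deduplication applied globally by content fingerprint:

```python
import hashlib

class ScaleOutCluster:
    """Toy model of a true scale-out layout: data blocks and their mapping
    metadata are spread across all nodes by hashing (illustrative only)."""

    def __init__(self, num_nodes):
        self.nodes = [{"data": {}, "meta": {}} for _ in range(num_nodes)]

    def _owner(self, key):
        # Hash-based placement: every node owns a share of the data and
        # metadata, so no single controller becomes a bottleneck.
        digest = hashlib.sha256(key.encode()).hexdigest()
        return int(digest, 16) % len(self.nodes)

    def write(self, volume, offset, block):
        fingerprint = hashlib.sha256(block).hexdigest()
        # Data placement by content fingerprint gives global deduplication:
        # identical content is stored once, no matter which volume wrote it.
        data_node = self.nodes[self._owner(fingerprint)]
        data_node["data"].setdefault(fingerprint, block)
        # The address-to-fingerprint mapping is distributed by its own key.
        meta_node = self.nodes[self._owner(f"{volume}:{offset}")]
        meta_node["meta"][(volume, offset)] = fingerprint

cluster = ScaleOutCluster(num_nodes=4)
cluster.write("vol1", 0, b"A" * 4096)
cluster.write("vol2", 0, b"A" * 4096)   # dedups against vol1's identical block
```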

The Pros and Cons

Since only storage shelves are added when expanding capacity, scale-up architectures offer a very cost-effective capacity upgrade. However, this approach can suffer from performance problems when capacity is added and the controller resources are insufficient for the additional data. When the controller reaches its sustainable capacity limit, the customer must purchase an additional system. Scaling beyond certain limits therefore brings the manageability overhead of multiple systems, and it also makes deduplication less effective, because deduplication is not performed globally across multiple independent systems.

Scale-out architectures add more CPU, memory and connectivity when expanding system capacity. This guarantees that performance doesn’t decline when scaling the system. Scale-out systems can typically grow to a large number of nodes under a single management entity – and often without performance tuning. Such architectures also enable very efficient data reduction, because deduplication is performed globally across the entire data set. The disadvantage of this approach appears when capacity growth could be met without exceeding the performance of the existing controllers: there is no real need to purchase additional controllers, yet a pure scale-out architecture offers no way to add capacity without adding them, which means unnecessary cost for the customer.
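The cost trade-off can be illustrated with a back-of-the-envelope sketch. The prices and capacities below are made-up placeholders, not vendor list prices; the point is simply that when only capacity is needed, a capacity-only expansion is cheaper than buying full controller nodes:

```python
# Illustrative cost model: growing capacity while performance headroom remains.
# All prices and capacities are made-up placeholders, not vendor list prices.
SHELF_TB, SHELF_COST = 50, 40_000    # capacity-only expansion shelf (scale-up)
NODE_TB, NODE_COST = 50, 100_000     # controller + capacity node (scale-out)

def shelves_needed(extra_tb):
    return -(-extra_tb // SHELF_TB)  # ceiling division

def nodes_needed(extra_tb):
    return -(-extra_tb // NODE_TB)

for extra_tb in (100, 300):
    scale_up = shelves_needed(extra_tb) * SHELF_COST
    scale_out = nodes_needed(extra_tb) * NODE_COST
    print(f"+{extra_tb} TB: scale-up ${scale_up:,} vs. scale-out ${scale_out:,}")
```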

The Price of Limitations

Because an all-flash array can provide significantly better performance than a legacy storage array, some systems will run out of capacity before performance becomes the limiting factor. However, when planning for scale, it’s important to retain flexibility and be able to scale performance as more and more applications are serviced by the all-flash array. When the architecture locks the customer into only one approach, with no ability to choose between them, it is nearly impossible to scale and balance both capacity and performance in a cost-effective manner. In contrast, an architecture that offers the flexibility to either scale up or scale out provides the optimal scaling path, allowing the customer to add capacity or performance as needed.

Some vendors that are unable to scale out rely on Moore’s Law, using frequent controller upgrades to improve performance. In practice, Moore’s Law is slowing down, since compute power simply cannot maintain its rapid exponential increase using standard silicon technology. (Intel has admitted this.) The result is inconvenience and extra cost, forcing customers to refresh controllers too frequently without gaining sufficient performance improvement.

So What?

Storage architecture should allow for the benefits of both approaches, delivering the flexibility to start with a system that meets current needs while providing the ability to scale up or scale out later as necessary. Kaminario’s K2 v5 architecture is uniquely designed to be flexible and to allow both scale-out and scale-up expansions. Having both capabilities ensures that the maximum amount of data manageable by each controller can scale to very high capacities. It also ensures that the architecture maintains a low metadata footprint, with no assumption that all metadata always resides in memory.

K2’s implementation of a variable block size algorithm also results in a lower metadata footprint compared to naïve fixed 4k architectures, which are forced to maintain a pointer for every 4k of data. The variable block size approach enables scaling up the capacity attached to every node and also supports extremely high data reduction ratios. All of this is done without compromising the scale-out properties of the system, which is designed to achieve linear performance growth as the cluster scales, under any configuration and any workload. This is accomplished using a true scale-out design that spreads the data and the metadata across all nodes and all SSDs in the system and eliminates any potential data- or metadata-related bottlenecks.
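As a rough illustration of the metadata argument – the block sizes and per-entry overhead below are assumptions for the sketch, not K2 internals – compare how many mapping entries a fixed 4k layout needs against a layout whose average block is larger:

```python
# Back-of-the-envelope metadata footprint for mapping 100 TiB of logical space.
# The 32 KiB average block and 24-byte entry are assumptions, not K2 internals.
VOLUME_BYTES = 100 * 2**40          # 100 TiB of logical capacity
FIXED_BLOCK = 4 * 2**10             # naive fixed 4k mapping
AVG_VARIABLE_BLOCK = 32 * 2**10     # assumed average variable block size
ENTRY_BYTES = 24                    # assumed size of one mapping entry (pointer)

def metadata_gib(block_size):
    """One mapping entry per block of logical address space."""
    entries = VOLUME_BYTES // block_size
    return entries * ENTRY_BYTES / 2**30

print(f"fixed 4k : {metadata_gib(FIXED_BLOCK):.0f} GiB of mapping metadata")
print(f"variable : {metadata_gib(AVG_VARIABLE_BLOCK):.0f} GiB of mapping metadata")
```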

The chart below illustrates the flexibility of scaling the Kaminario system, where scale-out and scale-up expansions are supported using a mixture of higher- and lower-density SSDs:

The variable block size algorithm implemented in K2 ensures that the system uses its resources efficiently and delivers higher performance under real-world I/O sizes, which are normally much larger than 4k. This means Kaminario can deliver better performance from each node than systems that implement a fixed 4k block size algorithm. In fact, Kaminario’s architecture is the only one that supports both scale-out and variable block size. With Kaminario, the customer only scales out when the performance of all existing controllers is fully utilized – not earlier, as a side effect of an inefficient fixed 4k implementation that over-extends the controller and networking resources.
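A similar sketch shows the per-I/O side of the argument. With a fixed 4k mapping, a single 32k host write (the exact I/O size here is an assumption chosen to represent real-world workloads) has to touch eight mapping entries, whereas a variable block layout can, in the best case, cover it with one:

```python
# Rough per-I/O comparison: how many mapping entries one host write touches.
# The 32 KiB host I/O size is an assumption chosen to represent real-world I/O.
def entries_touched(io_bytes, block_bytes):
    return -(-io_bytes // block_bytes)   # ceiling division

HOST_IO = 32 * 2**10
print("fixed 4k :", entries_touched(HOST_IO, 4 * 2**10), "entries per I/O")
print("variable :", entries_touched(HOST_IO, HOST_IO), "entry per I/O (block sized to the write)")
```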

There are places in the market for both scale-up and scale-out storage systems. Many legacy storage systems primarily use scale-up architectures, and the market is increasingly shifting to scale-out. In many scenarios, however, scale-out imposes unnecessary costs on the customer, and that extra cost limits the adoption of scale-out storage systems.

Kaminario’s K2 v5 is the first storage solution to close this gap by enabling scale-up or scale-out within the same system, allowing customers to scale in the most flexible way to meet both performance and budget requirements. Talk about win-win!

To learn more about our K2 v5, visit our website or check out our architecture whitepaper here.
