Imagine if there were a better way to manage and monitor your data storage infrastructure by leveraging big data analytics and machine learning. Kaminario customers are already taking advantage of this unique capability with Kaminario Clarity. Clarity is a SaaS-based predictive analytics platform that delivers intelligence, automation, and analytics for customers’ cloud-scale infrastructure. In the video below, we’ll demonstrate how Clarity enables customers to understand capacity trends and predict usage.
A Smarter Way to Plan Capacity
Leveraging machine learning, Kaminario Clarity enables customers to understand capacity trends and predict usage based on historical data. While users can look at all data points from inception to make future predictions, Clarity also offers the ability to leverage particular slices of time to predict future capacity usage. This capability is particularly helpful if a storage array was out of use for a while – say, the first three months after purchase – and the team wants more accurate planning based on the data from the three-month mark onward.
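The idea of forecasting from a slice of history can be sketched with a simple linear trend fit. This is an illustrative model only, not Clarity's actual algorithm; the function name and sample format are hypothetical:

```python
from datetime import date, timedelta

def forecast_usage(samples, window_days, horizon_days):
    """Fit a linear trend to the last `window_days` of (date, used_tb)
    samples and extrapolate usage `horizon_days` past the last sample.

    Illustrative sketch only -- Clarity's real models are more
    sophisticated than a straight-line fit.
    """
    # Keep only the chosen slice of history (e.g. from the
    # three-month mark onward).
    cutoff = samples[-1][0] - timedelta(days=window_days)
    window = [(d, u) for d, u in samples if d >= cutoff]

    # Ordinary least-squares fit of usage against elapsed days.
    xs = [(d - window[0][0]).days for d, _ in window]
    ys = [u for _, u in window]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x

    # Extrapolate beyond the last observed day.
    return intercept + slope * (xs[-1] + horizon_days)
```

Restricting the fit to a recent window is what lets the prediction ignore an idle period early in the array's life.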
For users with an “always be prepared” attitude, Kaminario Clarity offers the ability to examine what would happen to capacity in certain scenarios or use cases. For example, a user might be considering adding a new workload to an array but is concerned about how it will affect the array’s capacity going forward. This capability helps predict capacity usage which, in turn, helps organizations avoid overprovisioning and prevent overutilization. Should the need arise, Kaminario Clarity also enables users to dynamically migrate and rebalance capacity between arrays.
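A what-if check of this kind boils down to simple arithmetic: add the new workload's expected physical footprint to current usage and compare against the utilization ceiling. A minimal sketch, with hypothetical function and parameter names (the 3:1 reduction ratio below is an assumed example, not a Kaminario figure):

```python
def what_if_add_workload(raw_tb, used_tb, workload_logical_tb,
                         reduction_ratio, ceiling=0.80):
    """Estimate array utilization after adding a workload, assuming
    the new data reduces at `reduction_ratio` (e.g. 3.0 means 3:1).

    Returns (projected_utilization, exceeds_ceiling).
    Illustrative sketch only, not Clarity's actual model.
    """
    # Physical footprint of the new workload after data reduction.
    workload_physical_tb = workload_logical_tb / reduction_ratio
    projected = (used_tb + workload_physical_tb) / raw_tb
    return projected, projected > ceiling
```

For instance, adding a 45 TB (logical) workload that reduces 3:1 to an array with 100 TB raw and 60 TB used would land at 75% utilization, just under the ceiling.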
Capacity Planning in Kaminario Clarity
In the “Capacity Planning” tab of Kaminario Clarity, users can look at the historical and predictive capacity trends for a single K2. Clarity’s analytics look at the latest physical usage – that is, the storage actually consumed after data reduction – as well as allocated usage and data reduction ratios.
As anyone who has worked with storage for a while knows, as a general rule of thumb, storage utilization should not exceed 80%. This is the threshold at which data reclamation routines become more aggressive. The software tries to keep as much physical space free as possible, but because more resources go toward reclamation than toward servicing user data, this comes at the cost of a small decline in performance. Crossing the 80% mark can also result in latency spikes.
In the demo video above, we find that K2-6959 is at 79% and is estimated to reach the 80% threshold in less than a month. On the other hand, K2-5405 hasn’t been heavily utilized and, at this rate, isn’t projected to reach the 80% threshold for at least another year. For this reason, the user might consider moving volumes or volume groups from K2-6959 to K2-5405 to help balance the load.
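The "time to threshold" projection behind these estimates is straightforward to illustrate: divide the remaining headroom by the current growth rate. The growth rates below are made-up examples, not figures from the demo:

```python
def days_to_threshold(used_pct, daily_growth_pct, threshold_pct=80.0):
    """Days until utilization crosses `threshold_pct` at the current
    linear growth rate; None if usage is flat or shrinking.

    Illustrative arithmetic only -- Clarity's forecasts are
    model-based, not a single division.
    """
    if daily_growth_pct <= 0:
        return None          # not growing, threshold never reached
    if used_pct >= threshold_pct:
        return 0             # already over the line
    return (threshold_pct - used_pct) / daily_growth_pct
```

An array at 79% growing 0.05 points per day would cross 80% in about 20 days, while one at 30% growing 0.1 points per day has roughly 500 days of headroom, which matches the kind of contrast drawn between K2-6959 and K2-5405 above.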