Kubernetes is a powerful cloud-native automation tool that lets an environment manage its own configuration, scaling workloads up and down as needed. Employing Kubernetes best practices can yield substantial cost savings compared with statically provisioned or manually managed systems.
How effectively that automation limits costs depends directly on whether the right configuration is applied to the environment. Regular feedback and adjustment are therefore essential to keep configurations optimized for the desired cloud spend. So how is this done?
The self-service and rapid elasticity characteristics of cloud systems break the command-and-control model traditionally run by IT departments, in which costs and resources are centrally controlled.
The NIST model of cloud computing lists on-demand self-service as an essential characteristic: “A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.”
Another NIST essential characteristic, “rapid elasticity,” is how cloud systems became known for cost overruns: “Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.”
The “measured service” essential characteristic comes to the rescue and is core to cloud cost monitoring and optimization: “Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.”
Still, if the consumer, operator or end-user does not or cannot access the measured service to understand cost, it is easy to overspend.
Kubernetes is a cloud-native tool and carries with it the inherent benefits and dangers of cloud systems.
Kubernetes is designed to automate ‘right-sizing’ a cloud environment: scaling systems up and down in response to demand on CPU and memory. The danger is that this inherent automation can scale up without oversight and drive costs to new and unexpected heights. It is possible to manually set limits on resource consumption, but again, without oversight this can cause performance issues (out of memory, out of CPU).
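As a minimal sketch of both mechanisms (the names and resource values here are illustrative, not recommendations), a Deployment can declare per-container requests and limits to cap consumption, and a HorizontalPodAutoscaler can scale replicas up and down against a CPU target:

```yaml
# Illustrative only: names and numbers are hypothetical, not tuning advice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          resources:
            requests:            # what the scheduler reserves, and what you effectively pay to provision
              cpu: 250m
              memory: 256Mi
            limits:              # hard ceiling; exceeding the memory limit gets the container OOM-killed
              cpu: 500m
              memory: 512Mi
---
# Scale between 2 and 10 replicas to hold average CPU utilization near 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The trade-off described above lives in these numbers: generous limits and a high maxReplicas protect performance but let spend grow, while tight values cap spend but risk OOM kills and CPU throttling.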
Now that you know there’s a problem, here’s the solution. There are two ways to tackle it:
Manual: You see the cost and make an adjustment afterward, using metrics to evaluate resource use and a tool like Kubecost to calculate costs.
Automated: You evaluate costs and automatically tune the system to maintain appropriate limits using AI-driven software.
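One concrete, open source instance of the automated pattern (distinct from the AI-driven commercial tools referred to above) is the Kubernetes Vertical Pod Autoscaler, which watches actual usage and adjusts requests for you. A sketch, assuming the VPA add-on is installed in the cluster and reusing the hypothetical “web” Deployment from the earlier example:

```yaml
# Sketch only: requires the Vertical Pod Autoscaler add-on to be installed in the cluster.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical Deployment from the earlier sketch
  updatePolicy:
    updateMode: "Auto"     # VPA evicts pods and re-creates them with updated requests
  resourcePolicy:
    containerPolicies:
      - containerName: web
        minAllowed:
          cpu: 100m
          memory: 128Mi
        maxAllowed:        # guardrail so the automation cannot raise requests (and cost) without bound
          cpu: "2"
          memory: 2Gi
```

The maxAllowed guardrail is the cost control here: without it, the automation is free to grow requests, and therefore spend, as far as observed demand pushes them.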
On the technical side of things:
When it comes to cost ownership:
You can do this manually. Although Kubernetes is an automation engine that can be tuned across myriad parameters, the typical approach to optimizing K8s is very manual: look at metrics → evaluate costs → edit the system configuration → repeat. Even with reasonable data from a metrics engine, the range of instance types and sizes, the many tuning parameters and more make evaluating cloud pricing far from straightforward.
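To make that loop concrete, here is a sketch of a single pass with hypothetical numbers: metrics (from Kubecost or a similar tool) show a container that requests 1 CPU and 1Gi of memory but typically uses only around 200m and 300Mi, so you edit the manifest to bring requests closer to reality and re-apply it:

```yaml
# One pass of the manual loop, applied to the hypothetical "web" Deployment above.
# Observed usage (illustrative): ~200m CPU and ~300Mi memory, versus 1 CPU / 1Gi requested.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          resources:
            requests:
              cpu: 300m      # was "1"  -> sized to observed usage plus headroom
              memory: 384Mi  # was 1Gi
            limits:
              cpu: 500m
              memory: 512Mi
# Re-apply with `kubectl apply -f deployment.yaml`, watch the metrics, and repeat.
```

Then you watch spend and performance over the next billing period and run the loop again.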