We've had the opportunity, through multiple client engagements, to see first-hand how companies are utilizing their cloud services and how many organizations would benefit from a structured approach to cloud computing cost savings. Some corrections are straightforward, and some require rethinking how cloud services are implemented. With that in mind, we wanted to walk you through everything you need to know about managing, monitoring, and reducing the cost of cloud computing services.
Cloud computing offers major financial and operational benefits for companies. However, it can also introduce complexity and cost that, if not governed properly, can endanger the health of the business. Enterprises that monitor and optimize their cloud computing costs will see a positive, immediate, and long-term impact on their organizations. As your platforms are migrated to the cloud, many of the costs will move from an up-front capital expenditure structure (resources plus on-premises hardware costs) to operational expenditures (e.g., paying for computing power as you consume it).
And the biggest difference isn’t necessarily how it’s billed; it’s all about how it’s framed within the organization.
Cloud budgeting and finances are very different from on-premises. How leaders frame the costs of on-premises vs. cloud, such as capital expenditures (CapEx) vs. operational expenditures (OpEx), often ends up driving the conversations around how the cloud should be built and utilized. Educating leadership on the differences is key, because trying to build everything out as close to a CapEx model as possible usually leads to a poor cloud design and forfeits many of the benefits of the cloud, such as on-demand scaling.
Leadership needs to understand the different kinds of budgeting and tax implications while planning for the future.
Managing hardware, networks, and storage is a significant undertaking that often requires an entire department. Scaling up and scaling down is generally very slow because building new servers and decommissioning old ones takes significant time. Resource-for-resource, on-premises tends to be cheaper in the long term, but efficient use is more difficult to achieve across an entire organization. And that’s where cloud computing comes in.
Cloud cost management and optimization help companies save on their cloud bill by reducing waste, alerting users to lowered demand, or automatically scaling usage to optimal rates. In addition to budgeting cloud costs, cloud cost management solutions often provide reporting features that outline waste and redundancies, which can increase the efficiency of usage, save on hidden costs, decrease TCO, and help businesses go to market faster.
We would like to give you a rundown of how you can manage your IT costs effectively. This list isn’t meant to be exhaustive, but if you pay close attention, you’ll be able to do everything you need at a lower cost.
Let’s start with how cloud services are billed. Then we'll give you some insight into the process of how ConvergeOne helps our clients reduce cloud costs.
What are the most common cloud cost models?
There are a variety of ways to utilize cloud resources and a variety of ways to pay for them. It’s also important to note that organizations don’t need to choose a single cost model. An optimal strategy tends to be a blend of all three of these categories.
Here are the most common cloud cost models: on-demand (pay-as-you-go) pricing, spot or preemptible instances (steeply discounted capacity that the provider can reclaim on short notice), and reserved or committed-use discounts.
Of the three, Reserved Instances (RIs) deserve a closer look.
These are larger discounts offered in exchange for upfront payment and a time commitment; RI savings can reach up to 75%. Many companies that use cloud computing have this option available but don’t take advantage of it. If you’re a heavy user of cloud services, it’s well worth investigating.
Reserved Instances are appropriate for steady-state loads for systems that will be operational for a long period of time (three years is usually the most cost-effective). Some organizations may fall into the trap of using peak load when deciding on a reserved instance count, resulting in wasted resources. Spikes in resource usage (such as a time entry system at the end of the month) should be covered by on-demand or spot instances.
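To make that trade-off concrete, here is a minimal sketch of the sizing math in Python. The hourly rates, discount, and instance counts are hypothetical, not quotes from any provider; the point is only that reserving for the steady-state baseline and absorbing the spike with on-demand capacity beats reserving for the peak.

```python
# A minimal sketch of the Reserved Instance sizing math. All rates, discounts,
# and instance counts below are hypothetical and purely illustrative.

HOURS_PER_MONTH = 730
ON_DEMAND_RATE = 0.10        # $ per instance-hour (hypothetical)
RI_EFFECTIVE_RATE = 0.04     # $ per instance-hour with a multi-year commitment (hypothetical)

baseline_instances = 20      # needed around the clock (steady state)
peak_instances = 35          # needed only during month-end spikes
peak_hours = 72              # hours per month spent at peak

def monthly_cost(reserved: int) -> float:
    """Cost of covering the workload with `reserved` RIs plus on-demand for the rest."""
    ri_cost = reserved * RI_EFFECTIVE_RATE * HOURS_PER_MONTH        # RIs bill whether used or not
    steady_overflow = max(baseline_instances - reserved, 0) * ON_DEMAND_RATE * HOURS_PER_MONTH
    peak_overflow = max(peak_instances - reserved, 0) * ON_DEMAND_RATE * peak_hours
    return ri_cost + steady_overflow + peak_overflow

print(f"RIs sized to baseline ({baseline_instances}): ${monthly_cost(baseline_instances):,.2f}/month")
print(f"RIs sized to peak     ({peak_instances}): ${monthly_cost(peak_instances):,.2f}/month")
```

With these illustrative numbers, reserving for the peak costs noticeably more per month than reserving for the baseline, because the extra RIs sit idle outside the spike.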
In terms of budgeting and forecasting, here is a quick look at the differences between on-premises and the cloud.
It’s never been more important for CFOs and CIOs to unite around understanding cloud computing costs. With on-premises hardware and software, costs were easy to calculate because they were largely fixed and predictable.
A more scalable and flexible cloud model brings a lot of opportunities to build a stronger IT infrastructure, but it also adds cost complexity.
On-premises budgeting is fairly straightforward. You set how much the IT team is allowed to spend on hardware and software, and you issue a PO against that cost. If the cost doesn’t align specifically with the PO, the invoices cannot be approved without further discussion.
With cloud computing, proper governance is much more complex and should be treated differently from on-premises budgeting. It’s increasingly important to implement the right governance to align IT costs and performance while preserving the technology team’s ability to move fast.
Handling this complexity often means thinking about costs differently. For example, saying “we spent $100,000 on the cloud this month” doesn’t provide much context about the flexibility of the cloud.
A different way to think about cost is that we spent $8 per user this month, or $0.25 per user-hour this month. It can also be useful to understand how that cost changes as the system scales with usage. Is it linear, logarithmic, or exponential? Is there a way to reduce the slope of the cost curve so that larger scale means higher profit margins?
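As a rough illustration, that unit-economics view takes very little to compute. The spend and usage figures below are hypothetical:

```python
# A small sketch of unit-cost reporting. Spend and usage numbers are hypothetical.

monthly_cloud_spend = 100_000      # $ for the month
active_users = 12_500
user_hours = 400_000

print(f"Cost per user:      ${monthly_cloud_spend / active_users:.2f}")       # $8.00
print(f"Cost per user-hour: ${monthly_cloud_spend / user_hours:.2f}")         # $0.25

# Watching how the unit cost moves as usage grows hints at the shape of the cost curve:
# if cost per user falls as users grow, scale improves margins; if it climbs, it erodes them.
history = [(5_000, 55_000), (10_000, 85_000), (12_500, 100_000)]   # (users, spend) per month
for users, spend in history:
    print(f"{users:>7,} users -> ${spend / users:.2f} per user")
```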
You shouldn’t have to walk into an executive status meeting and ask, “So how much money did we spend on cloud computing this month?” and be surprised by the answer.
A financial organization should be able to control costs without limiting IT’s ability to support the organization effectively. This does require a bit more understanding of the technologies being used and why they are used.
We highly recommend creating a budget per cloud application that can be monitored and controlled. The good news is that the major cloud vendors offer tools that let you do this automatically through their platforms. GCP, for example, allows you to create budgets and trigger alerts against those budgets once certain spending thresholds are crossed.
You should also know that the GCP budgeting and alert tooling allows you to configure alerts per specific “project” and send them to the individuals (in IT or Finance) who should be notified about their cloud application.
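The exact configuration differs by provider, so here is a provider-agnostic sketch of the idea: a budget per application, with alerts routed to the people responsible as spending crosses each threshold. The helper names, thresholds, and figures are hypothetical; this is not the GCP API itself.

```python
# A hypothetical sketch of per-application budget alerting. month_to_date_spend
# would normally come from your billing export; notify() stands in for email/chat/etc.

from dataclasses import dataclass

@dataclass
class AppBudget:
    application: str
    owner_email: str
    monthly_budget: float                         # $
    alert_thresholds: tuple = (0.5, 0.9, 1.0)     # alert at 50%, 90%, and 100% of budget

def notify(recipient: str, message: str) -> None:
    # Placeholder: route to email, chat, or a ticketing system in practice.
    print(f"ALERT -> {recipient}: {message}")

def check_budget(budget: AppBudget, month_to_date_spend: float) -> None:
    """Compare actual spend against the budget and notify the owner for each crossed threshold."""
    spent_fraction = month_to_date_spend / budget.monthly_budget
    for threshold in budget.alert_thresholds:
        if spent_fraction >= threshold:
            notify(
                budget.owner_email,
                f"{budget.application}: ${month_to_date_spend:,.0f} spent, "
                f"{spent_fraction:.0%} of its ${budget.monthly_budget:,.0f} monthly budget",
            )

check_budget(AppBudget("crm-portal", "finance-team@example.com", 20_000), month_to_date_spend=18_500)
```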
Create a detailed resource and workload plan
The devil is in the details when it comes to cloud computing costs. It’s one thing to manage IT costs when prices are fixed and agreed upon; it’s another when you’re paying for per-unit usage with no visibility into how those units are being consumed.
It’s a balancing act: ensuring IT has everything it needs to help the business perform while making sure costs aren’t being racked up frivolously.
Here are some things the CFO and finance team should have: visibility into which applications are consuming which resources, what each of them costs, and how that compares to the budget set for them.
In the end, the finance team should be able to ask, “Is this the best usage of our money? Are there ways to achieve application requirements while maintaining sustainable costs?"
And they should be able to govern how their budgets are being utilized.
The last thing IT wants is to be “governed,” but giving them a blank check isn’t the way to go. Here are a few ways to implement strong governance rules without restricting IT’s ability to scale.
IT must be able to do its job and remain agile. Preventing your teams from doing their jobs with overzealous governance also incurs a cost: lost productivity and work thrown over the wall to other teams.
Traditionally, governance rules defined which individuals could approve a purchase order and the invoices issued against it. Many finance organizations have carried that model into the cloud by assigning authority over who can purchase new cloud services and who can approve scaling applications beyond defined thresholds.
While it is a good idea to implement governance that prevents teams from spending frivolously or overspending by mistake, it shouldn’t be used to restrict their productivity.
It’s worth mentioning that you can get a consumption cost per application, but it might not give you the full view of the entire cost. Here is a good definition of application vs. workload:
In the simplest sense, an application is code that performs a given function—nothing more, nothing less.
In contrast, a cloud workload is all the resources and processes that are necessary to make an application useful. A cloud workload typically includes an application, but it also involves things like data served to and generated by the application, network resources required to connect users to the application (or to connect different parts of the application together) and users — without whom your application would not really serve its purpose. [src]
It’s important to track both the application’s cost and its full workload cost and compare them against the budget.
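One common way to approximate this is to tag every resource with the application it supports and roll billing line items up by that tag, so the application’s own compute and the rest of its workload (database, network, storage) land in the same bucket. The line items and label names below are illustrative, not any specific provider’s billing schema:

```python
# A hypothetical sketch of rolling tagged billing line items up to the workload level.

from collections import defaultdict

line_items = [
    {"application": "crm-portal", "component": "app-servers", "cost": 4_200.0},
    {"application": "crm-portal", "component": "database",    "cost": 2_900.0},
    {"application": "crm-portal", "component": "networking",  "cost": 650.0},
    {"application": "crm-portal", "component": "storage",     "cost": 480.0},
    {"application": "reporting",  "component": "app-servers", "cost": 1_100.0},
]

workload_cost = defaultdict(float)
for item in line_items:
    workload_cost[item["application"]] += item["cost"]

for app, total in sorted(workload_cost.items()):
    print(f"{app}: ${total:,.2f} total workload cost this month")
```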
Because of the complex nature of how applications can be created in the cloud, every finance organization should assume the environment contains at least a few rogue applications that continue to consume resources even though they are no longer actively used.
Our team usually recommends a regularly scheduled audit of the environment with the IT organization to understand which running applications could be turned off.
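A simple starting point for that audit is to flag resources whose utilization has been near zero for an extended window and review them with the owning team. The records, field names, and threshold below are hypothetical; the real data would come from your monitoring or billing export:

```python
# A hypothetical sketch of an idle-resource (rogue application) audit.

resources = [
    {"name": "legacy-reporting-vm", "owner": "unknown",   "avg_cpu_30d": 0.01, "monthly_cost": 310.0},
    {"name": "crm-portal-prod",     "owner": "app-team",  "avg_cpu_30d": 0.46, "monthly_cost": 4_200.0},
    {"name": "poc-data-pipeline",   "owner": "data-team", "avg_cpu_30d": 0.00, "monthly_cost": 780.0},
]

IDLE_CPU_THRESHOLD = 0.05   # under 5% average CPU over 30 days is worth a conversation

candidates = [r for r in resources if r["avg_cpu_30d"] < IDLE_CPU_THRESHOLD]
for r in sorted(candidates, key=lambda r: r["monthly_cost"], reverse=True):
    print(f"Review with IT: {r['name']} (owner: {r['owner']}), ${r['monthly_cost']:,.2f}/month")
```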
Cloud computing is the future of how organizations will operate, and the overlap between IT and Finance grows every day. None of this can be done in silos. Follow the steps above to ensure you’re getting the performance you need without frivolous overspending.