Blog


Blog Post 5 min read

AWS Savings Plan: All You Need to Know

Organizations using Amazon Web Services (AWS) have traditionally leveraged Reserved Instances (RIs) to realize cost savings by committing to a specific instance type and operating system within an AWS region. Nearly two years ago, AWS rolled out a new program called Savings Plans, which gives companies another way to reduce costs by making an advance commitment for a one-year or three-year fixed term. The first impression was that saving money on AWS would become significantly simpler and easier, because the required commitment is looser. The reality is the opposite: with Savings Plans it is significantly harder to manage your spending and lower your AWS costs, especially if you rely only on Amazon's own tools.

1. What are Savings Plans?

To understand why Savings Plans significantly complicate cloud cost management, it is necessary to briefly review the two Savings Plan options.

EC2 Instance Savings Plan

The EC2 Instance Savings Plan is essentially a Standard Reserved Instance without the requirement to commit to an operating system up front. Since changing an operating system is not a routine event, this adds very little value.

Compute Savings Plan

With this product Amazon has clearly introduced something new. The customer no longer has to commit to the type of compute they are going to use. You no longer have to commit to the machine type, its size, or even the region where the machine will run, all of which are significant advantages. In addition, Amazon no longer requires a commitment to the service that will consume the compute. It does not have to be EC2: when purchasing a Compute Savings Plan, compute used in EMR, ECS, or EKS clusters, or in Fargate, also counts toward the commitment and receives the discount. With a Convertible RI, getting a discount on a server type other than the one originally purchased required an RI exchange operation. With the new Compute Savings Plan, no exchange is necessary and the discount is applied automatically across the different server types.

The bottom line is that you commit to an hourly spend on compute; you choose whether the commitment is for one or three years and how you want to pay, i.e. all upfront, partial upfront, or no upfront. At this stage, it sounds like Compute Savings Plans would simplify and lower your costs, since the commitment is more flexible. However, as stated above, the reality is much more complex.

2. Are Amazon's Savings Plan Recommendations Right for Me?

Let's start with the most trivial yet critical question: how do I know the optimal hourly compute commitment for me? Amazon offers recommendations of what your hourly compute spend should be and what it feels you should commit to buying. It is interesting that Amazon offers these recommendations considering it does not share the underlying usage data with its users. So what is the recommendation based on? Amazon is asking users to commit to spending hundreds of thousands of dollars a month without any real data or usage information to help them make an educated investment decision. Usually, when people commit to future usage, they do so based on past usage data. The one thing Amazon does let you do is choose the time period on which the recommendation is based. For example, based on usage over the last 30 days of a sample account, Amazon recommended a commitment of $0.39 per compute hour.
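To make a figure like $0.39 per hour concrete, here is a minimal sketch of how an hourly Compute Savings Plan commitment is applied to a single hour of usage. The 30 percent discount and the usage numbers are invented for illustration; real Savings Plans rates vary by instance family, size, and region.

```python
# Illustrative only: how a Compute Savings Plan hourly commitment is applied.
# The discount rate and usage figures are made-up numbers, not real AWS pricing.

HOURLY_COMMITMENT = 0.39   # committed spend per hour, in Savings Plans dollars
SP_DISCOUNT = 0.30         # assumed average discount vs. on-demand for this workload

def hourly_charge(on_demand_usage: float) -> float:
    """Total charge for one hour whose eligible compute would cost
    `on_demand_usage` dollars at on-demand rates."""
    # Eligible usage is re-rated at the discounted Savings Plans price
    # and drawn against the commitment.
    sp_rated = on_demand_usage * (1 - SP_DISCOUNT)
    if sp_rated <= HOURLY_COMMITMENT:
        # The commitment is billed in full even when usage falls short of it.
        return HOURLY_COMMITMENT
    # Usage beyond the commitment falls back to on-demand pricing.
    overflow = (sp_rated - HOURLY_COMMITMENT) / (1 - SP_DISCOUNT)
    return HOURLY_COMMITMENT + overflow

for usage in (0.30, 0.56, 1.00):   # quiet hour, roughly break-even, busy hour
    print(f"on-demand ${usage:.2f}/hr -> billed ${hourly_charge(usage):.2f}/hr")
```

Note that the quiet hour is still billed the full $0.39: if the last 30 days were unusually busy, a recommendation based on them locks that gap in for one to three years.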
The IT manager can simply accept Amazon's recommendation, but with no ability to check the underlying data, the resulting purchase could cost the company a significant amount of additional and unnecessary money. In the example above there was significant usage over the last 30 days, but a couple of weeks earlier there may have been a major change, such as a reduction in server volume and/or an RI purchase, in which case the recommendation should have been considerably lower. This is even truer if Savings Plans had already been purchased and were already earning an actual discount.

3. How do I know which Savings Plan is best for my company?

This is the large and significant gap where Umbrella for Cloud Cost can provide a lot of value. Using Umbrella, you can see your average hourly cost per day for the last 30 days. Since the Savings Plan estimate does not include compute hours already receiving an RI discount, Umbrella displays only the cost of on-demand compute. It is also critical for a user who has already purchased and is utilizing Savings Plans to know how this impacts their costs before making any additional commitments. Umbrella shows the actual cost of each individual compute hour over the last 30 days, enabling educated decisions about significant multi-year financial commitments. Umbrella's algorithm analyzes all of your data to deliver customized recommendations on the optimal hourly compute commitment you should actually make.

It is important to note that when purchasing a Compute Savings Plan, it is not possible to know at the time of purchase what your exact discount will be. Unlike with RIs, the actual discount can only be estimated. This uncertainty is due to an additional complexity in Compute Savings Plans: each type of server receives a different discount, so in practice the discount you receive depends on the type of server you actually run and on which of your usage Amazon's algorithm chooses to apply the Savings Plan discount to.
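As a rough do-it-yourself sanity check along the same lines, the sketch below averages on-demand compute spend per hour over a 30-day cost export, skipping rows already covered by RIs or existing Savings Plans. The CSV columns (timestamp, pricing_model, cost) and the file name are hypothetical; adapt them to whatever your billing export actually contains. This is only an average of past usage, not a substitute for a recommendation that accounts for recent changes.

```python
import csv
from collections import defaultdict

def average_hourly_on_demand(path: str) -> float:
    """Average on-demand compute spend per hour across the hours in the export."""
    cost_per_hour = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["pricing_model"] != "OnDemand":
                continue  # skip usage already covered by RIs or Savings Plans
            hour = row["timestamp"][:13]   # e.g. "2024-05-01T14"
            cost_per_hour[hour] += float(row["cost"])
    return sum(cost_per_hour.values()) / max(len(cost_per_hour), 1)

print(f"avg on-demand compute: ${average_hourly_on_demand('compute_usage_30d.csv'):.2f}/hour")
```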
Cloud Cost Management
Blog Post 13 min read

Manage Cloud Costs Like the Pros: 6 Tools and 7 Best Practices

Continuous monitoring, deep visibility, business context, and forecasting are essential capabilities for eliminating unpredictable cloud spend
Blog Post 5 min read

12 Must-Read Data Analytics Websites of 2024

When it comes to staying current on big data and analytics, you'll want to bookmark these leading blogs and sites.
Blog Post 10 min read

The Rise of FinOps

Many companies have tried to feed business data, such as business activity, into IT or APM monitoring solutions, only to discover the data is too dynamic for static thresholds. Some companies choose to rely on analyzing BI dashboards to find issues, but that leaves anomaly detection to chance. As companies have tried to solve these challenges, AI is driving a future where business data is monitored autonomously.
Blog Post 5 min read

Good Catch: Cloud Cost Monitoring

Aside from ensuring each service is working properly, one of the most challenging parts of managing a cloud-based infrastructure is cloud cost monitoring. There are countless services to keep track of, including storage, databases, and cloud computing, each with its own complex pricing structure. Cloud cost monitoring is essential for both cloud cost management and optimization. But monitoring cloud spend differs from monitoring other organizational costs in that it can be difficult to detect anomalies in real time and to accurately forecast monthly costs.

Many cloud providers, such as AWS, Google Cloud, and Azure, provide a daily cost report, but in most cases this is not enough. For example, if someone incorrectly queries a database for a few hours, costs can skyrocket, and with a daily report you wouldn't detect the spike until it's too late. While there are cloud cost management tools that allow you to interpret costs, these technologies often fall short because they don't provide the granularity required for real-time monitoring. Similarly, without a real-time alert to detect and resolve the anomaly, the potential to negatively impact the bottom line is significant. As we'll see from the examples below, only an AI-based monitoring solution can effectively monitor cloud costs. In particular, there are three layers to Umbrella's holistic cloud monitoring solution:

Cost monitoring: Instead of just providing generic cloud costs, one of the main advantages of AI-based monitoring is that costs are specific to the service, region, team, and instance type. When anomalies do occur, this level of granularity allows for a much faster time to resolution.

Usage monitoring: The next layer consists of monitoring usage on an hourly basis. This means that if usage spikes, you don't need to wait a full day to resolve the issue and can actively prevent cost increases.

Cost forecasting: Finally, the AI-based solution can take in every cloud-based metric, even in multi-cloud environments, learn its normal behavior on its own, and create cost forecasts that allow for more effective budget planning and resource allocation.

Now that we've discussed the three layers of AI-based cloud cost monitoring, let's review several real-world use cases.

Network Traffic Spikes

In the example below, the service is an AWS EC2 instance being monitored on an hourly basis. The service experienced a more than 1,000 percent increase in network traffic, from 292.5M to 5.73B, over the course of three hours. If the company had been relying on a daily cloud cost report, this spike would have been missed and costs would have skyrocketed, as the network traffic would likely have stayed at this heightened level at least until the end of the day. With a real-time alert sent to the appropriate team, paired with root-cause analysis, the anomaly was resolved promptly, ultimately resulting in cost savings for the company.

Spike in Average Daily Bucket Size

The next use case is from an AWS S3 service on an hourly time frame. In this case, the first alert concerned a spike in head requests by bucket. As you may know, bucket sizes can go up and down frequently, but if you're looking only at the current bucket size you often don't actually know how much you're using relative to normal levels.

The key difference in this example is that, instead of simply looking at absolute values, Umbrella's anomaly detection was looking at the average daily bucket size. The spike in bucket size is not larger than the typical spikes; what is anomalous is the time of day at which it occurred. By looking at the average daily bucket size and monitoring on a shorter time frame, the company received a real-time alert and was able to resolve the issue before it incurred a significant cost.
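The idea behind flagging an otherwise ordinary-sized spike at an unusual hour can be illustrated with a deliberately simplified baseline check. This is not Umbrella's algorithm, just a toy sketch: it keeps a per-hour-of-day history and flags a reading that deviates from that hour's norm by more than a few standard deviations. The numbers are invented.

```python
from statistics import mean, stdev

def is_anomalous(history: dict[int, list[float]], hour_of_day: int,
                 value: float, sigmas: float = 3.0) -> bool:
    """history maps hour of day (0-23) to previous readings observed at that hour."""
    past = history.get(hour_of_day, [])
    if len(past) < 7:                    # not enough history for that hour yet
        return False
    mu, sd = mean(past), stdev(past)
    return abs(value - mu) > sigmas * max(sd, 1e-9)

# A reading that would be unremarkable at noon can still be flagged at 3 a.m.
history = {3: [10, 12, 11, 9, 10, 11, 10], 12: [80, 95, 90, 85, 100, 92, 88]}
print(is_anomalous(history, 3, 85))    # True: far above the 3 a.m. baseline
print(is_anomalous(history, 12, 85))   # False: normal for noon
```

A production system would also need to handle trend, weekly seasonality, and metric-specific noise, which is exactly where learned baselines earn their keep over fixed thresholds.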
Spike in Download Rates

A final example of cloud cost monitoring involves the AWS CloudFront service, again monitored on an hourly timescale. In this case, there was an irregular spike in the rate of CloudFront bytes downloaded. As in the other examples, if the company had only been monitoring costs reactively at the end of the day, this could have severely impacted the bottom line. By taking a proactive approach to cloud cost management with the use of AI and machine learning, the anomaly was quickly resolved and the company saved a significant amount of otherwise wasted spend.

Summary: Cloud Cost Monitoring

As we've seen from these three examples, managing a cloud-based infrastructure requires a highly granular solution that can monitor 100 percent of the data in real time. If unexpected cloud activity isn't tracked in real time, it opens the door to runaway costs, which in most cases are entirely preventable. In addition, it is critical that cloud teams understand the business context of their cloud performance and utilization. An increase in cloud costs might be the result of business growth, but not always. Understanding whether a cost increase is proportionately tied to revenue growth requires context that can be derived only through AI monitoring and cloud cost management. AI models allow companies to become proactive, rather than reactive, in their cloud financial management by catching and alerting on anomalies as they occur. Each alert is paired with a deep root-cause analysis so that incidents can be remediated as quickly as possible. By distilling billions of events into a single scored metric, IT teams are able to focus on what matters, leave alert storms, false positives, and false negatives behind, gain control over their cloud spend, and proactively work toward cloud cost optimization.
Cloud Cost Monitoring
Blog Post 10 min read

How We're Cutting $360K From Umbrella’s Annual Cloud Costs 

In this guide, we’ll discuss exactly what strategic actions you can take in order to cut cloud costs, and lay out how our company used a plan that integrated AI-based monitoring to effectively cut $360K from our cloud costs.