Resources


Blog Post 5 min read

AWS Savings Plan: All You Need to Know

Organizations using the Amazon Web Services (AWS) cloud have traditionally leveraged Reserved Instances (RIs) to realize cost savings by committing to the use of a specific instance type and operating system within an AWS region. Nearly 2 years ago, AWS rolled out a new program called Savings Plans, which gives companies a new way to reduce costs by making an advance commitment for a one-year or three-year fixed term. The first impression was that saving money on AWS would become significantly simpler and easier, since the required commitment is looser. The reality is the complete opposite: with Savings Plans, it is significantly harder to manage your spending and lower your AWS costs, especially if you rely only on Amazon's tools.

1. What are Savings Plans?

To understand why the new Savings Plans significantly complicate cloud cost management, it is necessary to briefly review the two Savings Plan options.

EC2 Instance Savings Plan

The EC2 Instance Savings Plan is essentially a Standard Reserved Instance without the requirement to commit to an operating system up front. Since changing an operating system is not routine, this adds very little value.

Compute Savings Plan

With this product, Amazon has clearly introduced something new. The customer no longer has to commit to the type of compute they are going to use: you no longer commit to the machine type, its size, or even the region where the machine will run, all of which are significant advantages. In addition, Amazon no longer requires a commitment to the service that will use the compute. It does not have to be EC2, which means that with a Compute Savings Plan, compute used in EMR, ECS, or EKS clusters, or in Fargate, also counts toward the commitment and receives a discount. With Convertible RIs, getting a discount on a server type other than the one the RI was originally purchased for required an RI exchange operation.
With the new Compute Savings Plan, no exchange is necessary: the discount is automatically applied across different server types. The bottom line is that you commit to an hourly spend on computing time, and you choose whether the commitment is for one or three years and how you want to pay: all upfront, partial upfront, or no upfront (paid monthly). At this stage, it sounds like Compute Savings Plans should simplify and lower your costs, since the commitment is more flexible. However, as stated above, the reality is much more complex.

2. Are Amazon's Savings Plan Recommendations Right for Me?

Let's start with the most trivial yet critical question: how do I know the optimal computing-time commitment for me? Amazon offers recommendations for what your computing-time spend should be and what it feels you should commit to buying. It is interesting that Amazon offers these recommendations considering it does not share the underlying usage data with its users. So what is the recommendation based on? Amazon is asking users to commit to spending hundreds of thousands of dollars a month without any real data or usage information to help them make an educated investment decision. Usually, when people commit to future usage, they do so based on past usage data. The one thing Amazon does allow you to do is choose the time period on which its recommendation is based. For example, based on usage over the last 30 days of a sample account, Amazon recommended a spend of $0.39 per computing hour. The IT manager can simply accept Amazon's recommendation, but with no ability to check the data, the resulting purchase could cost the company a significant amount of additional and unnecessary money.
In the example above, there was significant usage over the last 30 days. However, a couple of weeks prior, there may have been a significant change, such as a reduction in server volume and/or an RI acquisition, in which case the recommendation should have been notably lower. This is even more true if Savings Plans had already been purchased and were already earning a discount.

3. How do I know which Savings Plan is best for my company?

This is the large and significant vacuum where Umbrella for Cloud Cost can provide a lot of value. Using Umbrella, you can see your average hourly cost per day for the last 30 days. Since the Savings Plan estimate should not include compute hours already receiving an RI discount, Umbrella displays only the cost of on-demand compute. It is also critical for a user who has already purchased and is utilizing Savings Plans to know how this impacts their costs before making any additional commitments. Umbrella shows the actual cost of each individual computing hour over the last 30 days to enable educated decisions about significant multi-year financial commitments. Umbrella utilizes its unique algorithm and analyzes all your data to deliver customized recommendations on the optimal computing-time cost you should actually commit to. It is important to note that when purchasing a Compute Savings Plan, it is not possible to know at the time of purchase what your exact discount will be. Unlike with RIs, the actual discount can only be estimated. This uncertainty is due to an additional complexity of Compute Savings Plans: each type of server receives a different discount, so in practice the discount you receive depends on the type of server you actually run and whether Amazon's algorithm chooses to apply the Savings Plan discount to that server type.
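To make the commitment math concrete, here is a minimal sketch of how an hourly commitment could be chosen from historical on-demand costs. The 30% discount rate, the brute-force search, and the cost model (the commitment covers on-demand usage worth commitment / (1 - discount), overflow is billed on demand) are illustrative assumptions, not Amazon's published rates or Umbrella's algorithm.

```python
# Sketch: pick an hourly Savings Plan commitment that minimizes total cost
# over a window of historical on-demand usage. Discount rate and usage data
# are illustrative assumptions, not AWS-published figures.

def total_cost(hourly_on_demand, commitment, discount=0.3):
    """Total spend if we commit to `commitment` $/hr at Savings Plans rates.

    Each hour we pay the commitment regardless of usage; the commitment
    covers on-demand usage worth commitment / (1 - discount), and any
    usage beyond that is billed at the full on-demand rate.
    """
    coverage = commitment / (1 - discount)  # on-demand $ the commitment absorbs
    return sum(commitment + max(0.0, u - coverage) for u in hourly_on_demand)

def best_commitment(hourly_on_demand, discount=0.3, step=0.01):
    """Brute-force the commitment (in `step` increments) with lowest total cost."""
    hi = max(hourly_on_demand)
    candidates = [i * step for i in range(int(hi / step) + 2)]
    return min(candidates, key=lambda c: total_cost(hourly_on_demand, c, discount))
```

For a perfectly flat workload of $1.00/hour of on-demand compute, the sketch commits to $0.70/hour at a 30% discount; real workloads fluctuate, which is exactly why the hourly history matters.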
Blog Post 5 min read

12 Must-Read Data Analytics Websites of 2024

When it comes to staying current on big data and analytics, you'll want to bookmark these leading blogs and sites.
Blog Post 10 min read

The Rise of FinOps

Many companies have tried to feed business data, such as business activity, into IT or APM monitoring solutions, only to discover the data is too dynamic for static thresholds. Some companies choose to rely on analyzing BI dashboards to find issues, but that leaves anomaly detection to chance. As companies have tried to solve these challenges, AI is driving a future where business data is monitored autonomously.
Blog Post 5 min read

Good Catch: Cloud Cost Monitoring

Aside from ensuring each service is working properly, one of the most challenging parts of managing a cloud-based infrastructure is cloud cost monitoring. There are countless services to keep track of, including storage, databases, and cloud computing, each with its own complex pricing structure. Cloud cost monitoring is essential for both cloud cost management and optimization. But monitoring cloud spend is quite different from other organizational costs in that it can be difficult to detect anomalies in real time and accurately forecast monthly costs.

Many cloud providers, such as AWS, Google Cloud, and Azure, provide you with a daily cost report, but in most cases this is not enough. For example, if someone is incorrectly querying a database for a few hours, costs can skyrocket, and with a daily report you wouldn't detect the spike until it's too late. While there are cloud cost management tools that allow you to interpret costs, these technologies often fall short because they don't provide the granularity that real-time monitoring requires. Without a real-time alert to detect and resolve the anomaly, the potential to negatively impact the bottom line is significant. As we'll see from the examples below, only an AI-based monitoring solution can effectively monitor cloud costs. In particular, there are three layers to Umbrella's holistic cloud monitoring solution:

Cost monitoring: Instead of just providing generic cloud costs, one of the main advantages of AI-based monitoring is that costs are specific to the service, region, team, and instance type. When anomalies do occur, this level of granularity allows for a much faster time to resolution.

Usage monitoring: The next layer consists of monitoring usage on an hourly basis. This means that if usage spikes, you don't need to wait a full day to resolve the issue and can actively prevent cost increases.

Cost forecasting: Finally, the AI-based solution can take in every cloud-based metric, even in multi-cloud environments, learn its normal behavior on its own, and create cost forecasts that allow for more effective budget planning and resource allocation.

Now that we've discussed the three layers of AI-based cloud cost monitoring, let's review several real-world use cases.

Network Traffic Spikes

In the example below, the service is an AWS EC2 instance being monitored on an hourly basis. The service experienced a 1,000+ percent increase in network traffic, from 292.5M to 5.73B, over the course of three hours. If the company had simply been using a daily cloud cost report, this spike would have been missed and costs would have skyrocketed, as the network traffic would likely have stayed at this heightened level at least until the end of the day. With a real-time alert sent to the appropriate team, paired with root-cause analysis, the anomaly was resolved promptly, ultimately resulting in cost savings for the company.

Spike in Average Daily Bucket Size

The next use case is from an AWS S3 service on an hourly time frame. In this case, the first alert was sent regarding a spike in head requests by bucket. Bucket sizes can go up and down frequently, but if you're looking only at the current bucket size, you often don't know how much you're using relative to normal levels. The key difference in the example below is that, instead of simply looking at absolute values, Umbrella's anomaly detection was looking at the average daily bucket size. The spike in bucket size is not larger than the typical spikes; what is anomalous is the time of day at which it occurred. By looking at the average daily bucket size and monitoring on a shorter time frame, the company received a real-time alert and was able to resolve the issue before it incurred a significant cost.

Spike in Download Rates

A final example of cloud cost monitoring involves the AWS CloudFront service, again monitored on an hourly timescale. In this case, there was an irregular spike in the rate of CloudFront bytes downloaded. As in the other examples, if the company had only been monitoring costs reactively at the end of the day, this could have severely impacted the bottom line. By taking a proactive approach to cloud cost management with the use of AI and machine learning, the anomaly was quickly resolved and the company saved a significant amount of otherwise wasted spend.

Summary

As we've seen from these three examples, managing a cloud-based infrastructure requires a highly granular solution that can monitor 100 percent of the data in real time. If unexpected cloud activity isn't tracked in real time, it opens the door to runaway costs, which in most cases are entirely preventable. In addition, it is critical that cloud teams understand the business context of their cloud performance and utilization. An increase in cloud costs might be a result of business growth, but not always. Understanding whether a cost increase is proportionately tied to revenue growth requires context that can be derived only through AI monitoring and cloud cost management. AI models allow companies to become proactive, rather than reactive, in their cloud financial management by catching and alerting on anomalies as they occur. Each alert is paired with a deep root-cause analysis so that incidents can be remediated as fast as possible. By distilling billions of events into a single scored metric, IT teams are able to focus on what matters, leave alert storms, false positives, and false negatives behind, gain control over their cloud spend, and proactively work toward cloud cost optimization.
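The S3 example above hinges on comparing an hour against its own time-of-day baseline rather than an absolute threshold. A minimal illustration of that idea is sketched below; Umbrella's actual models are proprietary and far more sophisticated, so treat this purely as a toy baseline, with the 3-sigma threshold as an assumed parameter.

```python
# Toy hour-of-day-aware spike detector: an hour is anomalous if it deviates
# from the mean of the SAME hour on previous days by more than `threshold`
# standard deviations. Illustrative only, not a production algorithm.
from statistics import mean, stdev

def detect_spikes(hourly_values, threshold=3.0):
    """Return indices of hours that deviate from their hour-of-day baseline."""
    anomalies = []
    for i, v in enumerate(hourly_values):
        history = hourly_values[i % 24 : i : 24]  # same hour on earlier days
        if len(history) < 3:
            continue  # not enough history to model this hour yet
        mu = mean(history)
        sigma = stdev(history)
        if sigma == 0:
            if v != mu:  # flat history: any change is anomalous
                anomalies.append(i)
        elif abs(v - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies
```

The per-hour baseline is what lets a detector flag a normal-sized spike arriving at an abnormal time of day, which an absolute threshold would miss.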
Documents 1 min read

Transform MSP operations with FinOps

Documents 1 min read

Optimize K8s cloud costs

Webinars 3 min read

Overcoming Challenges to Scaling FinOps

After putting the initial tools and processes in place for a cloud management strategy, many organizations struggle to scale their FinOps practice to fit their growing cloud needs. To ensure that the scalability of cloud computing is actually boosting your company's financial performance, delivering continuous insight and value from cloud investments is critical. Cyberark, an identity security company, uses Umbrella's cloud cost management solution to achieve ongoing value and savings in its FinOps practice. In a recent Umbrella webinar, Cyberark's FinOps expert, Uri Eliyahu, discussed solutions for creating an all-encompassing cloud culture and tips for driving organizational alignment around FinOps.

Uri compares cloud computing to the game of chess: sometimes you don't know what you don't know, but you can plan your moves ahead of time. He shared how Cyberark established cloud operations and a Cloud Center of Excellence, using tools like Umbrella, to increase influence across the organization and reduce what is not known about cloud spend and usage.

Cloud Operation Magic Triangle

Traditionally, organizations relied on on-premises data centers, which required an up-front capital expenditure (CapEx) to purchase hardware and software. With cloud computing, developers or engineers can spin up an instance in one click without oversight or approval from IT or Finance. To reduce this risk, Uri says companies need to provide tools and processes to make sure cloud engineers can do their work across three domains: security, FinOps, and operations. He also says companies should have a vision for building a cloud operations culture from the top down.

Direct and Indirect Costs

It's important to take both direct and indirect costs into consideration when budgeting for FinOps. For example, typical direct costs include services like Amazon EC2, Amazon S3, and Amazon RDS. But according to Uri, direct costs account for only about 55% of total cloud spend. Indirect costs such as AWS KMS, AWS CloudTrail, or data transfer must be considered as well.

Using Umbrella to Scale FinOps

Umbrella is the only FinOps platform built to measure and drive success in FinOps, giving you complete visibility into your KPIs and baselines, recommendations to help you control cloud waste and spend, and reporting to make sure you improve your cloud efficiency. Umbrella is built to offer cloud teams a contextual understanding of cloud costs and the impact of business decisions on cloud spend, helping companies achieve unit economics and understand how specific units and/or customers impact cloud metrics, including cost, utilization, and performance. From a single platform, Umbrella provides complete, end-to-end visibility into your entire cloud infrastructure and related billing costs. By monitoring your cloud metrics together with your revenue and business metrics, Umbrella enables cloud teams to understand the true cost of their SaaS customers and features. Umbrella automatically learns each service's usage pattern and alerts relevant teams to irregular cloud spend and usage anomalies, providing the full context of what is happening for the fastest time to resolution. With continuous monitoring and deep visibility, you gain the power to align FinOps, DevOps, and Finance teams and cut your cloud bill.
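The 55% figure has a practical budgeting consequence: a budget built only on direct service costs undershoots the real bill. A quick sketch of the gross-up arithmetic, with the share treated as an assumed input rather than a fixed constant:

```python
# Sketch: if direct costs (EC2, S3, RDS, ...) are only ~55% of total cloud
# spend, gross them up to estimate the full bill including indirect costs
# (KMS, CloudTrail, data transfer). The 0.55 share is an assumption taken
# from the webinar discussion, not a universal figure.

def estimate_total_spend(direct_costs, direct_share=0.55):
    """Estimate total cloud spend from a dict of direct service costs."""
    return sum(direct_costs.values()) / direct_share
```

For example, $55k of direct costs at a 55% share implies roughly $100k of total spend, with about $45k hiding in indirect services.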
Webinars 3 min read

Multicloud Forecasting and Budgeting for FinOps

The on-demand infrastructure of the cloud has its benefits and challenges. While it allows flexibility and immediate availability, the rapid fluctuations of cloud use make it difficult to forecast and budget. The goal of forecasting is to help businesses anticipate results and create budgets. It is typically based on a combination of historical spending and an evaluation of future infrastructure and application plans. Umbrella's cloud and data science experts recently led a webinar discussing strategies for forecasting future multicloud spend across AWS, Azure, GCP, and Kubernetes.

4 types of FinOps forecasting

Ira Cohen, Umbrella's Chief Data Scientist, and Jeff Haines, Umbrella's Director of Marketing, explained that in order to help control spending, businesses should leverage four types of FinOps forecasting:

Planning (long-term): Foresee the long-term evolution of your cloud costs based on past usage and inputs about what might happen in the next year or two.

Budgeting (mid-term): Analyze budgets that were allocated to different teams or business units every few months or every quarter to ensure they are on track.

Monitoring (short-term): Forecast through the next month, comparing forecast vs. actual vs. budgeted spend, track progress, and take action if over budget.

Insight generation for proactive FinOps: Forecast to generate insights and cost-saving recommendations.

Capabilities required for forecasting models

Granularity: Forecasting for different clouds, services, teams, and products.

Accuracy: Use the forecast at any granularity to get accurate budgets.

Flexibility: The model should be flexible enough to adapt to changes.

Forecasting cloud spend with Umbrella

Cohen and Haines discussed an example of short-term ML-powered forecasting with Umbrella. In this graph, the teal line tracks the previous calendar month's actual spend by day, and the filled blue area shows the actual current-month spend in September. In this example, month-to-date costs were about $3.2 million. The dotted orange line represents Umbrella's AI-generated forecast for the remainder of the month, estimated at a little over $5 million.

You're able to configure budgets for business objects like linked accounts, services, teams, and projects. This overview shows current versus budgeted consumption for each budget, as well as forecasted versus budgeted consumption. You can set budgets monthly, monthly through the quarter, and monthly for the next calendar or fiscal year.

In addition to forecasting capabilities, Umbrella also provides end-to-end visibility into an organization's entire cloud infrastructure and related billing costs. By monitoring cloud metrics together with revenue and business metrics, Umbrella enables cloud teams to understand the true cost of their cloud resources, with benefits such as:

Deep visibility and insights: Report on and allocate 100% of your multicloud costs and deliver relevant reporting for each persona in your FinOps organization.

Easy-to-action savings recommendations: Reduce waste and maximize utilization with 40+ savings recommendations personalized to your business.

Immediate value: You'll know how much you can immediately save from day one and rely on pre-configured, customized reports to begin eliminating waste.

With Umbrella's continuous monitoring and deep visibility, engineers gain the power to eliminate unpredictable spending. Umbrella automatically learns each service's usage pattern and alerts relevant teams to irregular cloud spend and usage anomalies, providing the full context of what is happening for the fastest time to resolution.
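For contrast with the ML-based forecast described above, the simplest possible baseline is a run-rate extrapolation: assume the daily average so far holds for the rest of the month. The sketch below is illustrative only (the `budget_status` helper is a hypothetical name, and Umbrella's actual forecasting is far more sophisticated than a linear run rate).

```python
# Naive run-rate forecast: project month-end spend from month-to-date spend,
# assuming the daily average so far continues. Illustrative baseline only.

def run_rate_forecast(month_to_date_spend, days_elapsed, days_in_month):
    """Project month-end spend assuming the daily average so far holds."""
    daily_avg = month_to_date_spend / days_elapsed
    return daily_avg * days_in_month

def budget_status(forecast, budget):
    """A simple over/under signal, like a forecast-vs-budget overview row."""
    return "over" if forecast > budget else "under"
```

With $3.2M spent 20 days into a 30-day month, the run rate projects $4.8M; a model that also accounts for trend and seasonality, as described above, can land on a higher figure like the $5M+ in the example.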
Webinars 3 min read

Optimize Your Kubernetes Costs and Infrastructure

Optimizing Kubernetes Costs

Gartner predicted that by 2022 more than 75% of global organizations would be running containerized applications in production, a huge jump from the mere 30% in 2019. Kubernetes remains the most popular container orchestrator in the cloud; according to the Cloud Native Computing Foundation (CNCF), 96% of organizations were already using or evaluating Kubernetes in 2022. Kubernetes has crossed the adoption chasm to become a mainstream global technology.

With more organizations adopting Kubernetes, the reality is setting in that there is tremendous potential cost impact due to lack of visibility into the cost of operating Kubernetes in the cloud. According to CNCF, inefficient or nonexistent Kubernetes cost monitoring is causing overspend. Cloud experts at Umbrella and Komodor recently hosted a webinar to discuss the challenges of optimizing cloud costs and how to empower teams to control Kubernetes costs and health.

The rise of FinOps

Historically, engineers and architects did not have to worry much about operational costs. Now, engineers are on the hook for the financial impact of:

Code resource utilization
Node selections
Pod and container configurations

Meanwhile, finance has been dealing with the transition from the CapEx world of on-premises IT to the OpEx-driven cloud, as well as comprehending cloud cost drivers and the complexity of the cloud bill. That's why more organizations have a cross-functional Kubernetes value realization team, often called FinOps or a Cloud Center of Excellence. The goal of this team is to strategically bring engineering and finance together and remove barriers to maximizing the return on your business' investment in Kubernetes.

Visibility into Kubernetes is critical

Getting control of Kubernetes costs depends primarily on gaining better visibility. CNCF combines all aspects of visibility together with monitoring, but when asked what level of Kubernetes cost monitoring they have in place:

Nearly 45% of industry respondents were simply estimating costs
Almost 25% had no cost monitoring in place

With 75% of organizations running Kubernetes workloads in production, now is the time to eliminate cloud cost blind spots by understanding K8s cost drivers.

Kubernetes cost drivers

In order to build better visibility, organizations need to understand the seven primary Kubernetes cost drivers:

Underlying nodes
Pod CPU/memory requests and limits
Persistent volumes
K8s scheduler
Data transfer
Networking
App architecture

In the webinar, the experts outline specific strategies that will empower your team to gain visibility into and optimize each of these cost drivers.

Umbrella for Kubernetes cost optimization

To enable FinOps that covers all of Kubernetes, enterprise organizations are choosing Umbrella for continuous visibility into K8s costs and drivers, so you can understand which elements are contributing to your costs and tie them to your business objectives. With Umbrella, you can visualize your entire Kubernetes and multicloud infrastructure, from macro, infrastructure-wide views all the way down to the specifics of each container. Umbrella empowers finance teams to allocate and track every dollar of spend to business objects and owners, revealing where costs originate. We help you monitor your cloud spend so you can respond to anomalous activity immediately and are never surprised by your cloud bill. Our team of data scientists has delivered AI-powered cost forecasting that helps you accurately predict costs and negotiate enterprise discounts. With Umbrella, you'll realize a culture of FinOps that solves the Kubernetes cost visibility problem.
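Two of the cost drivers listed above, underlying nodes and pod requests, come together in the basic attribution step behind most K8s cost tooling: splitting a node's cost across the pods scheduled on it in proportion to their resource requests. A minimal sketch, where the 50/50 CPU-to-memory weighting and the prices are illustrative assumptions, not any tool's actual model:

```python
# Sketch: allocate a node's hourly cost across its pods in proportion to
# their CPU and memory requests. Weights and prices are illustrative.

def allocate_node_cost(node_cost_per_hour, pods, cpu_weight=0.5):
    """Split node cost across pods by requested CPU and memory share.

    `pods` maps pod name -> (cpu_request_cores, mem_request_gib).
    """
    total_cpu = sum(cpu for cpu, _ in pods.values())
    total_mem = sum(mem for _, mem in pods.values())
    costs = {}
    for name, (cpu, mem) in pods.items():
        # Blend each pod's CPU share and memory share into one cost share.
        share = cpu_weight * (cpu / total_cpu) + (1 - cpu_weight) * (mem / total_mem)
        costs[name] = node_cost_per_hour * share
    return costs
```

Request-based allocation also surfaces waste directly: a pod that requests far more than it uses is charged for the requested capacity, which is exactly the over-provisioning signal engineers are now on the hook for.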