
What is Cloud Financial Management?

Few organizations today operate without at least some of their business in the cloud. According to a study from 451 Research, part of S&P Global Market Intelligence, 96 percent of enterprises reported using or planning to use at least two cloud application (Software-as-a-Service) providers, with 45 percent using cloud applications from five or more providers. In 2024, global spending on public cloud services is expected to reach $679 billion, surpassing $1 trillion by 2027.

Most companies move to the cloud to take advantage of the speed, innovation, and flexibility of cloud computing solutions. Cloud operations can also provide cost savings and improved productivity. However, controlling cloud costs has become increasingly difficult and complex as cloud adoption grows, which is why cloud cost management has become a priority for CIOs seeking to understand the true ROI of cloud operations. When cloud assets are fragmented across multiple teams, vendors, and containerized environments, it is easy to lose sight of the budget. As a result, cloud financial management is a must-have for understanding cloud cost and usage data and making more informed cloud-related decisions.

Plus, it's an opportunity for more savings: according to McKinsey, businesses using CFM can reduce their cloud costs by 20% to 30%. But what exactly is Cloud Financial Management (CFM)? Is it merely about cutting costs? What kinds of tools are best for multiple cloud environments? If you have these and other questions, we have the answers. Let's jump in!

Table of Contents:
- What's Cloud Financial Management?
- Cloud Financial Management Benefits
- Cloud Financial Management Challenges
- Building a Cloud Center of Excellence
- Umbrella for Cloud Financial Management
- Umbrella's 7 Core Features for Cloud Success

What's Cloud Financial Management?

Cloud Financial Management is a system that enables companies to identify, measure, monitor, and optimize finances to maximize the return on their cloud computing investments. CFM also enhances staff productivity, workflow efficiency, and other aspects of cloud management. However, it is important to remember that while cost is a major focus, it's not the only one.

A subset of CFM is FinOps, essentially a combination of Finance and DevOps. The idea behind FinOps is to foster collaboration and communication between engineering and business teams to align cost and budget with their technical, business, and financial goals.

Cloud Financial Management Benefits

Better Track Cloud Spend
Cloud Financial Management helps companies oversee the operations, tasks, and resources that drive usage billing. This insight can be used to identify the projects, apps, or teams driving your cloud costs.

Optimize Cloud Costs
With visibility into cloud resources and spend, your organization can identify and remove unutilized resources, redundant integrations, and wasteful processes.

Financial Accountability
Instead of reacting to unexpected cost spikes, cloud financial management allows businesses to plan and predict budgets by making delivery teams financially accountable. By aligning cloud financial data to business metrics, organizations can establish common goals and outcomes.

Cloud Financial Management Challenges

Budgeting
Migrating from on-premise to the cloud often means transitioning from a CapEx to an OpEx model. On the surface, switching to a predictable OpEx-based strategy seems attractive.
However, the change can create more issues than it solves. Optimizing costs is the biggest driver for moving to OpEx, yet cloud spend is vulnerable to waste and overspend if not carefully managed. Many companies haven't reaped the expected cloud benefits due to poor visibility and control. Some have taken the dramatic step of "repatriating" workloads, while others have adopted a hybrid approach.

Visibility Into Cloud Assets and Usage
Monitoring cloud assets makes or breaks FinOps, yet employees often find it challenging to track asset performance, resource needs, and storage requirements. Tagging offers a simple solution, allowing easy categorization of cloud assets by department, performance, usage, cost, and more. Every organization has numerous departments, each using the cloud for different purposes; unless there is a proper tagging system covering those departments, operations, and costs, it is very difficult to monitor cloud assets (see the tagging-audit sketch at the end of this section).

Calculating Unit Costs
Unit cost calculation becomes tedious given the complexity of cloud infrastructure and the sheer number of assets. In addition, calculating and comparing investment against the revenue being generated becomes difficult when there are so many interdependencies.

Identifying Inefficiencies
Companies that lack full visibility into cloud spend find it difficult to identify where there are inefficiencies, waste, or overuse of resources. The result is that decisions can't be made about the efficient allocation of resources, and companies are in the dark on questions such as whether an increase in spend results from business growth or from sheer inefficiency.

Building a Cloud Center of Excellence

A Cloud Center of Excellence (CCoE), or FinOps practice, is an important next step for companies using ad hoc methods for cloud cost management. A CCoE provides a roadmap for executing the organization's cloud strategy and governs cloud adoption across the enterprise. It is meant to establish repeatable standards and processes for all organizational stakeholders to follow in a cloud-first approach.

The CCoE has three core pillars:
Governance - The team creates policies with cross-functional business units and selects governance tools for financial and risk management.
Brokerage - Members of the CCoE help users select cloud providers and architect the cloud solution.
Community - It's the responsibility of the CCoE to improve cloud knowledge in the organization and establish best practices through a knowledge base.

With those pillars as a foundation, CCoEs are generally responsible for the following activities:
Optimizing cloud costs - Managing and optimizing cloud spend is a key task of the CCoE. The team is also accountable for tying the company's strategic goals to the cost of delivering value in the cloud.
Managing cloud transformation - In the initial phase of transformation, the CCoE should assess cloud readiness and identify cloud providers. During migration, the team should provide guidance and accurate reports on progress.
Enforcing cloud policies - Security and regulatory requirements can change frequently in complex, evolving cloud ecosystems. It's important that CCoE members enforce security standards and provide operational support across the business.
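Tag enforcement is a natural first policy for a CCoE to automate. As a minimal sketch of the tagging audit mentioned above (the required tag keys, region, and use of boto3 against EC2 are illustrative assumptions, not a prescribed implementation):

```python
import boto3

# Hypothetical tagging standard: every instance must carry these cost-allocation tags.
REQUIRED_TAGS = {"department", "project", "environment"}

ec2 = boto3.client("ec2", region_name="us-east-1")

untagged = []
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"].lower() for t in instance.get("Tags", [])}
            missing = REQUIRED_TAGS - tags
            if missing:
                untagged.append((instance["InstanceId"], sorted(missing)))

# Feed this report into the governance workflow (tickets, chat alerts, etc.).
for instance_id, missing in untagged:
    print(f"{instance_id} is missing cost-allocation tags: {', '.join(missing)}")
```

A real rollout would extend the same audit to other taggable services and make the tag set a blocking policy rather than a report.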
Umbrella for Cloud Financial Management

Umbrella's Cloud Cost Management solution helps organizations get a handle on their true cloud costs by focusing on FinOps to drive better revenue and profitability. From a single platform, Umbrella provides complete, end-to-end visibility into your entire cloud infrastructure and the related billing costs. By tracking cloud metrics alongside revenue and business metrics, Umbrella helps cloud teams grasp the actual cost of their resources.

Umbrella's 7 Core Features for Cloud Success

Forecasting and Budgeting with 98.5% Accuracy
Use historical data to predict cloud spend and usage based on selected metrics and changing conditions, and make the adjustments needed to avoid going into the red.

Cost Visibility
Manage multi-cloud expenses on AWS, Azure, Google Cloud, and Kubernetes with customizable dashboards, multi-cloud cost tagging, and anomaly detection.

Real-Time Cost Monitoring
Monitoring cloud spend differs from monitoring other organizational costs in that it can be difficult to detect anomalies in real time. Cloud activity that isn't tracked in real time opens the door to potentially preventable runaway costs. Umbrella enables companies to detect cost incidents in real time and get engineers to take immediate action.

Saving Recommendations
Get 80+ actionable savings recommendations across all major cloud providers and enjoy a 40% reduction in annual cloud spending.

Real-Time Alerts & Detection
Eliminate uncertainty surrounding anomalies through precise, targeted notifications powered by machine learning (ML) models. Stay on top of cloud activity with analysis that accurately differentiates normal fluctuations from actual risks, minimizing false positives.

360° View of the Multicloud
Never waste time searching for a spending transaction again. Simplify cost management with an all-in-one platform offering billing flexibility and cost allocation for enterprise and MSP models.

AI Tool for Cloud Spending
With a simple search, cloud cost management can be automated with CostGPT. Get instant answers to common cost challenges, including complex pricing models, hidden costs, and inadequate monitoring and reporting.

Automatic Savings Trackers
Track the effects of applied recommendations using automated savings reports and a savings tracker.

CFM just got a lot easier with Umbrella. Try it out and see the difference.

AWS EC2 Cost Optimization Best Practices

Amazon EC2 Explained

Amazon Elastic Compute Cloud (EC2) is one of the core services of AWS, designed to help users reduce the cost of acquiring and reserving hardware. EC2 represents the compute infrastructure of Amazon's cloud service offerings, providing organizations a customizable selection of processors, storage, networking, operating systems, and purchasing models. It is known for helping organizations simplify and speed up their deployments at lower cost, and for enabling them to quickly increase or decrease capacity as requirements change.

However, the costs associated with EC2 instances and features can soon get out of control if not properly managed and optimized. The first cost consideration is usually selecting an instance type.

EC2 Instance Types

Even for experienced cloud engineers and FinOps practitioners, EC2 pricing is extraordinarily complex. Many options impact cost, with instances optimized for workload categories like compute, memory, accelerated computing, and storage. The default purchasing option is On-Demand instances, which bill by the second or hour of usage but require no long-term commitment.

EC2 instances are grouped into families. Each EC2 family is designed to meet a target application profile in one of these buckets:

General Purpose Instances
General-purpose instances provide a balance of computing power, memory, and networking resources and can be used for everyday workloads like web servers and code repositories.

Compute Optimized
Compute-optimized instances are best suited for applications that benefit from high-performance processors.

Memory Optimized
Memory-optimized instances deliver faster performance for workloads that process large data sets in memory.

Accelerated Computing
Accelerated computing instances leverage hardware acceleration and co-processors to perform complex calculations and graphics processing tasks.

Storage Optimized
Storage-optimized instances are designed for workloads requiring high-performance, sequential read and write access to large-scale datasets.

For cost purposes, the price of each instance type above can also vary by region and operating system selection.

The Hidden Cost of EC2

While AWS documents the cost of each instance type by region in its EC2 pricing pages, getting to the actual price of using these services requires much more consideration. The first thing to consider is the status of the EC2 instance. In a running state, customers pay for computing time, disk space, and data traffic. In a stopped state, customers may still incur charges for unattached IPs and any active (not deleted) storage. Unfortunately, many users mistakenly believe that stopping their servers stops further costs from accruing, and this is not the case.

Another potential hidden cost of using EC2 is data traffic. AWS calculates data traffic costs by tier, based on pre-defined volumes: traffic below a volume threshold incurs less cost, and anything above it costs more. Because AWS charges for data traffic at the account level, many manual monitoring processes fall short in projecting actual costs. Considering how many AWS services comprise the AWS account of a large-scale program or company, it's easy to imagine how difficult it is to monitor and control cloud spending in AWS.
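To make these hidden charges concrete, here is a rough boto3 sketch, under the assumption of a single region and default credentials, that surfaces two of the usual suspects: unattached Elastic IPs and stopped instances whose EBS volumes keep billing:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Elastic IPs not associated with any instance still accrue hourly charges.
addresses = ec2.describe_addresses()["Addresses"]
unattached = [a["PublicIp"] for a in addresses if "AssociationId" not in a]

# Stopped instances no longer bill for compute, but their EBS volumes do.
stopped = []
for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["stopped"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            volumes = [m["Ebs"]["VolumeId"] for m in instance.get("BlockDeviceMappings", [])]
            stopped.append((instance["InstanceId"], volumes))

print("Unattached Elastic IPs:", unattached)
print("Stopped instances with billable EBS volumes:", stopped)
```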
How to Reduce AWS EC2 Spending

Here are some of the best practices to reduce EC2 spending in AWS:

EC2 Right-Sizing
Many developers fail to consider right-sizing when spinning up AWS resources, but it's a critical component of optimizing AWS costs. AWS also defaults to many flexible but pricey options like On-Demand instances. Choosing a suitable instance type and service tier can significantly reduce cost without impacting performance (see the utilization sketch at the end of this post).

EC2 Generation Upgrade
AWS offers different instances tuned for various workloads, as discussed above. When selecting an instance type, look for the latest-generation options, because they often provide the best performance and pricing.

Unnecessary Data Transfers
AWS charges for inter-Availability Zone data transfer between EC2 instances even if they are located in the same region. Whenever possible, co-locate instances within a single Availability Zone to avoid unnecessary data transfer charges.

Stopped Instances
Stopping EC2 instances does not eliminate the potential for charges. Resources attached to stopped instances, like EBS volumes, S3 storage, and public IPs, continue to accrue costs. Consider terminating the attached resources, or the instance itself, if it is no longer in use.

Optimize EC2 Cost with Umbrella

Umbrella's Cloud Cost Management solution makes optimization easy. It connects to AWS, Azure, and GCP to monitor and manage your spending. Even with multi-cloud environments, Umbrella seamlessly combines all cloud spending into a single platform, allowing for a holistic approach to optimization measures.

What makes Umbrella for Cloud unique is how it learns each service's usage pattern, considering essential factors like seasonality to establish a baseline of expected behavior. That allows it to identify irregular cloud spend and usage anomalies in real time, providing contextualized alerts to the relevant teams so they can resolve issues immediately. Proprietary ML-based algorithms offer deep root-cause analysis and clear guidance on the steps for remediation. Customers are already using Umbrella to align the efforts of FinOps, DevOps, and finance teams to optimize cloud spending.

Accurate forecasting is one of the central pillars of FinOps and cloud cost optimization. Umbrella leverages AI-powered forecasting with deep learning to automatically optimize cloud cost forecasts and enable businesses to react to changing conditions before they impact cost. Rather than manually watching cloud resources and billing, your analysis teams can view cloud metrics with business context in the same place as revenue and business metrics. That allows FinOps practitioners to continually optimize cloud investments to drive strategic business initiatives.
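As promised in the right-sizing tip above, a starting-point sketch might flag underutilized instances from CloudWatch data. The instance ID, 14-day window, and 10% CPU threshold below are illustrative assumptions, not a universal rule:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def avg_cpu(instance_id: str, days: int = 14) -> float:
    """Average CPU utilization (percent) over the trailing window."""
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(days=days),
        EndTime=end,
        Period=3600,  # hourly datapoints
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

# Instances averaging under ~10% CPU for two weeks are rightsizing candidates.
if avg_cpu("i-0123456789abcdef0") < 10.0:  # hypothetical instance ID
    print("Candidate for a smaller instance type or a newer generation.")
```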

Amazon S3 Cost Optimization Best Practices

Amazon S3 Explained

Amazon Simple Storage Service (S3) is an essential cornerstone of AWS and among its most popular service offerings. S3 allows tenants to store, secure, and retrieve data from S3 buckets on demand. It is widely used for its high availability, scalability, and performance, and it supports six storage classes and several use cases, including website hosting, backups, application data storage, and data lake storage.

There are two primary components of Amazon S3: buckets and objects. Users create and configure S3 buckets according to their needs, and the buckets store the objects they upload to the cloud.

The Six Storage Classes of Amazon S3 and Their Price Differentiation

While S3 prides itself on simplicity of use, choosing the correct storage class isn't always easy, and the choice can have a tremendous impact on costs. The free tier limits storage to 5GB in the standard class, and it's only available to new customers. Above the free tier, AWS has six S3 storage classes: Standard, Intelligent-Tiering, Infrequent Access, One Zone-Infrequent Access, Glacier, and Glacier Deep Archive. Each offers different features, access availability, and performance. Here is an overview of each class:

Standard
S3 Standard storage is best suited for frequently accessed data. It's elastic in that you only pay for what you use, and customers typically use it for data-intensive content they want to access at all times, from anywhere.

Infrequent Access Storage
S3 Infrequent Access storage is best suited for use cases where data access is ad hoc or infrequent but the data must be available quickly when needed. An example could be backup and recovery images for a web or application server. The cost model for Infrequent Access is cheaper than Standard storage but charges more each time you access the data.

One Zone-Infrequent Access
"Regular" Infrequent Access storage ensures the highest availability by distributing data across at least three Availability Zones within a region. For use cases where data access is infrequent and lower availability is acceptable, but quick retrieval times are still needed, One Zone-Infrequent Access is the best option. S3 stores the data in a single Availability Zone, and the cost is 20% less than Infrequent Access storage.

Intelligent-Tiering
Amazon offers a premium S3 service called Intelligent-Tiering. It analyzes usage patterns and automatically transfers data between the Standard and Infrequent Access tiers based on access requirements. The selling point of this tier is that it saves operators the labor of monitoring and transferring the data themselves. That said, it comes with a charge of $0.0025 per 1,000 objects monitored.

Glacier
Most customers use S3 Glacier for record retention and compliance purposes. Retrieval requests take hours to complete, making Glacier unsuitable for any use case requiring fast access. That said, the lower cost makes it ideal when access speed isn't a concern.

Glacier Deep Archive
S3 Glacier Deep Archive offers additional cost savings but carries further data access limitations. Deep Archive is best suited for data that customers only need to access once or twice per year and when they can tolerate retrieval times upwards of 12 hours.
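Storage-class decisions don't have to be manual. As a hedged sketch (the bucket name, prefix, and day thresholds are hypothetical), a boto3 lifecycle configuration can tier aging objects down automatically and eventually expire them:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects to cheaper classes as they age, then expire them.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-logs-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```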
How to Reduce AWS S3 Spending

AWS S3 owes its popularity to its simplicity and versatility. It helps companies and customers across the globe store personal files, host websites and blogs, and power data lakes for analytics. The only downside is the price tag, which can become hefty in a hurry depending on how much data is stored and how frequently it's accessed. Here are some helpful tips for reducing AWS S3 spend:

Use compression
Much of S3's cost is based on the amount of data stored, so compressing data before uploading it to S3 can reap significant savings. When users need to access a file, they can download it compressed and decompress it on their local machines.

Continuously monitor S3 objects and access patterns to catch anomalies and right-size storage class selections
Each storage class features different costs, strengths, and weaknesses. Active monitoring to ensure S3 buckets and objects are right-sized into the correct storage class can drastically reduce costs. Remember that you can leverage multiple tiers within the same bucket, so make sure every file has the right tier selected.

Remove or downgrade unused or seldom-used S3 buckets
One common mistake in managing S3 storage is deleting the contents of an S3 bucket and leaving it empty and unused. It's best to remove these buckets entirely to reduce costs and eliminate unnecessary system vulnerabilities.

Use a dedicated cloud cost optimization service rather than relying only on cloud provider tools
The most important recommendation we can make for keeping cloud costs under control is to use a dedicated, third-party cost optimization tool instead of relying strictly on the cloud provider. The native cost management tools cloud providers offer do not go far enough in helping customers understand and optimize their cloud cost decisions.

Two more quick wins: disable versioning if it's not required, and leverage endpoint technologies to reduce data transfer costs.

Cloud Cost Management with Umbrella

Organizations seeking to understand and control their cloud costs need a dedicated tool. Umbrella's Cloud Cost solutions easily connect to cloud providers like AWS to monitor and manage cloud spending in real time and alert teams to critical cost-saving recommendations. Here are some of the key features:

Umbrella makes lifecycle recommendations in real time, based on actual usage patterns and data needs. Rather than teams manually monitoring S3 buckets and trying to figure out if and when to switch tiers, Umbrella provides a detailed, staged plan for each object that accounts for patterns of seasonality.

Versioning can significantly impact S3 costs because each new version is another file to maintain. Umbrella continuously monitors object versions and provides tailored, actionable recommendations on which versions to keep.

Many customers don't realize how uploading files into S3 can significantly impact costs. In particular, large uploads that get interrupted reserve space until completed, resulting in higher charges. Umbrella provides comprehensive recommendations on how to upload files and which files to delete in which bucket.
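That last point about interrupted uploads also has a direct, do-it-yourself mitigation: an S3 lifecycle rule can abort stale multipart uploads so their reserved parts stop billing. A minimal sketch, with a hypothetical bucket name and a seven-day cutoff chosen for illustration:

```python
import boto3

s3 = boto3.client("s3")

# Reclaim space held by multipart uploads that never completed.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-uploads-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-stale-multipart-uploads",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix = whole bucket
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
```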

EC2 Reserved Instance: Everything You Need to Know

What is a Reserved Instance?

An Amazon EC2 Reserved Instance (RI) is one of the most powerful cost savings tools available on AWS. It's officially described as a billing discount applied to the use of an On-Demand instance in your account. To truly understand what an RI is, we need to take a step back and look at the different payment options for AWS:

On-Demand - Pay as needed, with no commitments. Today you can use 1,000 servers and tomorrow only 10. You are charged for what you actually use.

Spot - Amazon sells its spare capacity: leftover server space in its data centers that it has not been able to sell as On-Demand. The server is the same server provided with the On-Demand option. The significant difference is that Amazon can request the server back on 2 minutes' notice (which can cause an interruption to your services). On the other side, the price can reach a discount of up to 90%. In most cases, the chance of Amazon asking for the servers back is very low (around 5%).

Reserved Instances - Simply put, you commit to Amazon that you are going to use a particular server for a set period of time, and in return for the commitment, Amazon gives you a discount that can reach as high as 75%. One of the most confusing things about RIs (as opposed to On-Demand and Spot) is that you don't buy a specific server; your On-Demand servers still get the RI discounted rate.

What is being committed?

Let's look at the parameters that affect the size of the RI discount:

The period:
- 1 year
- 3 years

The payment option:
- Full up-front
- Partial up-front
- No up-front (charged on the 1st of each month)

Offering class:
- Standard
- Convertible

Of course, the longer the commitment and the larger the up-front payment, the more significant the discount Amazon offers.

In addition, when you purchase an RI, you are also committing to the following parameters:
- Platform (operating system)
- Instance type
- Region

The RI is purchased for a specific region, and at no point can the region be modified. To be clear, when we commit to Amazon on a particular server, we also have to commit to the operating system, the region and, in some cases, the instance size. Usually, after a few months the RI has paid for itself relative to its On-Demand price, and after the break-even point, every minute of running is effectively "free" relative to On-Demand.

Related content: Read our guides to AWS Pricing and Load Balancer.

Standard or Convertible offering

With RIs, you can choose either the Standard or the Convertible offering class. This decision is based on how much flexibility you need. You can decide how long you are willing to commit to using the RI, and you can choose both your form of payment and whether you prefer to pay in advance. Obviously, the more committed you can be to Amazon (a longer period, prepayment, fewer change options, etc.), the greater the discount you will get.

We still need to clarify the differences between Standard and Convertible. With the Standard offering class, you commit to specific servers, while Convertible is a financial commitment: you commit to spend X money during the time period and get more flexibility in the type of server. Here is a summary of the key differences between Standard and Convertible, based on AWS's published comparison:
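- Discount: Standard RIs offer the larger discount; Convertible RIs trade some of the discount for flexibility.
- Modifications: both classes allow certain changes, such as Availability Zone, instance size (on Linux), and networking type.
- Exchanges: only Convertible RIs can be exchanged for RIs of a different instance family, operating system, or tenancy.
- Resale: only Standard RIs can be sold on the AWS Reserved Instance Marketplace.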
Now that we have a better understanding of what an RI is, we need to understand how much you should commit to Amazon and what kind of commitment meets your needs. As we know, we cannot predict the future, but we can draw educated conclusions about the future from our past activity.

It is also important to note that when you commit to an RI, you must run the particular server 744 hours a month (assuming 31 days) to use it fully. The discount applies only per hour, so if you were to run 744 servers in one hour, only one server would get the discount. In addition, it can be difficult to understand how Amazon calculates the charge. For example, if at some point there are 6 servers running together, Amazon can decide to give each server 10 minutes at the RI rate and 50 minutes at the standard On-Demand rate. The decision of which server gets the discounted rate is Amazon's alone. If a particular account has multiple linked accounts, and the linked account that bought the RI did not utilize it at a given time, the RI discount can be applied to another linked account under the same payer account.

RI normalization factor

Recently, Amazon introduced a special deal for RIs running on the Linux operating system: you do not have to commit to the size of the server, only to the server type. So if I bought an m5.large RI but actually ran an m5.xlarge, 50% of the server's cost would be discounted. The reverse is also true: if I bought an m5.xlarge RI but in practice ran m5.large servers, they still get the discount (the one RI can cover two m5.large servers). Amazon has created a table that normalizes server sizes, allowing you to commit to a number of server-type units rather than a size.

To intelligently analyze which RI is best for you, you need to take all the resources used, convert their sizes to the normalization factor, and check how many servers were used each hour, keeping in mind that the discount applies one hour of usage at a time. You also need to deduct RIs you have already purchased to avoid unnecessary additional purchases. Additionally, some servers may not run in succession, requiring you to unify usage across different resources, and certain servers may run for hours without completing a full month. A worked example of the normalization arithmetic appears below.

Despite this complexity and the need to analyze all of these factors, the high discount obtained through RIs can still result in a significant reduction in costs. Umbrella's algorithm takes all the above factors and data into account, converts the normalization factor wherever possible, tracks 30 days of history, and uses its expertise to provide the optimal mix for each customer. Undoubtedly, the RI is one of the most significant tools for reducing your cloud costs. By building the proper mix of services, combined with an understanding of the level of commitment you can safely make, you can reduce your cloud costs by tens of percent.
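As flagged above, here is a worked example of the normalization arithmetic. It is a toy sketch using AWS's published size factors (small = 1, medium = 2, large = 4, xlarge = 8, and so on), not Umbrella's algorithm:

```python
# AWS normalization factors by instance size (per AWS documentation).
FACTORS = {"small": 1, "medium": 2, "large": 4, "xlarge": 8, "2xlarge": 16}

def normalized_units(instance_type: str, count: int) -> float:
    """Convert an instance type and count to size-normalized units."""
    size = instance_type.split(".")[1]  # e.g. "m5.xlarge" -> "xlarge"
    return FACTORS[size] * count

# An m5.large RI provides 4 normalized units per hour.
ri_units = normalized_units("m5.large", 1)

# Actual usage in a given hour: one m5.xlarge (8 units).
usage_units = normalized_units("m5.xlarge", 1)

# The RI covers half the larger server's cost, matching the example above.
coverage = min(ri_units / usage_units, 1.0)
print(f"RI covers {coverage:.0%} of this hour's usage")  # -> 50%
```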
Optimizing AWS EC2 with Umbrella

Umbrella's Cloud Cost Management solution makes optimizing EC2 compute services easy. Even with multi-cloud environments, Umbrella seamlessly combines all cloud spending into a single platform, allowing for a holistic approach to optimization measures.

Umbrella offers built-in, easy-to-action cost-saving recommendations specifically for EC2, including:

Amazon EC2 rightsizing recommendations
- EC2 rightsizing
- EC2 operating system optimization
- EC2 generation upgrade

Amazon EC2 purchasing recommendations
- EC2 Savings Plans
- EC2 Reserved Instances

Amazon EC2 management recommendations
- EC2 instance unnecessary data transfer
- EC2 instance idle
- EC2 instance stopped
- EC2 IP unattached

Umbrella helps FinOps teams prioritize recommendations by quantifying their projected performance and savings impact. Umbrella learns each service's usage pattern, considering essential factors like seasonality to establish a baseline of expected behavior. That allows it to identify irregular cloud spend and usage anomalies in real time, providing contextualized alerts to the relevant teams so they can resolve issues immediately. Proprietary ML-based algorithms offer deep root-cause analysis and clear guidance on steps for remediation.

AWS Savings Plan: All You Need to Know

Organizations using the Amazon Web Services (AWS) cloud have traditionally leveraged Reserved Instances (RIs) to realize cost savings by committing to the use of a specific instance type and operating system within an AWS region. Nearly two years ago, AWS rolled out a new program called Savings Plans, which gives companies a new way to reduce costs by making an advance commitment for a one-year or three-year fixed term. The first impression was that saving money on AWS would become significantly simpler and easier, thanks to the lowering of the customer's required commitment. The reality is the complete opposite: with Amazon's Savings Plans, it is significantly harder to manage your spending and lower your AWS costs, especially if you rely only on Amazon's tools.

1. What are Savings Plans?

To understand why the new Savings Plans significantly complicate cloud cost management, it is necessary to briefly review the two Savings Plan options.

EC2 Instance Savings Plan
The EC2 Instance Savings Plan is essentially a Standard Reserved Instance without the requirement to commit to an operating system up front. Since changing an operating system is not routine, this has very little added value.

Compute Savings Plan
With this product, Amazon has clearly introduced something new. The customer no longer has to commit to the type of compute he is going to use: not the type of machine, its size, or even the region where the machine will run. These are all significant advantages. In addition, Amazon no longer requires a commitment to the service that will use the compute. It does not have to be EC2, which means that when you purchase a Compute Savings Plan, compute usage in EMR, ECS or EKS clusters, or Fargate also counts toward the commitment and receives the discount. With a Convertible RI, getting the discount on a server type other than the one originally purchased required an RI exchange operation. With the new Compute Savings Plan, no change is necessary; the discount is automatically applied across the different types of servers.

The bottom line is that you commit to an hourly cost of computing time; you choose whether the commitment is for one or three years and how you want to pay, i.e., full prepayment, partial prepayment, or no prepayment. At this stage, it sounds like Compute Savings Plans would simplify and lower your costs, as the commitment is more flexible. However, as we stated above, the reality is much more complex.

2. Are Amazon's Savings Plan Recommendations Right for Me?

Let's start with the most trivial yet critical question: how do I know the optimal computing-time commitment for me? Amazon offers recommendations of what your computing-time costs should be and what it feels you should commit to buying. It's interesting that Amazon offers these recommendations considering it doesn't share the underlying usage data with its users. So what is the recommendation based on? Amazon is asking its users to commit to spending hundreds of thousands of dollars a month without any real data or usage information to help them make an educated investment decision. Usually, when people commit to future usage, they do so based on past usage data. The one thing Amazon does allow you to do is choose the time period its recommendation will be based on. For example, based on usage over the last 30 days of a sample account, Amazon recommended a spend of $0.39 per computing hour.
The IT manager can simply accept Amazon's recommendation, but with no ability to check the data, the resulting purchase could cost the company a significant amount of additional and unnecessary money. In the example above, there was significant usage over the last 30 days; however, a couple of weeks prior there may have been a significant change, such as a reduction in server volume and/or an RI acquisition, and in that case the recommendation should have been notably lower. This is even truer if Savings Plans had already been purchased and were already earning a discount.

3. How do I know which Savings Plan is best for my company?

This is the vacuum where Umbrella for Cloud Cost provides a lot of value. Using Umbrella, you can see your average hourly cost per day for the last 30 days. Since the Savings Plan estimate should not include compute hours already receiving an RI discount, Umbrella displays only the cost of on-demand compute. It is also critical for a user who has already purchased and is utilizing Savings Plans to know how this impacts costs before making any additional commitments. Umbrella shows the actual cost of each individual computing hour over the last 30 days, enabling educated decisions on what can be significant multi-year financial commitments. Umbrella utilizes its unique algorithm and analyzes all your data to deliver customized recommendations on the optimal computing-time cost you should actually commit to.

It is important to note that when purchasing a Compute Savings Plan, it is not possible to know at the time of purchase exactly what your discount will be. Unlike with RIs, the actual amount of the discount can only be estimated. This uncertainty is due to an additional complexity in Compute Savings Plans: each type of server receives a different discount, so in practice the discount you receive depends on the type of server you actually run and whether Amazon's algorithm chooses to apply the Savings Plan discount to that server type.
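To illustrate the kind of analysis described here, below is a toy sketch of estimating a conservative hourly commitment from past usage. The random series stands in for real billing data, and the 10th-percentile rule is an illustrative assumption, not Umbrella's algorithm:

```python
import random
import statistics

random.seed(7)
# Hypothetical stand-in for 30 days of hourly on-demand compute spend (USD/hour),
# already net of hours covered by Reserved Instances.
hourly_cost = [random.uniform(0.30, 0.55) for _ in range(720)]

mean_rate = statistics.mean(hourly_cost)

# Committing near a low percentile keeps the plan fully utilized in ~90% of hours;
# spend above the commitment simply bills at on-demand rates.
floor_rate = sorted(hourly_cost)[int(0.10 * len(hourly_cost))]

print(f"30-day average: ${mean_rate:.2f}/hr; conservative commitment: ${floor_rate:.2f}/hr")
```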

Manage Cloud Costs Like the Pros: 6 Tools and 7 Best Practices

Continuous monitoring, deep visibility, business context, and forecasting are essential capabilities for eliminating unpredictable cloud spend

12 Must-Read Data Analytics Websites of 2024

When it comes to staying current on big data and analytics, you'll want to bookmark these leading blogs and sites.

The Rise of FinOps

Many companies have tried to feed business data, such as business activity, into IT or APM monitoring solutions, only to discover the data is too dynamic for static thresholds. Some companies choose to rely on analyzing BI dashboards to find issues, but that leaves anomaly detection to chance. As companies have tried to solve these challenges, AI is driving a future where business data is monitored autonomously.

Good Catch: Cloud Cost Monitoring

Aside from ensuring each service is working properly, one of the most challenging parts of managing a cloud-based infrastructure is cloud cost monitoring. There are countless services to keep track of—including storage, databases, and cloud computing—each with its own complex pricing structure. Cloud cost monitoring is essential for both cloud cost management and optimization. But monitoring cloud spend is quite different from monitoring other organizational costs in that it can be difficult to detect anomalies in real time and accurately forecast monthly costs.

Many cloud providers, such as AWS, Google Cloud, and Azure, provide a daily cost report, but in most cases this is not enough. For example, if someone is incorrectly querying a database for a few hours, costs can skyrocket—and with a daily report, you wouldn't detect the spike until it's too late. While there are cloud cost management tools that allow you to interpret costs, these technologies often fall short because they don't provide the granularity required for real-time monitoring. Similarly, without a real-time alert to detect and resolve the anomaly, the potential to negatively impact the bottom line is significant.

As we'll see from the examples below, only an AI-based monitoring solution can effectively monitor cloud costs. In particular, there are three layers to Umbrella's holistic cloud monitoring solution:

Cost monitoring: Instead of just providing generic cloud costs, one of the main advantages of AI-based monitoring is that costs are specific to the service, region, team, and instance type. When anomalies do occur, this level of granularity allows for a much faster time to resolution.

Usage monitoring: The next layer consists of monitoring usage on an hourly basis. This means that if usage spikes, you don't need to wait a full day to resolve the issue and can actively prevent cost increases.

Cost forecasting: Finally, the AI-based solution can take in every single cloud-based metric—even in multi-cloud environments—learn its normal behavior on its own, and create cost forecasts that allow for more effective budget planning and resource allocation.

Now that we've discussed the three layers of AI-based cloud cost monitoring, let's review several real-world use cases.

Network Traffic Spikes

In the first example, the service is an AWS EC2 instance being monitored on an hourly basis. The service experienced a 1,000+ percent increase in network traffic, from 292.5M to 5.73B over the course of three hours. If the company had simply been using a daily cloud cost report, this spike would have been missed, and costs would have skyrocketed, as the network traffic would likely have stayed at this heightened level at least until the end of the day. With a real-time alert sent to the appropriate team, paired with root-cause analysis, the anomaly was resolved promptly, ultimately resulting in cost savings for the company.

Spike in Average Daily Bucket Size

The next use case is from an AWS S3 service on an hourly time frame. In this case, the first alert was sent regarding a spike in HEAD requests by bucket. As you may know, bucket sizes can go up and down frequently, but if you're looking only at the current bucket size, you often don't know how much you're using relative to normal levels.
The key difference in this example is that, instead of simply looking at absolute values, Umbrella's anomaly detection was looking at the average daily bucket size. The spike in bucket size was no larger than typical spikes; what was anomalous was the time of day at which it occurred. By looking at the average daily bucket size and monitoring on a shorter time frame, the company received a real-time alert and was able to resolve the issue before it incurred a significant cost.

Spike in Download Rates

A final example of cloud cost monitoring involves the AWS CloudFront service, again monitored on an hourly timescale. In this case, there was an irregular spike in the rate of CloudFront bytes downloaded. As in the other examples, if the company had only been monitoring costs reactively at the end of the day, this could have severely impacted the bottom line. By taking a proactive approach to cloud cost management with the use of AI and machine learning, the anomaly was quickly resolved, and the company was able to save a significant amount of otherwise wasted cost.

Summary

As we've seen from these three examples, managing a cloud-based infrastructure requires a highly granular solution that can monitor 100 percent of the data in real time. If unexpected cloud activity isn't tracked in real time, it opens the door to runaway costs, which in most cases are entirely preventable. In addition, it is critical that cloud teams understand the business context of their cloud performance and utilization. An increase in cloud costs might be a result of business growth - but not always. Understanding whether a cost increase is proportionately tied to revenue growth requires context that can be derived only through AI monitoring and cloud cost management.

AI models allow companies to become proactive - rather than reactive - in their cloud financial management by catching and alerting on anomalies as they occur. Each alert is paired with a deep root-cause analysis so that incidents can be remediated as fast as possible. By distilling billions of events into a single scored metric, IT teams are able to focus on what matters, leave alert storms, false positives, and false negatives behind, gain control over their cloud spend, and proactively work toward cloud cost optimization.
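To make the idea concrete, here is a toy sketch of time-of-day-aware anomaly detection on an hourly cost metric, in the spirit of the bucket-size example above. The synthetic data and the z-score threshold are illustrative assumptions; Umbrella's production models are proprietary and far more sophisticated:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for 30 days of an hourly cost metric with a daily cycle.
rng = np.random.default_rng(0)
idx = pd.date_range("2024-01-01", periods=24 * 30, freq="h")
costs = pd.Series(5 + np.sin(idx.hour / 24 * 2 * np.pi) + rng.normal(0, 0.3, len(idx)), index=idx)
costs.iloc[-3] += 6  # inject an anomalous spike

# Baseline per hour-of-day: compare each point to the history of the same hour,
# so a spike at an unusual time stands out even if its absolute size doesn't.
grouped = costs.groupby(costs.index.hour)
z_scores = (costs - grouped.transform("mean")) / grouped.transform("std")

alerts = costs[z_scores > 4]
print(alerts)  # the injected spike is the only point flagged
```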