You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. Before you use this article, you should see Azure Monitor cost and usage to understand the different ways that Azure Monitor charges and how to view your monthly bill.
Note
This article describes Cost optimization for Azure Monitor as part of the Azure Well-Architected Framework. This is a set of guiding tenets that can be used to improve the quality of a workload. The framework consists of five pillars of architectural excellence:
- Reliability
- Security
- Cost Optimization
- Operational Excellence
- Performance Efficiency
Design considerations
Azure Monitor includes the following design considerations related to cost:
- Log Analytics workspace architecture
You can start using Azure Monitor with a single Log Analytics workspace by using default options. As your monitoring environment grows, you'll need to make decisions about whether to have multiple services share a single workspace or create multiple workspaces. There can be cost implications with your workspace design, most notably when you combine different services such as operational data from Azure Monitor and security data from Microsoft Sentinel. This may include trade-offs between functionality and cost depending on your particular priorities.
See Design a Log Analytics workspace architecture for a list of criteria to consider when designing a workspace architecture.
Checklist
Log Analytics workspace configuration
- Configure pricing tier or dedicated cluster to optimize your cost depending on your usage.
- Configure tables used for debugging, troubleshooting, and auditing as Basic Logs.
- Configure data retention and archiving.
Data collection
- Use diagnostic settings and transformations to collect only critical resource log data from Azure resources.
- Configure VM agents to collect only critical events.
- Use transformations to filter resource logs.
- Ensure that VMs aren't sending data to multiple workspaces.
Monitor usage
- Send alert when data collection is high.
- Analyze your collected data at regular intervals to determine if there are opportunities to further reduce your cost.
- Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget.
Configuration recommendations
Log Analytics workspace configuration
You may be able to significantly reduce your costs by optimizing the configuration of your Log Analytics workspaces. You can commit to a minimum amount of data collection in exchange for a reduced rate, and optimize your costs for the functionality and retention of data in particular tables.
Recommendation | Description |
---|---|
Configure pricing tier or dedicated cluster for your Log Analytics workspaces. | By default, Log Analytics workspaces use pay-as-you-go pricing with no minimum data volume. If you collect enough data, you can significantly decrease your cost by using a commitment tier or dedicated cluster, which allows you to commit to a daily minimum of data collected in exchange for a lower rate. See Azure Monitor Logs cost calculations and options for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See Usage and estimated costs to view estimated costs for your usage at different pricing tiers. |
Configure tables used for debugging, troubleshooting, and auditing as Basic Logs. | Tables in a Log Analytics workspace configured for Basic Logs have a lower ingestion cost in exchange for limited features and a charge for log queries. If you query these tables infrequently, this query cost can be more than offset by the reduced ingestion cost. See Configure Basic Logs in Azure Monitor (Preview) for more information about Basic Logs and Query Basic Logs in Azure Monitor (preview) for details on query limitations. |
Configure data retention and archiving. | There is a charge for retaining data in a Log Analytics workspace beyond the default of 30 days (90 days in Sentinel if enabled on the workspace). If you need to retain data for compliance reasons or for occasional investigation or analysis of historical data, configure Archived Logs, which allows you to retain data for up to seven years at a reduced cost. See Configure data retention and archive policies in Azure Monitor Logs for details on how to configure your workspace and how to work with archived data. |
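
The following Python sketch shows one way to apply the settings in the preceding table programmatically by calling the Azure Resource Manager REST API. Treat it as a minimal example under stated assumptions, not a definitive implementation: the api-version, the 100 GB/day commitment level, the ContainerLogV2 table, and all resource names are placeholders to replace and verify against the current Log Analytics REST reference.

```python
# Sketch: move a workspace to a commitment tier, switch a high-volume table to the
# Basic Logs plan, and set retention, all through the ARM REST API.
# Assumptions to replace/verify: api-version, commitment level, table name, and all IDs.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION = "<subscription-id>"    # placeholder
RESOURCE_GROUP = "<resource-group>"   # placeholder
WORKSPACE = "<workspace-name>"        # placeholder
API_VERSION = "2022-10-01"            # assumed Log Analytics api-version

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
base = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
        f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.OperationalInsights"
        f"/workspaces/{WORKSPACE}")

# 1. Commitment tier: 100 GB/day in exchange for a lower per-GB rate.
requests.patch(
    f"{base}?api-version={API_VERSION}",
    headers=headers,
    json={"properties": {"sku": {"name": "CapacityReservation",
                                 "capacityReservationLevel": 100}}},
).raise_for_status()

# 2. Basic Logs: put a debugging/troubleshooting table (must support the Basic plan)
#    on the cheaper ingestion plan and keep two years of total (archive) retention.
requests.patch(
    f"{base}/tables/ContainerLogV2?api-version={API_VERSION}",
    headers=headers,
    json={"properties": {"plan": "Basic", "totalRetentionInDays": 730}},
).raise_for_status()

# 3. Interactive retention for the rest of the workspace: keep the 30-day default.
requests.patch(
    f"{base}?api-version={API_VERSION}",
    headers=headers,
    json={"properties": {"retentionInDays": 30}},
).raise_for_status()
```

Tables on the Basic Logs plan have a short, fixed interactive retention period; the totalRetentionInDays value controls how long data remains available in the archive tier.
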
Data collection
Since Azure Monitor charges for the collection of data, your goal should be to collect the minimal amount of data required to meet your monitoring requirements. You have an opportunity to reduce your monitoring costs by modifying your configuration to stop collecting data that you're not using for alerting or analysis.
Azure resources
Recommendation | Description |
---|---|
Collect only critical resource log data from Azure resources. | When you create diagnostic settings to send resource logs for your Azure resources to a Log Analytics workspace, specify only the categories that you require. Since diagnostic settings don't allow granular filtering of resource logs, use a workspace transformation to further filter unneeded data. See Diagnostic settings in Azure Monitor for details on how to configure diagnostic settings and using transformations to filter their data. |
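
As a rough illustration of this recommendation, the sketch below creates a diagnostic setting that forwards a single log category to a Log Analytics workspace. The Key Vault resource, the AuditEvent category, and the api-version are assumptions; substitute the categories that your resource type exposes and that you actually need.

```python
# Sketch: a diagnostic setting that forwards only one resource log category to a
# Log Analytics workspace. Resource ID, category, and api-version are placeholders.
import requests
from azure.identity import DefaultAzureCredential

RESOURCE_ID = ("/subscriptions/<sub>/resourceGroups/<rg>"
               "/providers/Microsoft.KeyVault/vaults/<vault-name>")          # placeholder
WORKSPACE_ID = ("/subscriptions/<sub>/resourceGroups/<rg>"
                "/providers/Microsoft.OperationalInsights/workspaces/<ws>")  # placeholder
API_VERSION = "2021-05-01-preview"   # assumed diagnostic settings api-version

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com{RESOURCE_ID}/providers/Microsoft.Insights"
       f"/diagnosticSettings/send-audit-only?api-version={API_VERSION}")

body = {
    "properties": {
        "workspaceId": WORKSPACE_ID,
        # Only the category you alert or report on; leave everything else uncollected.
        "logs": [{"category": "AuditEvent", "enabled": True}],
        "metrics": [{"category": "AllMetrics", "enabled": False}],
    }
}

requests.put(url, json=body,
             headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```
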
Virtual machines
Recommendation | Description |
---|---|
Configure VM agents to collect only critical events. | Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. See Monitor virtual machines with Azure Monitor: Workloads for guidance on data to collect and strategies for using XPath queries and transformations to limit it. |
Ensure that VMs aren't sending duplicate data. | Any configuration that uses multiple agents on a single machine, or that multi-homes agents to send data to multiple workspaces, may incur charges for the same data multiple times. If you do multi-home agents, make sure you're sending unique data to each workspace. See Analyze usage in Log Analytics workspace for guidance on analyzing your collected data to make sure you aren't collecting duplicate data. If you're migrating between agents, continue to use the Log Analytics agent until you've fully migrated to the Azure Monitor agent; don't run both together unless you can ensure that each is collecting unique data. |
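
For the agent-configuration guidance in the table above, the following sketch outlines a data collection rule for the Azure Monitor agent that collects only critical and error entries from the Windows System log by using an XPath query. Treat it as a template under assumptions: the rule name, region, api-version, and XPath expression are illustrative and should be checked against the data collection rule documentation before use.

```python
# Sketch: a data collection rule (DCR) for the Azure Monitor agent that collects only
# critical and error entries from the Windows System log via an XPath query.
# Rule name, region, api-version, and the XPath expression are illustrative assumptions.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"  # placeholder
WORKSPACE_ID = ("/subscriptions/<sub>/resourceGroups/<rg>"
                "/providers/Microsoft.OperationalInsights/workspaces/<ws>")  # placeholder
API_VERSION = "2022-06-01"           # assumed DCR api-version

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
       f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Insights"
       f"/dataCollectionRules/dcr-critical-events?api-version={API_VERSION}")

body = {
    "location": "eastus",  # placeholder region
    "properties": {
        "dataSources": {
            "windowsEventLogs": [{
                "name": "criticalSystemEvents",
                "streams": ["Microsoft-Event"],
                # Level 1 = Critical, Level 2 = Error; informational events are skipped.
                "xPathQueries": ["System!*[System[(Level=1 or Level=2)]]"],
            }]
        },
        "destinations": {
            "logAnalytics": [{"name": "laDest", "workspaceResourceId": WORKSPACE_ID}]
        },
        "dataFlows": [{"streams": ["Microsoft-Event"], "destinations": ["laDest"]}],
    },
}

requests.put(url, json=body,
             headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```
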
Container insights
Recommendation | Description |
---|---|
Configure agent collection to remove unneeded data. | Analyze the data collected by Container insights as described in Controlling ingestion to reduce cost and adjust your configuration to stop collection of data you don't need. |
Limit Prometheus metrics collected. | If you configured Prometheus metric scraping, then follow the recommendations at Controlling ingestion to reduce cost to optimize your data collection for cost. |
Configure Basic Logs. | Convert your schema to ContainerLogV2 which is compatible with Basic logs and can provide significant cost savings as described in Controlling ingestion to reduce cost. |
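
Before trimming Container insights collection, it helps to know where the volume comes from. The sketch below uses the azure-monitor-query library to total billable ContainerLogV2 data by namespace over the last week; the _BilledSize and PodNamespace columns are assumed from the current ContainerLogV2 schema, and the workspace ID is a placeholder.

```python
# Sketch: find which Kubernetes namespaces generate the most billable container log
# data, so you know where excluding namespaces or lowering verbosity pays off.
# Assumes the ContainerLogV2 schema with the PodNamespace and _BilledSize columns.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

WORKSPACE_ID = "<workspace-guid>"  # placeholder Log Analytics workspace (customer) ID

client = LogsQueryClient(DefaultAzureCredential())
query = """
ContainerLogV2
| summarize BillableGB = sum(_BilledSize) / 1e9 by PodNamespace
| sort by BillableGB desc
"""

response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=7))
if response.status == LogsQueryStatus.SUCCESS:
    for namespace, billable_gb in response.tables[0].rows:
        print(f"{namespace}: {billable_gb:.2f} GB over the last 7 days")
```
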
Application Insights
Recommendation | Description |
---|---|
Use sampling to tune the amount of data collected. | Sampling is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics. |
Limit the number of Ajax calls. | Limit the number of Ajax calls that can be reported in every page view or disable Ajax reporting. If you disable Ajax calls, you'll be disabling JavaScript correlation too. |
Disable unneeded modules. | Edit ApplicationInsights.config to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required. |
Pre-aggregate metrics from any calls to TrackMetric. | If you put calls to TrackMetric in your application, you can reduce traffic by using the overload that accepts your calculation of the average and standard deviation of a batch of measurements. Alternatively, you can use a pre-aggregating package. |
Limit the use of custom metrics. | The Application Insights option to Enable alerting on custom metric dimensions can increase costs. Using this option can result in the creation of more pre-aggregation metrics. |
Ensure use of updated SDKs. | Earlier versions of the ASP.NET Core SDK and Worker Service SDK collect many counters by default as custom metrics. Use later versions to specify only the counters you require. |
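
The pre-aggregation recommendation can be illustrated with a small, SDK-agnostic sketch: rather than sending one telemetry item per measurement, accumulate measurements locally and emit a single aggregate per metric per flush interval. The MetricAggregator class and its emit callback are hypothetical helpers for illustration only, not part of any Application Insights SDK; in your application you'd wire the aggregate into the TrackMetric overload or pre-aggregating package mentioned above.

```python
# Illustrative, SDK-agnostic sketch of pre-aggregation: instead of one telemetry item
# per measurement, keep a running aggregate per metric and emit a single item per
# flush interval. MetricAggregator and the emit callback are hypothetical helpers,
# not part of any Application Insights SDK.
import math
from collections import defaultdict
from typing import Callable, Dict


class MetricAggregator:
    def __init__(self, emit: Callable[[str, dict], None]) -> None:
        self._emit = emit  # e.g. a wrapper around your SDK's send call
        self._state: Dict[str, list] = defaultdict(
            lambda: [0, 0.0, 0.0, math.inf, -math.inf])

    def track(self, name: str, value: float) -> None:
        # Accumulate count, sum, sum of squares, min, max; nothing is sent yet.
        s = self._state[name]
        s[0] += 1
        s[1] += value
        s[2] += value * value
        s[3] = min(s[3], value)
        s[4] = max(s[4], value)

    def flush(self) -> None:
        # One aggregate per metric instead of one item per measurement.
        for name, (count, total, sq, lo, hi) in self._state.items():
            mean = total / count
            stddev = math.sqrt(max(sq / count - mean * mean, 0.0))
            self._emit(name, {"count": count, "sum": total, "min": lo,
                              "max": hi, "stdDev": stddev})
        self._state.clear()


# Usage: 10,000 measurements become a single emitted aggregate.
agg = MetricAggregator(emit=lambda name, props: print(name, props))
for i in range(10_000):
    agg.track("queue_length", i % 50)
agg.flush()
```
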
Monitor workspace and analyze usage
After you've configured your environment and data collection for cost optimization, you need to continue to monitor it to ensure that you don't experience unexpected increases in billable usage. You should also analyze your usage regularly to determine if you have other opportunities to further filter out collected data that hasn't proven to be useful.
Recommendation | Description |
---|---|
Send alert when data collection is high. | To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period. See Send alert when data collection is high for details. |
Analyze collected data | Periodically analyze data collection using methods in Analyze usage in Log Analytics workspace to determine if there's additional configuration that can decrease your usage further. This is particularly important when you add a new set of data sources, such as a new set of virtual machines or onboard a new service. |
Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget. | A daily cap disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. This shouldn't be used as a method to reduce costs as described in When to use a daily cap. See Set daily cap on Log Analytics workspace for information on how the daily cap works and how to configure one. |
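
The following sketch shows the kind of check behind these recommendations: it totals billable ingestion per day from the Usage table with the azure-monitor-query library and flags days that exceed a budget. The same query can be adapted into a log search alert rule; the workspace ID and the 5 GB/day threshold are placeholders to adjust for your environment.

```python
# Sketch: a scheduled check (or the query for a log search alert rule) that totals
# billable ingestion per day from the Usage table and flags days over a budget.
# The workspace ID and the 5 GB/day threshold are placeholders to adjust.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

WORKSPACE_ID = "<workspace-guid>"   # placeholder
DAILY_BUDGET_GB = 5.0               # placeholder budget

client = LogsQueryClient(DefaultAzureCredential())
query = """
Usage
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000 by bin(TimeGenerated, 1d)
| sort by TimeGenerated asc
"""

response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=31))
if response.status == LogsQueryStatus.SUCCESS:
    for day, ingested_gb in response.tables[0].rows:
        flag = "  <-- over budget" if ingested_gb > DAILY_BUDGET_GB else ""
        print(f"{day:%Y-%m-%d}: {ingested_gb:6.2f} GB{flag}")
```
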
Next step
- Get best practices for a complete deployment of Azure Monitor.