Azure FinOps Essentials
Taming the Logs: Cutting Azure Monitoring Costs Without Losing Insight
Hi there, and welcome to this week’s edition of Azure FinOps Essentials.
Before diving in: I’m proud to share that I’ve been re-awarded the Microsoft MVP award in the categories of DevOps and Azure Cost Management. Sharing what I learn, including through this newsletter, plays a big part in that, so thank you for reading and supporting it.
Now, onto this week’s topic.
If you’ve ever opened your Azure bill and been surprised by how much came from Application Insights or Log Analytics, you’re not alone. I’ve seen it across many projects: detailed telemetry piling up without a clear purpose or owner.
This edition is all about reducing monitoring costs without losing insight. I’ll walk through how developers and FinOps teams can work together to make logging more intentional, useful, and cost-effective.
Let’s jump in.
Cheers,
Michiel
Logging Everything, Understanding Nothing
Over the years, I’ve looked into dozens of Azure subscriptions where something odd kept happening.
The most expensive resource wasn’t compute.
Not storage.
Not even a PaaS service like Cosmos DB or App Service.
No — it was Log Analytics.
Month after month, quietly topping the cost charts.
In many cases, diagnostic logs were just enabled by default. They streamed detailed traces from every App Service, Function, Front Door, or SQL database into a central workspace. Nobody really knew why.
Developers set Application Insights to Verbose “just in case.”
Diagnostic settings sent logs to every destination: Log Analytics, Storage, Event Hub. Someone once said it was best practice.
No one ever checked the ingestion volume or looked at the retention policies.
And here’s the kicker:
When I asked the teams whether they were even using those logs, the answer was often no.
They weren’t building alerts.
They weren’t querying data.
They didn’t even have dashboards wired up.
It was observability by habit, not by design.
And it came with a price.
This edition is about exactly that. How observability costs can spiral out of control, and what you can do to make telemetry useful without setting your budget on fire.
Where Logging Costs Lurk in Azure
When we talk about “logging costs” in Azure, we’re usually referring to Log Analytics. This is the engine behind Application Insights and Azure Monitor. It’s powerful, but also easy to misuse.
Here’s where the cost tends to come from:
Ingestion volume: The more data you send, the more you pay. Every metric, trace, dependency, or custom event has a size. If you’re sending 10,000 requests per second with full trace context, the bill adds up quickly (see the back-of-the-envelope sketch after this list).
Retention: By default, Log Analytics workspaces store data for 30 days. Extending that to 90 or 180 days multiplies the cost. Often, no one questions why the data needs to be stored that long.
Duplicate logging: Many teams send the same logs to multiple destinations like Log Analytics, Event Hub, and Blob Storage “just in case.” Each of those adds its own cost.
No sampling or filtering: Using verbose-level logging in production creates a flood of trace data. Combined with retry logic or bursty traffic, it can lead to massive spikes in data volume.
Same settings in all environments: Development, test, and production often share the same telemetry configuration. This means verbose logging and long retention are applied everywhere, even where it is not needed.
Querying patterns: Queries against Analytics Logs do not incur direct charges (Basic and Auxiliary logs do, as covered later), but heavy dashboards with inefficient queries can slow down systems and increase backend resource consumption.
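To see how quickly ingestion volume adds up, here is a back-of-the-envelope sketch in Python. The 1 KB average payload per request is purely an assumption for illustration; measure your own telemetry sizes before drawing conclusions.

```python
# Back-of-the-envelope ingestion cost for full-fidelity request telemetry.
# Assumptions (illustrative only): 10,000 requests/second, ~1 KB of telemetry
# per request, Analytics Logs pay-as-you-go at $2.30 per GB.
REQUESTS_PER_SECOND = 10_000
BYTES_PER_REQUEST = 1_024      # assumed average payload, not an Azure figure
PRICE_PER_GB = 2.30            # Analytics Logs pay-as-you-go rate

SECONDS_PER_DAY = 60 * 60 * 24
gb_per_day = REQUESTS_PER_SECOND * BYTES_PER_REQUEST * SECONDS_PER_DAY / 1024**3

print(f"Ingestion: {gb_per_day:,.0f} GB/day")
print(f"Cost:      ${gb_per_day * PRICE_PER_GB:,.0f}/day, "
      f"~${gb_per_day * PRICE_PER_GB * 30:,.0f}/month")
```

Under those assumptions you land near 824 GB per day, roughly $57,000 per month, before retention or export charges. Even if your real numbers are a tenth of that, it is worth a look.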
Once you know where the cost comes from, you can start applying FinOps principles.
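A good first step is to ask the workspace itself where the bytes come from. Here is a minimal sketch using the azure-monitor-query and azure-identity Python packages; the workspace ID is a placeholder, and the built-in Usage table is the source for billable ingestion per data type.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder: the Log Analytics workspace ID (not the full resource ID).
WORKSPACE_ID = "<workspace-id>"

# The built-in Usage table tracks billable ingestion per data type;
# Quantity is reported in MB.
QUERY = """
Usage
| where TimeGenerated > ago(30d)
| summarize IngestedGB = sum(Quantity) / 1024.0 by DataType
| sort by IngestedGB desc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=30))

for table in result.tables:
    for data_type, ingested_gb in table.rows:
        print(f"{data_type:<40} {ingested_gb:>10.1f} GB")
```

Run it once per workspace and you usually know within minutes which tables deserve attention first.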
Logging should be a strategic decision, not a default behavior. It should serve the people who actually use the data — developers, site reliability engineers, security analysts — and it should evolve as your system grows.
Next, I’ll walk through how to take back control.
Practical Ways to Reduce Azure Logging Costs
Once you become aware of how much logging contributes to your Azure bill, the good news is this: there is plenty you can do, both in what you log and in how you are charged for it.
Start with usage. Developers have direct control over what gets logged and where. A few targeted changes can already make a significant impact:
Lower log levels in non-critical environments. Do you really need verbose logs in test?
Tailor retention per environment. Production may need 90 days, but dev might be fine with 3.
Remove unused diagnostic settings. If no one reads the data, it should not exist.
Log with intent. Structure logs to be queryable and actionable, not just noise.
Use sampling or aggregation, especially for high-volume telemetry like requests or traces (a sketch follows this list).
Differentiate log level and retention between production and non-production where it actually makes sense. Many teams keep verbose logs and long retention in dev by default.
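To make the log-level and sampling points concrete, here is a minimal sketch using Python’s standard logging module. The LOG_LEVEL variable name and the 10 percent sample rate are illustrative choices, not Azure defaults; if you use the Application Insights SDKs, their built-in sampling settings are the better tool.

```python
import logging
import os
import random

# Pick the log level per environment instead of hardcoding verbose everywhere.
# LOG_LEVEL is an illustrative variable name: DEBUG in dev, WARNING in prod.
level_name = os.environ.get("LOG_LEVEL", "WARNING").upper()
logging.basicConfig(level=getattr(logging, level_name, logging.WARNING))

class SamplingFilter(logging.Filter):
    """Pass only a fraction of DEBUG/INFO records; keep all warnings and errors."""

    def __init__(self, sample_rate: float = 0.1):
        super().__init__()
        self.sample_rate = sample_rate

    def filter(self, record: logging.LogRecord) -> bool:
        if record.levelno >= logging.WARNING:
            return True                      # never drop warnings or errors
        return random.random() < self.sample_rate

logger = logging.getLogger("app")
logger.addFilter(SamplingFilter(sample_rate=0.10))  # keep ~10% of low-level noise

logger.debug("noisy detail")   # ~90 percent of these never leave the process
logger.error("always kept")    # errors always pass through
```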
But usage alone is not the full story. Even well-structured logs can be expensive if you are using the wrong pricing tier.
This is where rate optimization comes in.
Azure Monitor offers three ingestion plans:
| Plan | Price per GB | Intended Use |
| --- | --- | --- |
| Analytics Logs | $2.30 | Rich queries, alerts, long-term visibility |
| Basic Logs | $0.50 | Simpler queries, low-value data |
| Auxiliary Logs | Custom pricing | Archival, compliance, lookup across known datasets |
Assigning the correct plan to each table is one of the easiest ways to save. Debug or low-importance logs can go to Basic Logs, while alerting tables stay in Analytics Logs for full query and rule support.
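Table plans are set per table. Below is a minimal sketch that flips one table to the Basic plan through the ARM REST API, assuming the 2022-10-01 API version; the subscription, resource group, workspace, and token values are placeholders, and AppTraces is just an example of a table that supports the Basic plan.

```python
import requests

# Placeholders: substitute your own identifiers; TOKEN must be a valid
# Azure AD bearer token with permission to manage the workspace.
SUB, RG, WS, TABLE = "<subscription-id>", "<resource-group>", "<workspace>", "AppTraces"
TOKEN = "<bearer-token>"

url = (
    f"https://management.azure.com/subscriptions/{SUB}"
    f"/resourceGroups/{RG}/providers/Microsoft.OperationalInsights"
    f"/workspaces/{WS}/tables/{TABLE}"
)

# Move a low-value table to the cheaper Basic ingestion plan.
resp = requests.patch(
    url,
    params={"api-version": "2022-10-01"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"properties": {"plan": "Basic"}},
)
resp.raise_for_status()
print(resp.json()["properties"]["plan"])
```

The same property can be set in Bicep, Terraform, or the portal; the point is that it is a per-table decision, not a workspace-wide one.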
If you ingest large volumes, committing to a daily ingestion tier brings real benefits (a comparison sketch follows these numbers). For example:
100 GB per day drops your cost to $1.96 per GB.
1 TB per day brings it down to $1.70 per GB.
At 10 TB per day, you pay $1.57 per GB.
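Taking the effective rates above at face value, a small helper makes the comparison tangible. The rates are the ones quoted here and will drift over time, and real commitment tiers bill the full tier price even if you ingest less on a given day, which this simplified sketch ignores.

```python
# Effective per-GB rates as quoted above; check current Azure pricing before use.
PAYG_RATE = 2.30
TIER_RATES = {100: 1.96, 1_000: 1.70, 10_000: 1.57}  # GB/day -> effective $/GB

def monthly_cost(gb_per_day: float) -> float:
    """Cheapest applicable rate for a steady daily volume (simplified model)."""
    rate = PAYG_RATE
    for tier, tier_rate in sorted(TIER_RATES.items()):
        if gb_per_day >= tier:
            rate = tier_rate
    return gb_per_day * rate * 30

for volume in (50, 100, 1_000, 10_000):
    print(f"{volume:>6} GB/day -> ${monthly_cost(volume):>11,.0f}/month")
```

At 100 GB per day the tier already saves about $1,000 per month compared to pay-as-you-go.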
Retention is another cost lever. Analytics Logs include 31 days of retention at no extra charge (90 days for some tables, such as those backing Application Insights). For longer retention, move data to long-term storage at just $0.02 per GB per month; keeping 1 TB around for an extra year then costs about $246 in total.
Also keep an eye on indirect charges (a quick sanity check follows this list):
Queries on Basic and Auxiliary logs cost $0.005 per GB scanned.
Exporting logs is billed at $0.10 per GB.
Data collection transformations that discard more than 50 percent of incoming logs trigger ingestion charges on the portion above that threshold.
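A quick sanity check on those last two rates, using made-up monthly volumes:

```python
# Indirect charges at the rates quoted above, for illustrative monthly volumes.
scanned_gb = 500      # Basic/Auxiliary log queries, $0.005 per GB scanned
exported_gb = 1_000   # log export, $0.10 per GB

print(f"Query scans: ${scanned_gb * 0.005:,.2f}")   # $2.50
print(f"Log export:  ${exported_gb * 0.10:,.2f}")   # $100.00
```

Scanning is cheap; routine bulk exports are the line item to watch.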
None of these are surprising once you know how the pricing works. And all of them are manageable with the right collaboration between engineering and FinOps.
Logging is not just a technical detail. It is an ongoing decision about where to spend your budget and where to get value. Once you take back control, you can make sure that your telemetry adds insight, not just invoice lines.
Conclusion
Usage × Rate = Cost.
That formula still holds true, and logging is no exception.
As a developer, you have direct control over what you log, how long you retain it, and which ingestion plan you choose. As a FinOps practitioner, your role is to surface the costs, highlight opportunities for savings, and guide teams toward smarter defaults.
This is exactly where FinOps and engineering meet. Not by enforcing restrictions, but by enabling better decisions. Together, you can balance the value of observability with the cost of data, ensuring that your telemetry provides clarity instead of cost surprises.
It is not about logging less. It is about logging with purpose.
Interested in sponsoring this newsletter? Then visit the sponsor page.
Thanks for reading this week’s edition. Share with your colleagues and make sure to subscribe to receive more weekly tips. See you next time!
Want more FinOps news? Then have a look at FinOps Weekly by Victor Garcia.