Azure FinOps Essentials

The Price of Convenience: How Azure Defaults Quietly Increase Your Bill

Hi there,

Azure makes it incredibly easy to get something running. You click through a wizard, choose a tier that looks safe, accept a few defaults, and within minutes your application or service is live. It is one of the reasons teams move fast in the cloud.

But over time those early choices begin to matter.

The default App Service tier keeps running even when traffic is low.

Cosmos DB holds on to a minimum throughput that your workload never reaches.

Storage accounts remain in the Hot tier far longer than anyone expected.

And diagnostics collect far more data than anyone will ever look at.

None of this feels wrong in the moment. Azure is simply giving you a configuration that works reliably. The cost only becomes visible later, when you realize that convenience came with a price tag.

In this edition of Azure FinOps Essentials, I explore why default settings can quietly inflate your bill, where this happens most often, and how a few small adjustments can bring your costs back in line with what your workloads actually need.

Cheers, Michiel

Why this problem is more common than people admit

Almost every team I speak with has a moment where their cloud bill stops making sense. Nobody made a big change. Nobody shipped a new service. Nothing unusual happened. Yet the numbers keep creeping up.

When this happens, teams start digging. They revisit the architecture. They hunt for zombie resources. They check logs and scaling rules. And sometimes they find something clear, but often they do not.

The real reason sits somewhere else.

It sits in the default settings they accepted on day one.

Azure tries to make your life easier. Many services come with recommended SKUs, generous configurations, and safe minimums. These defaults work well for most workloads, especially when reliability matters more than cost efficiency.

But convenience has a price. If you deploy quickly and rarely go back to review the configuration, you end up paying for capacity you never needed in the first place.

This edition explores a simple truth.

The biggest source of waste in Azure often isn’t a mistake.

It’s the silent cost of choosing defaults without questioning them.

Let’s look at where this happens most often, and what you can do to stay in control.

App Service Plans: When convenience hides continuous cost

Azure App Service is one of those services that feels almost too easy. You create a plan, select a tier that looks reasonable, deploy your code, and it simply works. The portal encourages this flow because the goal is clear: get you online with as little friction as possible.

The problem is that most teams never return to the choices they made during that first setup. The recommended tier often leans toward the Standard plan or something similar. It offers features that sound useful. Autoscaling. Deployment slots. More CPU. More memory. It feels safe, and because everything works, nobody questions it.

Yet App Service Plans bill for the capacity you allocate, not the traffic you receive. Microsoft is very explicit about this. You pay for the compute resources defined in your plan, and billing continues even when the app is idle.

This means a service that handles a few requests per hour can cost the same as one handling thousands. Idle time does not reduce your spend. A plan that is too large keeps draining money even when nothing is happening. And if you deploy each app into its own plan instead of consolidating them, you multiply the problem.
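To make that concrete, here is a small back-of-the-envelope sketch in Python. The hourly rates are placeholders rather than Azure list prices, but the shape of the math is what matters: the plan accrues cost every hour it exists, whether or not anyone calls it.

```python
# Back-of-the-envelope sketch: App Service Plans bill for allocated hours,
# not for requests served. The hourly rates below are placeholders,
# not Azure list prices.

HOURS_PER_MONTH = 730

def monthly_plan_cost(hourly_rate: float, instance_count: int = 1) -> float:
    """The plan itself accrues cost every hour, busy or idle."""
    return hourly_rate * instance_count * HOURS_PER_MONTH

# A Standard-sized instance (placeholder rate) vs. a smaller Basic-sized one.
standard_like = monthly_plan_cost(hourly_rate=0.10)    # ~73 per month
basic_like = monthly_plan_cost(hourly_rate=0.018)      # ~13 per month

requests_per_month = 5_000  # a low-traffic internal app
print(f"Standard-like plan: {standard_like:.2f}/month, "
      f"{standard_like / requests_per_month:.4f} per request")
print(f"Basic-like plan:    {basic_like:.2f}/month, "
      f"{basic_like / requests_per_month:.4f} per request")
# The per-request difference is driven entirely by allocation, not by usage.
```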

This is not a warning against App Service. It is a reminder that the defaults are designed for reliability, not cost efficiency. Once the application is running, it’s worth taking a moment to ask a few questions.

Do you really need the tier you picked on day one?

Are multiple apps running on separate plans that could be combined?

Is autoscaling configured carefully, or left on the safest option?

Could off-hours scaling or scheduled reductions help? A short sketch of this follows below.

These small checks make the difference between a plan that quietly drains budget and one that reflects the real needs of your workload.
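For the scheduled-reductions question above, a timer-driven script is often enough. Here is a minimal sketch, assuming the azure-identity and azure-mgmt-web packages; the subscription, resource group, and plan names are placeholders. Run it in the evening from a scheduler of your choice, with a mirror-image script in the morning.

```python
# Minimal sketch of an off-hours scale-down for an App Service Plan.
# Assumes azure-identity and azure-mgmt-web; all names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-myapp"
PLAN_NAME = "plan-myapp"

client = WebSiteManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Read the current plan, then lower the SKU and instance count for the night.
plan = client.app_service_plans.get(RESOURCE_GROUP, PLAN_NAME)
print(f"Current SKU: {plan.sku.name}, instances: {plan.sku.capacity}")

plan.sku.name = "B1"        # smaller tier while nobody is using the app
plan.sku.tier = "Basic"
plan.sku.size = "B1"
plan.sku.capacity = 1

client.app_service_plans.begin_create_or_update(
    RESOURCE_GROUP, PLAN_NAME, plan
).result()
print("Plan scaled down for off-hours.")
```

Keep in mind that dropping below Standard also drops features such as deployment slots, so this pattern fits development and internal workloads better than production front ends.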

Cosmos DB: Minimum throughput that becomes an invisible floor

Cosmos DB is one of those services that feels almost magical when you first use it. It is fast, global, flexible, and designed for applications that need consistent performance. Because of that focus, it comes with certain guardrails, and one of the most important is the minimum throughput requirement.

When you create a container with provisioned throughput, you must allocate at least 400 request units (RU) per second. That requirement is documented clearly in Microsoft’s pricing guidance.

For some workloads, this is perfectly reasonable. High-traffic services, stable APIs, or predictable applications justify this baseline. But many teams use Cosmos DB for smaller experiments, internal tools, low-volume endpoints, or unpredictable workloads. In those cases, 400 RU can be far more than what the workload actually consumes.

What often happens is simple. Someone creates a container during development. They leave the default throughput in place. The application grows slowly, but the usage stays low. Months later, the bill reflects consistent throughput you never truly used. Nothing is technically wrong, but the cost floor sits much higher than necessary.

Cosmos DB offers alternatives, but they are easy to overlook if you never revisit your configuration. Serverless mode is ideal for sporadic or low-traffic workloads. Autoscale fits services with peaks but low average usage. And reducing RU after the initial launch can deliver immediate savings without changing code.
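As a starting point, a quick look at what each container actually has provisioned is worth more than any dashboard. Here is a minimal sketch using the azure-cosmos package; the endpoint, key, database, and container names are placeholders.

```python
# Minimal sketch: read a container's provisioned throughput and, if it is
# over-allocated, bring it back toward the documented 400 RU/s floor.
# Assumes the azure-cosmos package; endpoint, key, and names are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("appdb").get_container_client("orders")

current = container.get_throughput()
print(f"Provisioned throughput: {current.offer_throughput} RU/s")

# Provisioned throughput bills per 100 RU/s per hour, around the clock,
# whether the requests arrive or not (check the pricing page for exact rates).
if current.offer_throughput > 400:
    container.replace_throughput(400)
    print("Throughput lowered to 400 RU/s")
```

Serverless is chosen when the account is created rather than toggled afterwards, so for existing containers the quickest wins are usually lowering over-provisioned RU or moving the container to autoscale.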

Defaults are not the enemy here. They are designed to keep your application responsive. The real issue is leaving them untouched when your workload does not fit the assumptions behind them.

A simple throughput review can turn an always-on cost floor into something far more aligned with real usage.

Storage Accounts: Hot tiers that stay hot far longer than needed

Azure Storage is one of the most flexible building blocks in the platform. It supports many scenarios, from hot application data to long-term archives and analytics workloads. Because it needs to be broadly useful, Azure gives every new storage account a safe default. It starts in the Hot access tier.

Hot storage is fast and convenient. It is also the most expensive tier per stored gigabyte. Microsoft’s pricing table shows a clear difference. Hot storage carries a higher monthly cost than Cool or Archive storage because it is optimized for frequent access and low latency.

This default makes sense when you are building an application that reads and writes data constantly. It is less ideal when the account is created for diagnostic logs, export files, backups, or data that nobody plans to access frequently. Yet this is exactly how many workloads start. A developer needs a place to store something. They create a storage account. They accept the default tier. The data grows. And the tier remains unchanged for months or years.

The difficulty is that nothing feels broken. Applications keep running. Export jobs succeed. Nobody receives an alert. But the bill quietly increases because the data sits in a tier that was never meant for long-term retention.

Azure provides simple tools to fix this. Lifecycle management policies can move older data to Cool or Archive without manual effort. You can also use application patterns where logs or historical data immediately land in cheaper tiers. And if you periodically audit your storage accounts, you will often find large volumes of data that no longer justify the Hot tier at all.
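A lifecycle policy is just a small JSON document attached to the storage account. Here is a minimal sketch, written as a Python dict that mirrors the documented policy format; the rule name, the logs/ prefix, and the day thresholds are placeholders, not recommendations.

```python
# Minimal sketch of a lifecycle management policy. The dict mirrors the
# documented policy JSON; rule name, prefix, and day thresholds are placeholders.
import json

lifecycle_policy = {
    "rules": [
        {
            "enabled": True,
            "name": "cool-then-archive-old-logs",
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["logs/"],   # only touch blobs under logs/
                },
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterModificationGreaterThan": 30},
                        "tierToArchive": {"daysAfterModificationGreaterThan": 90},
                    }
                },
            },
        }
    ]
}

# Write it out so it can be applied via the portal, CLI, or an ARM/Bicep template.
with open("policy.json", "w") as f:
    json.dump(lifecycle_policy, f, indent=2)
```

Once the policy is attached to the account, Azure evaluates it roughly once a day and moves matching blobs on its own, with no changes to the application.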

The point is not to avoid Hot storage. It plays an important role. The point is to recognize that Azure starts you there for safety, not for cost efficiency. Unless you actively move your data into a tier that matches its real value, you pay more than necessary for storage that rarely gets touched.

Logging and Monitoring: Generous defaults that slowly grow into noise

Azure Monitor and Log Analytics are essential parts of running applications in the cloud. They give teams the visibility they need to diagnose issues, understand performance, and maintain stability. Because visibility is so important, many services in Azure enable broad diagnostics by default. They capture metrics, logs, and traces without requiring much configuration.

This is useful on day one. You deploy something new. You want to see everything. You want to understand how it behaves. And Azure makes that easy.

The challenge appears over time.

Those logs continue to accumulate.

Retention settings often remain at the default, which is commonly set to 30 days for Log Analytics workspaces.

Diagnostic rules gather far more categories than the team actually needs.

And nobody revisits the setup once the system is running smoothly.

Azure Advisor even calls this out directly. It warns that collecting unnecessary logs increases cost and recommends adjusting diagnostic settings and retention to match real operational needs.

Log data is not expensive at first. A few gigabytes here and there feel manageable. But logs grow predictably. Data from busy applications, verbose tracing, and platform-level diagnostics accumulate over months. Eventually the workspace becomes one of the top contributors to the monthly bill, even though most of the data is never queried.

This is not a failure of the platform. It is a natural side effect of generous defaults combined with long-term growth. The fix is simple, yet often ignored. Reduce retention for environments that do not need a full month of history. Disable categories that produce high volumes but add little value. Move older logs to cheaper storage if you still want to keep them.
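To get a feel for the numbers, here is a small sketch that compares a verbose diagnostic category kept for six months against the same category on default retention. The per-GB prices are placeholders; look up the current Log Analytics ingestion and retention prices for your region before trusting the output.

```python
# Back-of-the-envelope sketch for log costs. Prices are placeholders, not
# current Azure list prices; the point is the relative difference.

INGESTION_PRICE_PER_GB = 2.30         # placeholder pay-as-you-go ingestion price
RETENTION_PRICE_PER_GB_MONTH = 0.10   # placeholder price beyond the included period

def monthly_log_cost(gb_per_day: float, retention_days: int, included_days: int = 31) -> float:
    """Ingestion cost for one month plus retention cost beyond the included period."""
    ingestion = gb_per_day * 30 * INGESTION_PRICE_PER_GB
    extra_days = max(retention_days - included_days, 0)
    retention = gb_per_day * extra_days * RETENTION_PRICE_PER_GB_MONTH
    return ingestion + retention

# A chatty diagnostic category adding 5 GB per day, kept for 180 days:
print(f"Verbose category, 180-day retention: {monthly_log_cost(5, 180):.2f}/month")
# The same category trimmed back to the default retention:
print(f"Verbose category, default retention:  {monthly_log_cost(5, 31):.2f}/month")
# Disabling the category entirely removes the ingestion cost as well.
```

In this sketch most of the cost sits in ingestion rather than retention, which is why disabling noisy categories often saves more than shortening how long you keep them.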

Good monitoring is not about collecting everything. It is about collecting what you can act on. Anything beyond that only adds cost without improving reliability.

Bringing it all together

Azure defaults are designed with a simple goal. Make it easy to get something running. Prioritize reliability. Keep you from breaking things on day one. These defaults are safe, and they genuinely help teams move fast.

The trouble is that default settings rarely match the long-term needs of a real workload. They create comfort, but they also create hidden cost. Idle compute in an app service plan. Minimum throughput that exceeds real demand. Hot storage that never cools. Log retention that grows quietly until it becomes one of your biggest cost drivers.

None of this happens because someone made a bad decision. It happens because teams stay focused on delivery, and the configuration they picked during a quick setup eventually turns into the configuration they keep forever.

Cost control in Azure is not about mastering every advanced feature. It is about paying attention to the points where convenience quietly becomes expensive. When you revisit your choices, even briefly, you take back control. You question assumptions. You reduce idle waste. You tune storage and logs to match how your systems really behave.

The good news is that you do not need a complex process to benefit from this. You only need the habit of reviewing the defaults that got you started. A few small adjustments often bring your environment much closer to the actual needs of your workloads.

By replacing convenience with intention, you turn your cloud bill into something predictable, understandable, and aligned with how your services create value.

Interested in sponsoring? Then visit the sponsor page.

Thanks for reading this week’s edition. Share with your colleagues and make sure to subscribe to receive more weekly tips. See you next time!

Want more FinOps news? Then have a look at FinOps Weekly by Victor Garcia.

