Why Estimates Are Always Wrong
Estimates. We’ve all been there. The client needs to know when it’ll be done. The manager needs to know how much it’ll cost. And so, we give them a number. A number that, almost without fail, turns out to be wrong.
Why is this? Is it because we’re bad at our jobs? Are we lazy? Nope. The truth is far more fundamental. Software estimation is, by its very nature, an exercise in predicting the unpredictable.
The Illusion of Certainty
Think about it. When you estimate how long it’ll take to cook a new recipe, you have a pretty good idea. You’ve cooked before. You know roughly how long chopping takes, how long things bake. But what if the recipe calls for an ingredient you’ve never used? Or what if your oven runs a little hotter than it should? Or you get a call you have to take? Suddenly, your estimate is off.
Software is like that, but orders of magnitude more complex. We’re not just following a recipe; we’re inventing one as we go, often with brand new ingredients, in a kitchen that keeps changing.
Unknown Unknowns
This is the classic problem: the unknown unknowns. There are things we know, and things we know we don’t know. But the real killer is what we don’t even know we don’t know. These are the bugs that only appear on a specific browser version, the API that behaves unexpectedly under load, the third-party library that suddenly has a breaking change.
These aren’t minor glitches; they’re fundamental roadblocks that can’t be foreseen. Trying to put a time estimate on them is like trying to guess the exact date of your next cold. You know you’ll probably get sick eventually, but when? And how bad will it be?
The Cost of Estimation
Beyond the inherent inaccuracy, the act of estimation itself can be costly. Teams spend hours, sometimes days, debating estimates. This is time not spent building. The pressure to provide a concrete number often leads to “optimistic” estimates, which then set unrealistic expectations. When those expectations aren’t met, it erodes trust and creates a blame culture. It’s a lose-lose situation.
What’s the Alternative?
If estimates are so bad, what should we do? The answer lies in embracing uncertainty and focusing on what we can control: the process and the feedback loops.
1. Focus on Throughput and Cycle Time
Instead of asking “When will this feature be done?”, ask “How long does it typically take us to deliver a similar feature from start to finish?” Track your cycle time: the time from when work begins on an item until it’s completed. Over time, you’ll build a historical dataset that’s far more reliable than any upfront guess.
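To make this concrete, here’s a minimal sketch of turning historical cycle times into a probabilistic forecast. The numbers are hypothetical; in practice you’d pull start/done timestamps from your ticket tracker. The idea is to answer “how long do items usually take?” with percentiles rather than a single guess.

```python
# Hypothetical cycle times (in days) for recently completed work items.
# In a real team, these would come from your tracker's timestamps.
cycle_times = [2, 3, 1, 5, 2, 4, 3, 2, 8, 3, 2, 4]

def percentile(data, pct):
    """Return the value at the given percentile (0-100) of the data,
    using the nearest-rank method: the smallest value with at least
    pct% of items at or below it."""
    ordered = sorted(data)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

median = percentile(cycle_times, 50)
p85 = percentile(cycle_times, 85)
print(f"Half of items finish within {median} days; "
      f"85% finish within {p85} days.")
```

Note the shape of the answer: instead of “this will take 3 days,” you can say “85% of similar items have finished within N days.” That’s a statement about your process, backed by data, rather than a promise about the future.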
2. Break Down Work into Small, Manageable Chunks
Large features are inherently harder to estimate than small ones. By breaking work into the smallest possible pieces (think user stories that can be completed in a day or two), you reduce the scope for surprises. If a small task takes longer than expected, it’s a much smaller impact than a large one going wildly over.
3. Use Relative Sizing (Story Points)
Story points are a way to estimate the relative effort, complexity, and risk of a piece of work compared to other pieces of work. It’s not about hours. A 5-point story is roughly two and a half times as much effort as a 2-point story. This abstracts away from flawed time-based estimation and focuses on the inherent size of the problem.
4. Embrace Iteration and Feedback
Agile methodologies are built around this. Deliver working software frequently. Get feedback early and often. This allows you to course-correct constantly, rather than trying to predict the entire journey from the start.
The Reality
No one has a crystal ball. Software development is a complex, creative, and often messy process. While we can’t predict the future with certainty, we can build robust processes that adapt, learn, and deliver value incrementally. Stop chasing perfect estimates. Start focusing on predictable delivery.