There’s a conversation I have been involved in many times over my career, usually triggered by the question of why the numbers came in differently than expected, whether in a management report, a programme financial plan or a P&L insight. The forecast, budget or plan was inaccurate and the business was frustrated, with finance defending a process that nobody outside the function really understood or trusted.
The instinct in that moment is to focus on the accuracy problem: how can we get better data, better models, more granular assumptions? Sometimes this is the right response, but in my experience the accuracy problem is rarely the root cause. It is the result.
The thing underneath it, the thing that actually determines whether a forecast adds value or just adds noise, is whether the people involved in building it, challenging it, and acting on it are genuinely connected to what it’s supposed to achieve.
Research from Anaplan found that 87% of finance executives say their forecasts are already out of date by the time they reach cross-functional stakeholders, and that every leader they surveyed had faced personal consequences as a result.
The forecast isn’t primarily a number. It is a behavioural contract between finance and the rest of the business. It reflects what people believe is achievable with the information available to them, what they’re willing to commit to, and what they’re prepared to be held accountable for. When that contract is well designed and honestly held, the numbers that come out of it tend to be roughly right, and more importantly, they tend to drive the right conversations and the right decisions. When it isn’t, no amount of modelling sophistication will fix it.
What the forecast is actually for
Before worrying about how accurate a forecast is, it’s worth being clear about what it’s trying to do. That sounds obvious, but in practice the answer varies a lot depending on the nature and maturity of the business, and getting it wrong creates friction that compounds over time.
In a fast-growing or early-stage business, the forecast is primarily a navigational tool. It sets direction, aligns resource, and gives leadership a shared view of where the business is heading and what it needs to get there. Precision matters less than coherence, and the ability to update quickly matters more than the ability to predict accurately. Chasing a 99% accurate twelve-month forecast in that environment isn’t just a waste of effort. It’s a distraction from the conversations that actually matter.
In a larger, more complex organisation, the forecast takes on additional weight. It connects to external commitments, shapes how the business reports to its board or investors, and influences decisions across functions that don’t always have visibility of each other. Here, the accuracy question becomes more meaningful, though even in this context the most important variable isn’t usually the model. It’s whether the assumptions sitting inside the model reflect what the business genuinely believes and intends.
The question I find most useful when starting a forecasting conversation isn’t “how do we make this more accurate?” It’s “what behaviours are we trying to promote, and what are we trying to move away from?” That tends to surface a great deal about whether the current process is fit for purpose, because the behaviours a forecast drives are a direct reflection of how it’s been designed and who’s been involved in building it.
The human elements that determine whether it works
The end-to-end forecasting process has a lot of moving parts, and it’s easy to focus on the technical ones because they’re the most visible and measurable. But in my experience, the parts that most often determine whether the process adds value are the human ones. They don’t always show up on a process map.
The first is capability. Not just technical finance capability, though that matters, but the capability of finance business partners to operate as genuine thought partners to the business rather than as producers of a number. Most finance functions are somewhere in the middle of that shift from production to insight, and the gap between where they are and where they need to be often explains why forecasts feel like an admin burden rather than something useful.
AICPA and CIMA research found that while 60% of finance professionals now identify as business partners, the profession is still mid-transition, moving away from transactional processing toward partnering skills, data-driven insight, and support for strategic decision-making.
Finance teams that have genuinely made this shift spend more of their time understanding what’s driving performance and what decisions the business needs to make, and less time consolidating spreadsheets and chasing submissions. McKinsey’s research reinforces the point: leading organisations have achieved efficiency improvements of 39% or more in transactional finance, while seeing far fewer gains in the strategic areas like FP&A and finance business partnering, which is precisely where the highest value sits.
The second is engagement. A forecast built in isolation by finance, however technically sound, is rarely owned by the business. And a forecast the business doesn’t own is a forecast the business doesn’t act on. The way finance engages during the forecasting process shapes the reliability of the output as much as any model does: the quality of the conversations, the questions being asked, the clarity of what’s being asked for and why.
The third, and probably the most uncomfortable one, is honest assumption-setting. If the culture of the organisation rewards optimism in the planning process and penalises shortfalls in delivery, the forecast will reflect that. The numbers won’t be wrong by accident. They’ll be wrong by design, because the people building them have learned that honesty doesn’t serve them well. No forecasting tool will correct for that. It requires a different kind of conversation at leadership level about what the process is actually for.
When the tools of the process work against it
It’s also worth being honest about how some of the most common techniques in forecasting and budgeting can quietly undermine those human elements if they’re not handled carefully.
Take expected results, the practice of building a forecast that reflects what the business genuinely expects to deliver rather than what it’s been asked to achieve. Used well, this is a grounding mechanism. It forces an honest conversation about what’s actually possible given current trajectory, capability, and market conditions, and it gives leadership a realistic baseline. The problem comes when expected results become a negotiation rather than an honest assessment, when P&L owners learn that their expected result will simply be challenged upward regardless of the evidence. At that point the process stops surfacing reality and starts producing numbers that nobody quite believes.
Stretch targets carry a similar tension. I’ve seen them work well in organisations where there’s genuine trust between leadership and the business, where the stretch is accompanied by real support and resource, and where falling short is treated as useful information rather than a failure. But in organisations where accountability is exercised through pressure rather than honest conversation, stretch targets tend to produce gaming of the baseline assumptions, or a quiet disengagement from the target altogether. Neither serves the business.
Then there are one-line overlays, where finance applies a top-down adjustment to the consolidated bottom-up view. There are legitimate reasons to use them. The bottom-up view may not reflect a strategic decision made at the centre, or it may aggregate to a number that the business knows isn’t credible at a total level. Used transparently and for a defined purpose, an overlay is a reasonable tool. Used repeatedly as a way of making an uncomfortable bottom-up number disappear without addressing the underlying gap, it destroys the connection between the people building the forecast and the number the business is being held to. The P&L owner who has submitted an honest view and seen it adjusted upward without explanation has no meaningful reason to own the revised number. And a forecast nobody owns is a forecast nobody delivers against.
The thread running through all of this is the same. The forecast and the budget are finance-led processes, but they only work when every P&L owner across the business feels genuinely accountable for their piece. That accountability isn’t created by templates, submission deadlines, or challenge meetings. It’s created by whether the people being asked to commit to a number feel that the process has been honest, that their input has been taken seriously, and that what they’ve signed up to is genuinely theirs.
The accuracy question deserves more honesty
There’s an assumption in most forecasting conversations that more accuracy is always better. It rarely is. Accuracy has a cost, and the cost isn’t just time and resource. It’s focus. A finance team spending significant energy trying to forecast to a level of precision the business doesn’t actually need is a finance team not spending that energy on insight and analysis that would genuinely move the needle.
The honest question is: how accurate do we need to be, and for what purpose? A directional view that’s 80% right and available quickly is often more useful than a highly precise view that arrives too late to influence the decision it was meant to support. Different businesses will have different answers to this. The problem is that most don’t ask the question explicitly, so they end up investing in precision by default rather than by design.
Getting this right requires a genuine conversation between finance and the business about what decisions the forecast is meant to support, what lead time those decisions require, and what level of accuracy is good enough to make a call. That conversation is more useful than most of the technical work that goes into improving forecast models, and it happens far less often than it should.
Where AI fits, and where it doesn’t
There’s a lot of enthusiasm at the moment about AI’s potential to transform financial planning and forecasting. Some of it is justified. AI tools are genuinely useful for processing large volumes of transactional data, identifying patterns that aren’t visible to the human eye, and automating the more repetitive elements of consolidation and reporting. Used well, they can free up significant capacity and shift effort toward insight work.
But there are two conditions that need to be in place before AI can do that effectively, and most organisations I’ve spoken to haven’t thought carefully enough about either.
The first is data quality. AI models are only as good as the data they’re trained on. If the transactional data flowing into a forecasting process is inconsistent, incomplete, or poorly connected across systems, AI will process it faster and at greater scale, but it won’t fix the underlying problem. In some cases it’ll make it worse, because the outputs will have a surface credibility that the underlying data doesn’t deserve. MIT Sloan research found that when organisations scale faster than their data foundations, finance teams end up spending more time explaining discrepancies than driving strategy, and that introducing AI without sufficient transparency risks producing outputs that are fast, persuasive, and wrong.
Mistrust in the numbers is one of the most corrosive problems a finance function can face. Introducing AI into a process where the data foundations aren’t solid is a reliable way to deepen that mistrust rather than resolve it.
The second is process clarity. To deploy AI effectively in a forecasting process, you need to understand the process end to end, including the human elements, well enough to know where automation genuinely adds value and where human judgement can’t be replaced. That’s not a technology question. It’s a process design question, and it requires the kind of honest assessment of capability, behaviour, and engagement that most organisations haven’t done.
A useful way to think about it is in stages. Start by mapping the end-to-end process with the human elements visible. Identify where the friction is and what’s causing it. Separate the problems that are genuinely about data processing from the ones that are about capability and behaviour. Then work out which elements AI can actually improve. That approach takes longer upfront but it produces outcomes that stick, rather than creating a new set of problems alongside the old ones.
The outcome worth aiming for
The finance functions I’ve seen operate most effectively aren’t the ones with the most sophisticated forecasting models. They’re the ones that have built a genuine connection between the strategy of the business and the people responsible for delivering it, through a process that’s honest, well-designed, and understood by the people it’s asking things of.
When that connection works, the forecast becomes something the business uses rather than tolerates. Finance business partners spend their time on conversations that influence decisions rather than on work that produces reports. Leaders have the insight they need to move quickly and confidently, rather than waiting for a number that arrives too late or carries too little credibility to act on.
I’m not sure any of that is easy. But I am sure it matters more than the model.
The Impact Architect is a newsletter for CFOs and transformation leaders navigating complex change. If this resonated, I’d welcome you sharing it with someone who’d find it useful. If you want to get in touch about how I can support your business and finance transformation activities, reach me at neilalderson@neilaldersonltd.co.uk and we can set up a conversation.
Sources
Anaplan — When Forecasts Fail: The High Cost of Inaccurate Forecasting: https://www.anaplan.com/blog/high-cost-of-inaccurate-forecasting/
AICPA & CIMA — Redefining Finance for a Sustainable World: https://www.cpapracticeadvisor.com/2024/04/22/survey-shows-divide-among-finance-pros-when-it-comes-to-the-future-of-finance/104306/
McKinsey — Finance 2030: Four Imperatives for the Next Decade: https://www.mckinsey.com/capabilities/operations/our-insights/finance-2030-four-imperatives-for-the-next-decade
MIT Sloan Management Review — Why Finance Leaders Must Fix Data Trust Before Betting on AI: https://www.mitsloanme.com/article/why-finance-leaders-must-fix-data-trust-before-betting-on-ai/