Accurate Business Forecasts - what the trends reveal

In this series of blogs I'm looking at how to improve the accuracy of business forecasts, and the role forecasting systems such as Financial Driver can play. In my last blog I looked at collecting the detail behind the numbers; in this blog I will look at the use of trends.
A sure sign of an unrealistic forecast is a step-change in performance from one period to the next. I'm sure you have all seen the situation where costs or sales rise (or decline) steadily throughout the year, and then, in the space of one period, the forecast figure bucks the trend. There may be a good reason for this, such as the launch of a new product or some other event, but there must be a reason. This sudden change in direction is often seen in budgets where the first-period target is much greater than the previous period's actual, or where sales targets show a 'hockey stick' effect towards the end of the year. There are a number of ways in which a forecasting system can be set up to challenge these types of submission:
  • As forecasts are entered, the system can place them on a timeline, shown as a continuous chart following on from actual results. Where the forecast shows a significant change in direction, the system should prompt the user for an explanation.
  • As with the previous point, forecasts can be superimposed onto a chart of last year's actual results. The aim is to highlight any trend that doesn't conform to last year's data, so that the user can explain why the forecast is different.
  • I know of one organisation where the forecasting system performed statistical analysis of the previous two years of actual results. This was used to give the user a degree of confidence in the forecast being entered: for example, it would tell the user the percentage likelihood that the forecast would be achieved. They found that this information made users think, as they knew that senior managers would be seeing the same statistical data. The result was that forecasts proved more accurate than when this information was not provided.
  • As mentioned in my blog on driver-based forecasting, the forecasting system could plot dependent measures together. For example, when requesting the forecast for an outcome, the dependent resource and workload measures could also be displayed. Given that some outcomes lag their drivers, the system would need to show measures over a number of periods so that their effect can be assessed.
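To make the first and third checks concrete, here is a minimal sketch of how a forecasting system might implement them. This is my own illustration, not Financial Driver's actual logic: the sample figures, the 25% deviation threshold, and the normal-distribution fit are all assumptions for the sake of the example.

```python
from statistics import NormalDist, mean, stdev

def flag_step_change(actuals, forecast, threshold=0.25):
    """Flag a forecast that bucks the recent trend in actual results.

    Projects the next period by extending the average period-on-period
    change, then flags the forecast if it deviates from that projection
    by more than the threshold (an illustrative 25% here).
    """
    changes = [b - a for a, b in zip(actuals, actuals[1:])]
    expected = actuals[-1] + mean(changes)
    deviation = abs(forecast - expected) / abs(expected)
    return deviation > threshold  # True => prompt the user for a reason

def confidence_of_achieving(actuals, forecast):
    """Estimate the likelihood of achieving the forecast.

    Fits a normal distribution to past actuals (a simplifying
    assumption) and returns P(result >= forecast).
    """
    mu, sigma = mean(actuals), stdev(actuals)
    return 1 - NormalDist(mu, sigma).cdf(forecast)

# A steady climb of 5 per period, then a forecast of 180, is flagged;
# a forecast of 122 continues the trend and passes unchallenged.
print(flag_step_change([100, 105, 110, 115], 180))
print(flag_step_change([100, 105, 110, 115], 122))
print(confidence_of_achieving([100, 105, 110, 115], 130))
```

In a real system the threshold would be tuned per measure, and two years of monthly actuals would give the distribution fit far more to work with than the four points shown here.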
Those charged with overseeing the forecasting process should also be provided with the above information: analysing and approving forecasts is just as important a role as submitting them. In my final blog on forecasting accuracy, I will look at assessing the variability of individual forecasts. In the meantime, if I can help you with your forecasting process, do get in touch via the Contact Us page.