The Bank of England’s chief economist recently said his profession was “to some degree in crisis” over its failure to predict the 2008 financial crash and – to a lesser extent – the apparent resilience of the UK economy after the Brexit vote. Dimitri Zenghelis looks at how economic forecasting works and explains that its models are not infallible. Economists should be honest about the limitations of their predictions.

‘The Bank of England has admitted to a Michael Fish moment – but economists have plenty to learn from weather forecasters’. So said London’s free business newspaper City AM this week, and with some justification. The headline referred to a comment made by Andy Haldane, the Bank of England’s chief economist, who recently spoke of economists having their Michael Fish moment. For non-native Brits or those under the age of 40, the analogy needs explaining. It relates to an infamous occasion in October 1987 when the BBC’s star weather forecaster, Michael Fish, began his afternoon bulletin with the words “apparently a woman rang the BBC and said she heard there was a hurricane on the way, well if you are watching don’t worry there isn’t… most of the strong winds will be down over Spain.”

Within hours hurricane force winds devastated south-east England, levelling buildings, cutting off power in the capital and felling entire forests. Poor Michael Fish, a first-rate forecaster and presenter, has since had his name associated indelibly with humiliating prediction errors.

So when Haldane described economists’ failure to predict the financial crisis of 2008 as a “Michael Fish moment”, the blow hit well below the belt. He claimed his profession was “to some degree in crisis” following this and the Brexit vote. In the former case, few economists predicted the timing and scale of the financial crash. In the latter, many predicted something close to a crash which, so far at least, has failed to materialise.

Economists deserve some flak for this. The profession does itself no favours in trying to oversell its capabilities. In doing so, it undermines its credibility and often loses the ability to influence the debate even where the assessment is considered and sure-footed.

Most open to criticism is the art (and it is mostly an art) of economic forecasting. This is a game I am intimately familiar with, having previously run the Treasury economic model for more than half a decade. One of the first lessons one learns is that there is no such thing as ‘the model’ telling you something. The model is a tool which you use to express your assessment, not to form it. True, running an economic model requires internal consistency and has merit in forcing the modeller to articulate assumptions and express the relationships behind those assumptions, but it does not help you predict the future.

This is an inconvenient truth. We all want to know the future. American inventor Charles F. Kettering once said: “my interest is in the future because I am going to spend the rest of my life there.” But the fact that economic models are not very useful at making forecasts about the future is why JK Galbraith, an economist himself, quipped as far back as 1988 that “the only function of economic forecasting is to make astrology look respectable.”

As I have argued before, long-term economic forecasts spanning decades are particularly dubious. They rely on economists’ own assessments of technologies, production processes, institutions, fashions and tastes in the distant future. By contrast, short-run forecasters – those who attempt to look ahead a year or two – can at least rely on these basic structures of the economy remaining unchanged. Yet even short-term economic forecasts have shown themselves to be only marginally better than random guesswork. That said, marginally better is still well worth paying for if you happen to run a financial institution, a government or a central bank.

None of this means economic models are a total waste of time. Far from it. They remain extremely useful for simulations and probabilistic forecasts. Unlike unconditional economic forecasts, which attempt to make predictions of the kind “this is what will happen at that time…”, simulations rely on conditional forecasts of the kind “if you do this, you are this much more likely to get that”. Simon Wren-Lewis uses helpful medical analogies. The fact that a doctor cannot tell you precisely when heavy smoking will lead to a heart attack or a lung condition does not mean her advice is valueless. She knows that the probability of becoming afflicted by such conditions, all else equal, rises in proportion to regularly inhaling cigarette smoke.

So it is with economic forecasting. Conditional forecasts are less vulnerable to specific prediction error and offer valuable insights once interactions through the whole economy have been simulated. A rise in the oil price, all else equal, increases UK inflation by so much; cutting central bank interest rates, all else equal, boosts spending by so much; a bad trade deal with Europe, all else equal, increases the chances that companies invest elsewhere by so much. All these are expressed as relative to a counterfactual baseline which is entirely hypothetical.
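The logic of a conditional, “all else equal” forecast can be made concrete with a toy simulation. The function and the pass-through elasticity below are invented purely for illustration – they are not estimates from any actual model – but they show the structure of the claim: the output is a difference relative to a hypothetical baseline, not a prediction of where inflation will actually be.

```python
# Illustrative conditional simulation. The elasticity below is a
# placeholder for demonstration, not an estimate from a real model.

def simulate_oil_shock(baseline_inflation, oil_price_rise_pct,
                       pass_through=0.02):
    """Return inflation relative to a hypothetical baseline after an
    oil price shock, all else equal.

    pass_through: assumed percentage-point rise in inflation per
    10% rise in the oil price (an assumption, not an estimate).
    """
    return baseline_inflation + pass_through * (oil_price_rise_pct / 10)

# "If the oil price rises 20%, inflation is this much above baseline."
baseline = 2.0  # per cent, entirely hypothetical
shocked = simulate_oil_shock(baseline, 20)
print(round(shocked - baseline, 4))  # difference vs. the counterfactual
```

The point of the sketch is that the answer only ever takes the form “this much higher than the baseline would otherwise have been” – the baseline itself remains unknowable.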

Conditional forecasts based on economic models can be quite accurate in capturing behaviour because they are based on good empirical and theoretical foundations which underlie the model’s properties. By contrast, predicting the exact rate of inflation or economic growth at any one point in the future remains near impossible because, however good your understanding of the economy, economic growth and inflation depend on a host of other factors, just as human health depends on much more than smoking habits.

This would not matter were it not for the fact that pinning too much on a particular forecast with spurious precision can lead to an outright rejection of rational, fact-based analysis in today’s “post-truth”, “post-fact” world, as noted recently by John Llewellyn. Making forecasts that have a high probability of being wrong offers a hostage to fortune to those who, like Michael Gove, the former cabinet minister and leading Brexiter, argue illogically that because experts appear not to know what they are talking about, anyone’s view is equally valid. (Gove said famously before the referendum that Britain “has had enough of experts”.)

So it is not at all surprising that forecasters got the timing of a post-Brexit slowdown wrong. All else was not equal and there were many moving parts, which predictably scuppered the accuracy of the forecasts. For example, the sterling exchange rate fell, providing an immediate boost to exporters, whose domestic costs became more competitive overnight, while the downside hit to UK consumers was delayed because importers were locked into longer-term price contracts. At the same time, the Bank of England aggressively eased policy and extended quantitative easing to boost domestic spending. The world economy also refused to oblige the forecasters, expanding faster than anyone expected, while changes to investment plans took time to feed through. Finally, investors and consumers who voted for Brexit had a rather different expectation of its likely impact than did most economists, and this was reflected in their more confident spending behaviour. All this is easy to diagnose now, but difficult to predict in advance. But that’s just the point.

Last summer’s much-trumpeted unconditional forecasts of UK economic growth stalling by the end of 2016 proved wide of the mark. Does this mean economists have nothing useful to say about the effect of Brexit given their poor track record at forecasting? It would be tempting to think so. But this would be incorrect. Once again, this is where clarifying the difference between conditional simulations and unconditional forecasts is vital.

Unlike unconditional predictions about the impact of Brexit, which attempt to capture changes in all variables, conditional forecasts are grounded in rigorous microeconomic evidence, such as international trade and investment models that employ well-established ‘gravity equations’ capturing the impact of tariffs or regulations on trade, depending on the distance to market. These effects are well known and predictable. So are the effects of lower immigration on GDP and the public finances (immigrants tend to be younger and more economically active than the average citizen, and therefore make net contributions to GDP and the public finances). Also predictable with a degree of accuracy is the fact that lower real incomes will eventually undermine spending, though perhaps not immediately, because consumers can always borrow more and save less. Quantifying these effects is exactly what economic models are good for.
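A textbook gravity equation predicts that bilateral trade rises with the two economies’ GDPs and falls with distance and trade frictions such as tariffs. The sketch below uses the standard textbook form with placeholder parameter values and a simplifying assumption about how tariffs enter; the numbers are illustrative, not estimates.

```python
# Textbook gravity equation for bilateral trade. All parameter values
# are placeholders chosen for illustration, not empirical estimates.

def gravity_trade(gdp_i, gdp_j, distance, tariff=0.0,
                  g=1.0, elasticity=1.0, distance_decay=1.0):
    """Predicted bilateral trade flow, all else equal.

    tariff: ad-valorem tariff rate; modelled here as scaling trade
    down by 1 / (1 + tariff)**elasticity (a simplifying assumption).
    """
    frictionless = g * gdp_i * gdp_j / distance ** distance_decay
    return frictionless / (1.0 + tariff) ** elasticity

# Conditional question: how much does a 10% tariff reduce trade,
# holding GDPs and distance fixed?
base = gravity_trade(1000, 500, 100)
with_tariff = gravity_trade(1000, 500, 100, tariff=0.10)
print(round(with_tariff / base, 3))  # ratio of trade with vs. without tariff
```

As with the conditional forecasts described above, the model answers a relative question – trade with the tariff versus trade without it – rather than predicting the absolute level of trade in any particular year.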

All else being equal, these particular aspects of leaving the EU can be predicted. What these models alone do not tell you is when and how they will interact with other unforeseen events, nor whether they are the only outcomes that might result from Brexit: good or ill. But this is a debate, backed up by good evidence, which is worth having – unlike trying to predict whether growth will bottom out in the first or second quarter of next year.

So economists are their own worst enemies. They have something unique and meaningful to offer on issues such as the impacts of Brexit, and it is their responsibility not to undermine this contribution by falsely raising expectations about the accuracy of forecasts. As with doctors giving preventative health advice, economists would be wise to warn against future risks and help design more resilient economies, rather than half-blindly attempting to predict exactly when the next economic cardiac arrest will come.


Note: This post was first published on the Brexit blog.

About the Author

Dimitri Zenghelis is co-head of policy at the Grantham Research Institute on Climate Change and the Environment at the LSE.


