Can central bankers become Superforecasters?
Aakash Mankodi and Tim Pike
Bank Underground, March 12, 2018
Tetlock and Gardner’s acclaimed work on Superforecasting provides a compelling case for seeing forecasting as a skill that can be improved, and one that is related to the behavioural traits of the forecaster. These so-called Superforecasters have in recent years been pitted against experts ranging from U.S. intelligence analysts to participants in the World Economic Forum, and have performed on a par with, or better than, them in accurately predicting the outcomes of a broad range of questions. Sounds like music to a central banker’s ears? In this post, we examine the traits of these individuals, compare them with economic forecasting and draw some related lessons. We conclude that considering the principles and applications of Superforecasting can enhance the work of central bank forecasting.
Setting the scene
It is helpful to begin by considering the purpose of forecasting in central banks, and how the process works in practice. This speech by Gertjan Vlieghe explains how forecasting is an important tool that helps policymakers diagnose the state and outlook for the economy, and in turn assess – and communicate – the implications for current and future policy. So achieving accuracy is not always the sole aim of the forecast. However, forecasts are also a means to provide public accountability of central bank actions, and the presence of persistent or significant forecast errors may damage the credibility of the policy making institution amongst key stakeholders (individuals, governments and financial and capital markets).
A typical forecast set-up at a central bank (see here) is supported by two pillars: i) statistical frameworks underpinned by specific economic concepts (for example, New Keynesian general equilibrium models), which can be supported by tools that process a range of economic and financial data; and ii) monetary policymakers’ judgements and deliberations that overlay these strict model-based forecasts – all of which feed into the deliberation process.
The accuracy of such forecasts has come under much scrutiny (see here) since the financial crisis, resulting in a great deal of effort to improve their performance. Several reviews and studies (see Stockton (2012), BoE IEO (2015), FRBNY Staff Report (2014) and ECB WP 1635 (2014)) have evaluated forecast performance across many major central banks and suggested improvements: calibrating economic models better (e.g. to reduce bias), challenging prior conventions more, and learning more from other central banks and economic forecasters. The BoE’s MPC, for example, has also started commenting on its own ‘key judgements’ in its quarterly Inflation Report.
This is all welcome progress. But this iterative process from inside the central banking community over time leaves us with an impression that improving forecast performance could benefit from further considering the successes of forecasting in other fields (similar to taking an “outside view” when forecasting as described by Kahneman). We may then move forward from this process of gradual evolution… to a potential revolution.
Superforecasters have been described as “unusually thoughtful humans on a wide spectrum of problems”. They are drawn from deliberately diverse backgrounds, and include both amateurs and experts in a given field. They compete in tournaments which test their judgements on a range of questions about economic or geopolitical events, and through making these predictions they are expected to hone a range of forecasting skills. They are judged on several measures (including a daily average ‘Brier score’ – a measure of forecast accuracy originally proposed for testing weather forecasts), and they earn their title by consistently ranking in the top percentile of their peers.
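For readers unfamiliar with the measure: the Brier score for a set of binary-event forecasts is simply the mean squared difference between the forecast probabilities and the realised (0/1) outcomes, so lower is better and zero is perfect. A minimal sketch (the function name and example numbers are ours for illustration, not the GJP’s actual scoring code):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    forecasts: probabilities in [0, 1] assigned to each event occurring.
    outcomes: 1 if the event occurred, 0 otherwise. Lower is better.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, well-calibrated forecaster scores near 0; an uninformative
# 50/50 forecaster scores 0.25 regardless of what actually happens.
print(brier_score([0.9, 0.8, 0.2], [1, 1, 0]))  # 0.03
print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.25
```

Note that Brier’s original formulation sums over multiple outcome categories; the two-outcome version above is the special case most often quoted in the Superforecasting literature.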
Superforecasters were first identified through The Good Judgement Project (now a private enterprise), which was part of a US intelligence agency programme in 2011. The GJP’s testing team included renowned advisors from psychology, statistics and economics. Their work used personality trait tests and training methods to reduce cognitive biases and improve the forecasting abilities of their volunteer forecasters. They then identified individuals who consistently outperformed their peers. Subsequent studies of this experiment found that when these top forecasters were placed in teams with other such forecasters (described here as a ‘group of average citizens doing Google searches in their suburban town homes’), they performed around 30% better than the average for intelligence community analysts who had access to confidential intercepts and other relevant data. Pretty Super-ising results, one might say!
Can central banks become this ‘super’?
The story so far could imply that the answer simply lies in replacing central bank forecasters with these Superforecasters and leaving them to it. However, central bank forecasting is as much about forming a coherent economic narrative (the preserve of economists) as it is about numerical accuracy (for which the traits that make these individuals outperform the ‘experts’ matter). Central bankers may have a comparative advantage in the former, but their forecasting can be enhanced by considering key behavioural traits of those responsible for forecasting.
So how do central bankers fare against these Superforecasters?
The similarities: Superforecasters (most importantly) have a ‘growth mind-set’, which is a real willingness to address why a forecast is different from its eventual outcome, rather than just an ex-post evaluation of whether the prediction was correct. They also demonstrate a good balance of data and judgements when forming conclusions, not placing undue weight on either one.
Central bankers in comparison likely fare favourably against these traits, given most major central banks provide detailed updated assessments (the ‘why’) accompanying changes to their forecasts on a regular (usually quarterly) basis. At the BoE, these follow a substantial consultation process between staff and policymakers.
Using the wisdom of crowds: Another key trait of Superforecasters is that their forecasting abilities are enhanced when working in diverse teams – with people drawn from a range of disciplines, levels and areas of expertise. This enables them to tap into the well-known concept of the wisdom of crowds, and the process reportedly leads to better forecasts by providing a more stimulating environment for debate. Results are further aggregated to give more weight to forecasters who have a better track-record.
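That final weighting step can be sketched in a few lines: combine the individual probability forecasts as a weighted average, giving forecasters with better track records (lower historical Brier scores) more influence. The inverse-Brier weighting below is our illustrative assumption, not the GJP’s actual aggregation algorithm (which reportedly also involves steps such as extremising the combined probability):

```python
def weighted_crowd_forecast(probs, past_brier, eps=1e-6):
    """Combine probability forecasts, weighting by forecasters' track records.

    probs: each forecaster's probability for the event in question.
    past_brier: each forecaster's historical Brier score (lower = better),
    so the weight 1/(score + eps) rewards a stronger track record.
    """
    weights = [1.0 / (b + eps) for b in past_brier]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, probs)) / total

# Three forecasters say 0.7, 0.6 and 0.9; their historical Brier scores
# are 0.05, 0.20 and 0.10, so the first forecaster's view counts most.
combined = weighted_crowd_forecast([0.7, 0.6, 0.9], [0.05, 0.20, 0.10])
print(round(combined, 3))  # ~0.743, pulled towards the best track record
```

An unweighted average of the same three forecasts would be 0.733; the track-record weighting shifts the crowd view towards the historically more accurate forecasters.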
The central bank forecasting process does incorporate some elements of this – for example, many central bank policymakers make decisions in committees, after debate and exchanging views. Moreover, several central banks regularly use surveys of external economic forecasters as an input to the forecasting process, or draw on external views, e.g. the use of the Agency or Market Intelligence networks to gather views in the BoE.
But (we would assert that) the forecasting outputs do not benefit in the same way as in the Superforecasting process, where the particular behavioural traits, the mix of expert and non-expert opinions, and the previous track records of the forecasters are all taken into account. Engaging a wider cohort of participants in forecasting could address this. One option would be to create Citizen Economists – as proposed by Andy Haldane in this speech, in which he argued that the wisdom of crowds can be harnessed by regularly canvassing the public’s views on the economy. Central banks could consider creating an online platform that engages the public directly with forecasting – which might also improve public understanding of policy and the economy (the RSA’s Citizen Economic Council and the Bank’s recently announced Citizen Panels, for example, intend to achieve a similar purpose). Central bankers could use these as an input into their own forecast process (perhaps even publishing the alternative crowd-sourced forecast, similar to the way the Fed publishes a staff forecast alongside the FOMC’s official one), though policymakers would remain accountable for their own forecasts and any policy decisions based on them.
Competing forecasts: A further avenue to explore is whether central banks might engage external Superforecasters (who aren’t constrained by the same institutional challenges as central banks) to produce their own macro-economic forecasts. Superforecasters currently partner with organisations (e.g. humanitarian or policy-making) around the world on topics ranging from geopolitics and future currency movements to economics. In a similar vein, central banks could use Superforecasters’ macro-economic forecasts alongside their surveys of external economic forecasters as an additional input to the forecasting process.
Central Bank Superforecasters: Central banks could also try to identify and train their own internal economic Superforecasters, by employing the same techniques of cognitive training, team-work and result aggregation, as yet another input to the forecasting process. One part of such a programme could be the continual assessment of forecasting performance, as measured by Brier scores.
With continued forecasting challenges on the horizon in coming years, perhaps it is an opportune time to incorporate these ideas into the central banking sphere. Economic forecasting will always be an imperfect science, and it is unlikely that a major shock such as the global financial crisis would have been averted by improving forecast accuracy in these ways. But we believe the lessons learnt from the Superforecasting experiment have much to offer in taking forecasting a step in that direction. Over time, we might even be able to create the next generation of central bank Superforecasters.
Aakash Mankodi works in the Bank’s Market Intelligence and Analysis Division and Tim Pike works in the Bank’s Agencies Division.