RDP 2012-07: Estimates of Uncertainty around the RBA's Forecasts

6. Alternatives and Limitations

6.1 Forecast Versus Model Errors

Past forecast errors are one possible method of gauging forecast uncertainty. Another is to use an economic model. Model-based estimates of forecast uncertainty are considerably more flexible than estimates based on historical forecast errors: they can accommodate new variables, longer horizons, new information sets or forecasting techniques, and so on. Model-based estimates are therefore attractive whenever there is a substantial change in economic structure, in the forecasting framework, or in how the forecast is presented.

The disadvantage of model-based confidence intervals is that they can be unrealistic, though the direction of overall bias will vary. Models can overstate uncertainty because they use too little information. They tend to be considerably simpler than the approach used by forecasters, who often pool together multiple data sets, forecasting techniques and qualitative data. Reflecting this, no one model provides a good guide to the Reserve Bank forecast, in which judgement plays a large role.

However, models can also understate uncertainty because, in a different sense, they use too much information. Most obviously, they are typically estimated using revised data that only became available after the forecasts were made. In principle, recursive estimation using real-time data (‘pseudo-real-time forecasting’) can remove this advantage and mimic some of the conditions of actual forecasting. But perhaps more important are issues of model specification. After the event, it may seem obvious that some variables trended while others were stationary, and that some influences were important while others were irrelevant. But in real time, these issues are often unclear. Unless one can use the models that were actually employed at the time, it is very difficult to deprive a model specification of the benefit of hindsight.
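To make the mechanics concrete, the sketch below shows one way such a pseudo-real-time exercise could be organised. It is illustrative only: the AR(2) is a stand-in model, and `vintages` and `final_data` are hypothetical inputs (each vintage containing only the observations published at that forecast origin), not the Bank's data or models.

```python
# A minimal sketch of pseudo-real-time forecasting under the assumptions above.
# `vintages` maps an integer forecast origin t to the series as published at t
# (observations 0..t only); `final_data` is the latest, fully revised series.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def pseudo_real_time_errors(vintages, final_data, horizon=4):
    """Re-estimate a simple AR(2) at each origin using only the data available
    at the time, forecast `horizon` quarters ahead, and score against revised data."""
    errors = []
    for origin, history in vintages.items():
        history = np.asarray(history)
        if len(history) < 20:                  # need a minimal estimation sample
            continue
        fit = AutoReg(history, lags=2).fit()   # estimated on real-time data only
        point = fit.forecast(steps=horizon)[-1]
        target = origin + horizon              # position of the outcome in the final data
        if target < len(final_data):
            errors.append(final_data[target] - point)
    return np.asarray(errors)
```

Even this stylised exercise only removes the advantage of revised data; as the text notes, the choice of model specification itself would still benefit from hindsight.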

To illustrate, we compare our estimates of uncertainty with those of a representative Bayesian VAR, specifically, the ‘BVAR2’ model of Gerard and Nimark (2008, Section 2.1).[15] This comprises two lags of domestic GDP growth, underlying inflation, the cash rate and the exchange rate, together with foreign output growth, inflation and interest rates. Gerard and Nimark provide further details. We depart from their specification by including CPI inflation and the unemployment rate, using the OLS estimate of the shock covariance, and estimating over 1993:Q2–2011:Q3.
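As a rough illustration of how a VAR maps into horizon-by-horizon uncertainty bands (this is a simplified stand-in, not the BVAR2: it is a small reduced-form VAR estimated by OLS on simulated placeholder data, and the variable names are hypothetical), the sketch below reads off the theoretical forecast-error standard deviations at each horizon and converts them to half-70 per cent interval widths.

```python
# Minimal sketch: forecast-error standard deviations from an OLS-estimated VAR(2).
# Not the Gerard-Nimark BVAR2; the data are simulated placeholders standing in for
# quarterly series such as GDP growth, underlying inflation and the cash rate.
import numpy as np
import pandas as pd
from scipy.stats import norm
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(74, 4)),        # roughly a 1993:Q2-2011:Q3 sample
                    columns=["gdp_growth", "inflation", "cash_rate", "exchange_rate"])

results = VAR(data).fit(2)                           # two lags, estimated by OLS

horizons = 8
mse = results.mse(horizons)                          # (h, k, k) forecast-error covariances
std = np.sqrt(np.array([np.diag(m) for m in mse]))   # std deviation by horizon and variable

# Half-width of a 70 per cent interval under normality: z_0.85 times the std deviation
half70 = norm.ppf(0.85) * std
print(pd.DataFrame(half70, columns=data.columns,
                   index=[f"h={h}" for h in range(1, horizons + 1)]))
```

With white-noise placeholder data the bands themselves are uninteresting; the point is only the mechanics of converting an estimated VAR into forecast-error bands at each horizon.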

Figure 7 compares the 70th percentile of absolute errors from the two approaches. This measure, approximately one standard deviation, is added on either side of the central forecast to construct the confidence intervals in Figure 3, and is often referred to as a ‘half-70 per cent confidence interval’.[16] The model suggests more certainty about the outlook for unemployment and for GDP growth than do past forecast errors, but less certainty about CPI inflation. These differences presumably reflect different information sets. For example, the model knows about the downtrend in unemployment over this sample, though this was not known before the event. In contrast, the model did not know in advance about the introduction of the GST. More could be said about these alternative estimates; for example, they are presumably sensitive to changes in sample period or model specification. However, the key point is that the estimates of uncertainty are broadly similar. Hence model estimates could be used in place of, or to augment, estimates based on past forecast errors.

Figure 7: Forecast Versus Model Errors
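For concreteness, the historical-error measure described above is straightforward to compute. The sketch below uses made-up inputs (a hypothetical table of past forecast errors by horizon and an illustrative central forecast, not the Bank's data): it takes the 70th percentile of absolute errors at each horizon and adds it to either side of the central forecast, as in Figure 3.

```python
# Sketch: half-70 per cent confidence intervals from past forecast errors.
# `errors` and `central` are made-up inputs for illustration only.
import numpy as np
import pandas as pd

def half70_band(errors: pd.DataFrame, central: pd.Series) -> pd.DataFrame:
    """errors: rows are past forecast rounds, columns are horizons (quarters);
    central: the current central forecast indexed by the same horizons."""
    half_width = errors.abs().quantile(0.70)   # roughly one standard deviation per horizon
    return pd.DataFrame({"central": central,
                         "lower": central - half_width,
                         "upper": central + half_width})

rng = np.random.default_rng(1)
errors = pd.DataFrame(rng.normal(scale=[0.4, 0.6, 0.8, 1.0], size=(40, 4)),
                      columns=[1, 2, 3, 4])          # errors widen with horizon
central = pd.Series([3.0, 3.1, 3.2, 3.0], index=[1, 2, 3, 4])
print(half70_band(errors, central))
```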

6.2 Past Forecast Errors are an Unreliable Guide to the Future

Forecastability will vary with economic conditions. For example, forecast errors are likely to be greater than usual following large macroeconomic shocks. In principle, it might be tempting to address these changes through models of conditional heteroskedasticity. In practice, the macroeconomic shocks that give rise to the greatest uncertainty are often unusual and difficult to quantify, so data-based modelling is unreliable.

Foreign central banks have dealt with this issue in different ways. The Bank of England calculates average historical forecast errors, then adjusts these judgementally so as to reflect the uncertainty about the current forecast more accurately. The US Federal Reserve presents estimates of average forecast errors over the previous twenty years, coupled with a qualitative assessment of the degree to which current uncertainty may differ from normal. For references, see Appendix A.

A more fundamental problem is that average levels of uncertainty seem to be unstable. For example, there was a large reduction in the variability and unpredictability of macroeconomic conditions in many OECD countries in the mid 1980s, the so-called ‘Great Moderation’. There have been large increases in unpredictability (for some variables and countries) over the past few years, associated with the global financial crisis.

The difficulties this instability poses are illustrated by the experience of the US Federal Reserve.[17] The Federal Open Market Committee (FOMC) started releasing estimates of uncertainty with its forecasts in November 2007. (These should not be confused with the measures of disagreement among FOMC participants that the Fed also releases.) At that time the midpoint of FOMC projections of the unemployment rate in 2010:Q4 was 4.8 per cent. This projection was associated with an RMSE of 1.1 percentage points, which was the average over the previous twenty years (FOMC 2007, pp 10 and 12). In the event, the unemployment rate rose to 9.6 per cent, representing a forecast error of 4.8 percentage points, or 4.4 RMSEs. If forecast errors were normally distributed, with a constant mean and variance, then such an error should occur once every 80,000 forecasts. More likely explanations are that the variance is not actually constant or (perhaps more plausibly, given that subsequent errors have been modest) that the distribution is not normal.
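The ‘once every 80,000 forecasts’ figure follows directly from the stated numbers and the normality assumption, and can be checked in a few lines (this is only a check of the arithmetic in the text):

```python
# Check of the back-of-the-envelope probability discussed above.
from scipy.stats import norm

error = 9.6 - 4.8        # forecast error in percentage points
rmse = 1.1               # published historical RMSE, percentage points
z = error / rmse         # roughly 4.4 standard deviations
p = 2 * norm.sf(z)       # two-sided tail probability under normality
print(f"{z:.1f} RMSEs, p = {p:.1e}, about 1 in {1 / p:,.0f} forecasts")
```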

A corollary of this instability is that our estimates of uncertainty are sensitive to the sample period we have used. Were we to include forecast errors from the recession and disinflation of the early 1990s, our RMSEs would increase. Were we to include the volatile 1970s and 1980s, our RMSEs would presumably increase further.

A closely related problem is that there have been many substantive, presentational, and other changes in the forecasts over the past two decades. Some of these are noted in Appendix B. Perhaps more important, the economy we are trying to predict continues to evolve. So past performance is only a rough guide to the future.

Although this instability means that future uncertainty may differ from the past, it does not indicate in which direction. Historical estimates are not obviously biased. This is in contrast to, for example, subjective estimates of uncertainty, which numerous studies in other areas have found to be overconfident (see footnote 2).

Footnotes

Thanks to Penny Smith for constructing these estimates. [15]

Ordinarily, one might expect the 68th percentile of the distribution of forecast errors (shown in Figure 6) and 70th percentile (shown in Figure 7) to be extremely close. But with small samples, as we have at far horizons, sizeable differences can arise. This is an example of the sampling variability discussed in footnote 13. [16]

Full disclosure: one of us was involved in producing the Fed's estimates, see Reifschneider and Tulip (2007). [17]