RDP 2017-01: Gauging the Uncertainty of the Economic Outlook Using Historical Forecasting Errors: The Federal Reserve's Approach

3. Methods for Gauging Uncertainty

How might central banks go about estimating the uncertainty associated with the outlook?[5] The approach employed by the FOMC and several other central banks is to look to past prediction errors as a rough guide to the magnitude of forecast errors that may occur in the future.[6] For example, if most actual outcomes over history fell within a band of a certain width around the predicted outcomes, then a forecaster might expect future outcomes to cluster around his or her current projection to a similar degree. Such an error-based approach has two attractive features. First, the relationship of the uncertainty estimates to historical experience is clear. Second, the approach focuses on the actual historical performance of forecasters under true ‘field conditions’ and does not rely on after-the-fact analytic calculations, using various assumptions, of what their accuracy might have been.[7]
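
To make the error-based calculation concrete, the sketch below (illustrative only; the function name, data, and parameters are hypothetical rather than drawn from this paper) computes the root mean squared error (RMSE) of past forecast errors at a given horizon and uses it to form an approximate 70 per cent confidence band around a current point projection, under the simplifying assumption that errors are roughly normal with mean zero.

```python
# Illustrative only: an error-based uncertainty band built from past forecast errors.
# All names and data are hypothetical, not taken from the paper.
import numpy as np
from scipy.stats import norm

def error_based_band(past_errors, current_projection, coverage=0.70):
    """Symmetric confidence band around a point projection, scaled by the
    root mean squared error (RMSE) of historical forecast errors."""
    errors = np.asarray(past_errors, dtype=float)
    rmse = np.sqrt(np.mean(errors ** 2))
    z = norm.ppf(0.5 + coverage / 2.0)   # about 1.04 for a 70% band under normality
    return current_projection - z * rmse, current_projection + z * rmse

# Hypothetical one-year-ahead errors for GDP growth (percentage points)
past_errors = [0.8, -1.2, 0.3, -0.5, 1.6, -0.9, 0.4]
low, high = error_based_band(past_errors, current_projection=2.0)
print(f"Approximate 70% band: {low:.1f} to {high:.1f} per cent")
```

In practice, such RMSEs would be computed separately for each variable and forecast horizon, and the normality assumption is only a convenient approximation.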

The FOMC's approach is somewhat unusual in that the historical estimates are compared with qualitative assessments of how uncertainty over the forecast period may differ from the historical norm. Most FOMC participants have judged the economic outlook to be more uncertain than normal in well over half of the SEPs published since late 2007.[8] A majority has also assessed the risks to some aspect of the economic outlook to be skewed to the upside or downside in more than half of the SEPs released to date, and in many other releases a substantial minority has reported the risks as asymmetric.

These qualitative comparisons address two potential drawbacks of the error-based approach. First, the error-based approach assumes that the past is a good guide to the future. Although this assumption in one form or another underlies all statistical analyses, there is always a risk that structural changes to the economy may have altered its inherent predictability. Indeed, there is evidence of substantial changes in predictability over the past 30 years, which we discuss below. These signs of instability suggest a need to be alert to evidence of structural change and other factors that may alter the predictability of economic outcomes for better or worse. Given that structural changes are very difficult to quantify in real time, qualitative assessments can provide a practical method of recognizing these risks.

Second, estimates based on past predictive accuracy may not accurately reflect policymakers' perceptions of the uncertainty attending the current economic outlook. Under the FOMC's approach, participants report their assessments of uncertainty conditional on current economic conditions. Thus, perceptions of the magnitude of uncertainty and the risks to the outlook may change from period to period in response to specific events.[9] And while analysis by Knüppel and Schultefrankenfeld (2012) calls into question the retrospective accuracy of the judgmental assessments of asymmetric risks provided by the Bank of England, such assessments are nonetheless valuable in understanding the basis for monetary policy decisions.

Model simulations provide another way to gauge the uncertainty of the economic outlook. Given an econometric model of the economy, one can repeatedly simulate it while subjecting it to stochastic shocks of the sort experienced in the past. This approach is employed by Norges Bank to construct the fan charts reported in its quarterly Monetary Policy Report, using NEMO, a New Keynesian DSGE model of the Norwegian economy. Similarly, the staff of the Federal Reserve Board regularly use the FRB/US model to generate fan charts for the staff Tealbook forecast.[10] Using this methodology, central bank staff can approximate the entire probability distribution of possible outcomes for the economy, potentially controlling for the effects of systematic changes in monetary policy over time, the effective lower bound on nominal interest rates, and other factors. Moreover, staff economists can generate these distributions as far into the future as desired and in as much detail as the structure of the model allows. Furthermore, the model-based approach permits analysis of the sources of uncertainty and can help explain why uncertainty might change over time.
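
A minimal sketch of the mechanics is given below, with a toy first-order autoregression standing in for a full structural model such as FRB/US or NEMO; the shocks are resampled from (hypothetical) historical residuals, and percentiles across the simulated paths trace out the fan.

```python
# Minimal sketch of stochastic-simulation fan charts. A toy AR(1) stands in for a
# full structural model; all inputs below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def fan_chart(start, rho, residuals, horizons=12, n_sims=5000, bands=(0.15, 0.85)):
    """Simulate the model repeatedly, drawing shocks from past residuals, and
    return lower/upper percentile paths (a 70% band with the default settings)."""
    residuals = np.asarray(residuals, dtype=float)
    paths = np.empty((n_sims, horizons))
    for s in range(n_sims):
        x = start
        for h in range(horizons):
            x = rho * x + rng.choice(residuals)   # bootstrap a historical shock
            paths[s, h] = x
    return np.quantile(paths, bands[0], axis=0), np.quantile(paths, bands[1], axis=0)

# Hypothetical starting point and estimated residuals
lower, upper = fan_chart(start=0.5, rho=0.8, residuals=rng.normal(0, 0.4, size=80))
```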

However, the model-based approach also has its limitations. First, the estimates are specific to the model used in the analysis. If the forecaster and his or her audience are worried that the model in question is not an accurate depiction of the economy (as is always the case to some degree), they may not find its uncertainty estimates credible. Second, the model-based approach also relies on the past being a good guide to the future, in the sense that the distribution of possible outcomes is constructed by drawing from the model's set of historical shocks. Third, this methodology abstracts from both the difficulties and advantages of real-time forecasting: It tends to understate uncertainty by exploiting after-the-fact information to design and estimate the model, and it tends to overstate uncertainty by ignoring extra-model information available to forecasters at the time. Finally, implementing the model-based approach requires a specific characterization of monetary policy, such as the standard Taylor rule, and it may be difficult for policymakers to reach consensus about what policy rule (if any) would be appropriate to use in such an exercise.[11] Partly for these reasons, Wallis (1989, pp 55–56) questions whether the model-based approach really is of practical use. These concerns notwithstanding, in at least some cases model-based estimates of uncertainty are reasonably close to those generated using historical errors.[12]

A third approach to gauging uncertainty is to have forecasters provide their own judgmental estimates of the confidence intervals associated with their projections. Such an approach does not mean that forecasters generate probability estimates with no basis in empirical fact; rather, the judgmental approach simply requires the forecaster, after reviewing the available evidence, to write down his or her best guess about the distribution of risks. Some central banks combine judgment with other analyses to construct subjective fan charts that illustrate the uncertainty surrounding their outlooks. For example, such subjective fan charts have been a prominent feature of the Bank of England's Inflation Report since the mid-1990s.

Judgmental estimates might not be easy for the FOMC to implement, particularly if it were to try to emulate other central banks that release a single unified economic forecast together with a fan chart characterization of the risks to the outlook. Given the large size of the Committee and its geographical dispersion, achieving consensus on the modal outlook alone would be difficult enough, as was demonstrated in 2012 when the Committee tested the feasibility of producing a consensus forecast and concluded that, given the practical difficulties, the experiment was not worth pursuing further, at least for the time being.[13] Trying to achieve consensus on risk assessments as well would only have made the task harder. And while the FOMC needs to come to a decision on the stance of policy, it is not clear that asking it to agree on detailed features of the forecast is a valuable use of its time.

Alternatively, the FOMC could average the explicit subjective probability assessments of individual policymakers, similar to the approaches used by the Bank of Japan (until 2015), the Survey of Professional Forecasters, and the Primary Dealers Survey.[14] The relative merits of this approach compared to what the FOMC now does are unclear. Psychological studies find that subjective estimates of uncertainty are regularly too low, often by large margins, because people have a systematic bias towards overconfidence.[15] Contrary to what might be suspected, this bias is not easily overcome; overconfidence is found among experts and among survey subjects who have been thoroughly warned about it. This same phenomenon suggests that the public may well have unrealistic expectations for the accuracy of forecasts in the absence of concrete evidence to the contrary – which, as was noted earlier, is a reason for central banks to provide information on historical forecasting accuracy.
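
As a rough illustration of such an aggregation (the buckets and probabilities below are invented for the example and are not taken from any actual survey), each respondent's subjective distribution over outcome 'buckets' is simply averaged to produce a committee-level distribution.

```python
# Sketch of averaging individual subjective probability assessments over outcome
# 'buckets'. All numbers are hypothetical.
import numpy as np

buckets = ["<1.0", "1.0-1.5", "1.5-2.0", "2.0-2.5", ">2.5"]  # e.g. inflation outcomes, per cent

# One row per respondent; each row sums to 1.
participant_probs = np.array([
    [0.05, 0.20, 0.45, 0.25, 0.05],
    [0.10, 0.30, 0.40, 0.15, 0.05],
    [0.02, 0.18, 0.50, 0.25, 0.05],
])

committee_distribution = participant_probs.mean(axis=0)
for bucket, prob in zip(buckets, committee_distribution):
    print(f"{bucket}: {prob:.2f}")
```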

Footnotes

For a general review of interval estimation, see Tay and Wallis (2000). [5]

Among the other central banks employing this general approach are the European Central Bank, the Reserve Bank of Australia, the Bank of England, the Bank of Canada, and Sveriges Riksbank. For summaries of the various approaches used by central banks to gauge uncertainty, see Tulip and Wallace (2012, Appendix A) and Knüppel and Schultefrankenfeld (2012, Section 2). [6]

Knüppel (2014) also discusses the advantages of the error-based approach to gauging uncertainty as part of a study examining how best to exploit information from multiple forecasters. [7]

In all SEPs released from October 2007 through March 2013, a large majority of FOMC participants assessed the outlook for growth and the unemployment rate as materially more uncertain than would be indicated by the average accuracy of forecasts made over the previous 20 years; a somewhat smaller majority of participants on average made the same assessment regarding the outlook for inflation in all SEPs released from April 2008 through June 2012. Since mid-2013, a large majority of FOMC participants has consistently assessed the uncertainty associated with the outlook for real activity and inflation as broadly similar to that seen historically. [8]

FOMC participants, if they choose, also note specific factors influencing their assessments of uncertainty. In late 2007 and in 2008, for example, they cited unusual financial market stress as creating more uncertainty than normal about the outlook for real activity. And in March 2015, one-half of FOMC participants saw the risks to inflation as skewed to the downside, in part reflecting concerns about recent declines in indicators of expected inflation. See the Summary of Economic Projections that accompanied the release of the minutes for the October FOMC meeting in 2007; the January, April, and June FOMC meetings in 2008; and the March FOMC meeting in 2015. Aside from this information, the voting members of the FOMC also often provide a collective assessment of the risks to the economic outlook in the statement issued after the end of each meeting. [9]

The FRB/US-generated fan charts (which incorporate the zero lower bound constraint and condition on a specific monetary policy rule) are reported in the Federal Reserve Board staff's Tealbook reports on the economic outlook that are prepared for each FOMC meeting. These reports (which are publicly released with a five-year lag) can be found at www.federalreserve.gov/monetarypolicy/fomchistorical2010.htm. See Brayton, Laubach and Reifschneider (2014) for additional information on the construction of fan charts using the FRB/US model. Also, see Fair (1980, 2014) for a general discussion of this approach. [10]

Achieving agreement on this point would likely be difficult for a committee as large and diverse as the FOMC, as was demonstrated by a set of experiments carried out in 2012 to test the feasibility of constructing an explicit ‘Committee’ forecast of future economic conditions. As was noted in the minutes of the October 2012 meeting, ‘… most participants judged that, given the diversity of their views about the economy's structure and dynamics, it would be difficult for the Committee to agree on a fully specified longer-term path for monetary policy to incorporate into a quantitative consensus forecast in a timely manner, especially under present conditions in which the policy decision comprises several elements’. See www.federalreserve.gov/monetarypolicy/fomcminutes20121024.htm. [11]

For example, the width of 70 percent confidence intervals derived from stochastic simulations of the FRB/US model is similar in magnitude to that implied by the historical RMSEs reported in this paper, with the qualification that the historical errors imply somewhat more uncertainty about future outcomes for the unemployment rate and the federal funds rate, and somewhat less uncertainty about inflation. These differences aside, the message of estimates derived under either approach is clear: Uncertainty about future outcomes is considerable. [12]

See the discussion of communications regarding economic projections in the minutes of the FOMC meeting held in October 2012 (www.federalreserve.gov/monetarypolicy/files/fomcminutes20121023.pdf). Cleveland Federal Reserve Bank President Mester (2016) has recently advocated that the FOMC explore this possibility again. [13]

Under this approach, each FOMC participant would assign his or her own subjective probabilities to different outcomes for GDP growth, the unemployment rate, inflation, and the federal funds rate, where the outcomes for any specific variable would be grouped into a limited number of ‘buckets’ that would span the set of possibilities. Participants' responses would then be aggregated, yielding a probability distribution that would reflect the average view of Committee participants. [14]

See Part VI, titled ‘Overconfidence’, in Kahneman, Slovic and Tversky (1982) or, for an accessible summary, the Wikipedia (2016) entry ‘Overconfidence Effect’. [15]