RDP 1999-10: The Implications of Uncertainty for Monetary Policy

2. Sources of Forecast Uncertainty

Clements and Hendry (1994) provide a taxonomy of forecast error sources for an economic system that can be characterised as a multivariate, linear stochastic process. Such a system can generally be represented as a vector autoregressive system of linear equations. Furthermore, most models of interest to policy-makers are driven by a set of trends that are common to two or more of the variables describing the economy. In these cases, the economic models can be written as vector error-correction models.
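
As a stylised illustration (not the specific system estimated later in this paper), a first-order vector error-correction model for a vector of variables $x_t$ can be written as

\[
\Delta x_t = \mu + \alpha \beta' x_{t-1} + \Gamma \Delta x_{t-1} + \varepsilon_t , \qquad \varepsilon_t \sim \mathrm{IID}(0,\Omega),
\]

where the columns of $\beta$ contain the long-run (common-trend) relationships, $\alpha$ holds the speeds of adjustment back to those relationships, and $\Gamma$ captures the short-run dynamics.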

For these models, forecast errors can come from five distinct sources:

  1. structural shifts;
  2. model misspecification;
  3. additive shocks affecting endogenous variables;
  4. mismeasurement of the economy (data errors); and
  5. parameter estimation error.

The first source of forecasting error arises from changes in the economic system during the forecast period. The second arises if the model specification does not match the actual economy, for example because the long-run relationships or the dynamics have been incorrectly specified. Forecasting errors will also occur when unanticipated shocks hit the economic system; these shocks accumulate, so that uncertainty increases with the length of the forecast horizon. If the initial state of the economy is mismeasured, then this will cause persistent forecast errors. Finally, forecast errors may arise because finite-sample parameter estimates are random variables, subject to sampling error.
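
The accumulation of additive shocks over the forecast horizon can be seen in a deliberately simple univariate example (an AR(1) stand-in for the multivariate system above):

\[
y_t = \rho y_{t-1} + \varepsilon_t , \qquad \varepsilon_t \sim \mathrm{IID}(0,\sigma^2), \quad |\rho| < 1 .
\]

The $h$-step-ahead forecast error is $\sum_{i=0}^{h-1} \rho^i \varepsilon_{t+h-i}$, with variance $\sigma^2 (1-\rho^{2h})/(1-\rho^2)$, which rises monotonically with the horizon $h$.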

By definition, without these sources of forecast error, there would be no uncertainty attached to the forecasts generated by a particular system of equations. Recent research by Clements and Hendry (1993, 1994 and 1996) explores the relative importance of each of these sources of forecasting error. They find that structural shifts in the data-generating process, and model misspecifications that result in intercept shifts, are the most persistent sources of forecasting error in macroeconomic models.

Rather than comparing the relative importance of each of these sources of uncertainty for forecasting, this paper contributes to understanding the implications of parameter uncertainty for monetary policy decision-making. In the context of our economic model, we solve for ‘optimal policy’, explicitly taking into account uncertainty about the parameter estimates. Comparing these optimal policy responses with those that ignore parameter uncertainty allows us to draw some conclusions about the implications of parameter uncertainty for monetary policy.

Why focus only on uncertainty arising from sampling error in the parameter estimates? This question is best answered by referring to each of the remaining sources of forecast error. Firstly, structural change is the most difficult source of uncertainty to deal with because the extent to which the underlying structure of the economy is changing is virtually impossible to determine in the midst of those changes. Clements and Hendry have devoted considerable resources toward exploring the consequences of such underlying structural change. However, this type of analysis is only feasible in a controlled simulation. In the context of models that attempt to forecast actual economic data, for which the ‘true’ economic structure is not known, this type of exercise can only be performed by making assumptions about the types and magnitudes of structural shifts that are likely to impact upon the economy. With little or no information on which to base these assumptions, it is hazardous to attempt an empirical examination of their consequences for the conduct of monetary policy.[1]

Similarly, it is difficult to specify the full range of misspecifications that may occur in a model. Without being able to justify which misspecifications are possible and which are not, analysis of how these misspecifications affect the implementation of policy must be vague at best.

Turning to the third source of forecast errors, it is well known that when the policy-maker's objective function is quadratic, mean-zero shocks that affect the system in a linear (additive) fashion have no impact on optimal policy until the shocks actually occur.[2] To the extent that a linear model is a sufficiently close approximation to the actual economy, and given quadratic preferences, this implies that the knowledge that unforecastable shocks will hit the economy in the future has no effect on current policy. For this reason, the impact of random additive shocks to the economic system is ignored in this paper. Instead, we concentrate on multiplicative parameter uncertainty.
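
A one-period sketch makes the certainty-equivalence point concrete (the notation here is illustrative rather than taken from the model estimated below). Suppose the target variable is $y = a u + e$, where $u$ is the policy instrument, the multiplier $a$ is known, and $e$ is a mean-zero additive shock with variance $\sigma_e^{2}$ that is independent of $u$. With quadratic loss around a target $y^{*}$,

\[
\mathrm{E}\left[(y - y^{*})^{2}\right] = (a u - y^{*})^{2} + \sigma_e^{2} ,
\]

so the optimal setting $u^{*} = y^{*}/a$ does not depend on the distribution of $e$: additive uncertainty raises expected loss but leaves the policy response unchanged.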

It is a common assumption in many policy evaluation exercises that the current state of the economy is accurately measured and known with certainty. However, the initial state of the economy, which provides the starting point for forecasts, is often mismeasured, as reflected in subsequent data revisions. It is possible to assess the implications of this data mismeasurement for forecast uncertainty by estimating the parameters of the model while explicitly taking data revisions into account (Harvey 1989). For example, in a small model of the US economy, Orphanides (1998) calibrated the degree of information noise in the data and examined the implications for monetary policy of explicitly taking this source of uncertainty into account. Because this type of evaluation requires a complete history of preliminary and revised data, it is beyond the scope of this paper.

In light of these issues, this paper focuses on parameter estimation error as the only source of uncertainty. From a theoretical perspective, this issue has been dealt with definitively by Brainard (1967). Brainard developed a model of policy implementation in which the policy-maker is uncertain about the impact of the policy instrument on the economy. With a single policy instrument (matching the problem faced by a monetary authority, for example), Brainard showed that optimal policy is a function of both the first and second central moments characterising the model parameters.[3] Under certain assumptions about the cross-correlations between parameters, optimal policy under uncertainty was shown to be more conservative than optimal policy generated under the assumption that the true parameters are actually equal to their point estimates. However, Brainard also showed that it is possible for other assumptions about the joint distribution of the parameters to result in more active use of the policy instrument than would be observed if the policy-maker ignored parameter uncertainty.
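
The flavour of Brainard's result can be seen in a one-period sketch that extends the certainty-equivalence example above (again, the notation is illustrative, not Brainard's own). Let $y = a u + e$ as before, but now let the multiplier be uncertain, with mean $\bar{a}$ and variance $\sigma_a^{2}$, independent of the mean-zero additive shock $e$. Minimising $\mathrm{E}\left[(y - y^{*})^{2}\right]$ over $u$ gives

\[
\mathrm{E}\left[(y - y^{*})^{2}\right] = (\bar{a} u - y^{*})^{2} + \sigma_a^{2} u^{2} + \sigma_e^{2}
\quad\Longrightarrow\quad
u^{*} = \frac{\bar{a}\, y^{*}}{\bar{a}^{2} + \sigma_a^{2}} ,
\]

which is smaller in absolute value than the certainty-equivalent response $y^{*}/\bar{a}$: when the multiplier and the additive shock are uncorrelated, uncertainty about the multiplier makes policy more conservative. Allowing a non-zero correlation between $a$ and $e$, or between several uncertain parameters, can overturn this and generate the more active policy noted above.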

Resolving the implications of parameter uncertainty for monetary policy then becomes an empirical issue. To this end, the next section describes the impact of this Brainard-type uncertainty on monetary policy decision-making in a small macroeconomic model of the Australian economy.

Footnotes

However, it may be possible to exploit the work of Knight (1921) to generalise the expected utility framework for policy-makers who are not certain about the true structure and evolution of the economy. See, for example, Epstein and Wang (1994). [1]

This is the certainty equivalence result discussed, for example, in Kwakernaak and Sivan (1972). [2]

Only the first and second moments are required because of the assumption that the policy-maker has quadratic preferences. Otherwise it is necessary to maintain the assumption that the parameters are jointly normally distributed. [3]