RDP 9812: An Optimising Model for Monetary Policy Analysis: Can Habit Formation Help?

1. Introduction

With the resurgence of interest in the effects of monetary policy on the macroeconomy, led by the work of Romer and Romer (1989), Bernanke and Blinder (1992), Christiano, Eichenbaum and Evans (1994), and others, the need for a structural model that could plausibly be used for monetary policy analysis has become evident. Of course, many extant models have been used for monetary policy analysis, but many of these are perceived as having critical shortcomings. First, some models do not incorporate explicit expectations behaviour, so that changes in policy (or private) behaviour could cause shifts in reduced-form parameters (i.e. the critique of Lucas 1976). Others incorporate expectations, but derive key relationships from ad hoc behavioural assumptions, rather than from explicit optimising problems for consumers and firms.

Explicit expectations and optimising behaviour are both desirable, other things equal, in a model used for monetary policy analysis. First, analysing the potential improvements to monetary policy relative to historical policies requires a model that is stable across alternative policy regimes. This underlines the importance of explicit expectations formation.

Second, the ‘optimal’ in optimal monetary policy must ultimately refer to social welfare. Many have approximated social welfare with weighted averages of output and inflation variances, but one cannot know how good these approximations are without more explicit modelling of welfare. This of course implies that the model be closely tied to the presumed objectives of consumers and firms, hence the emphasis on optimisation-based models.

A number of recent papers (see, for example, King and Wolman (1996); McCallum and Nelson (1997, 1998); Rotemberg and Woodford (1997)) have begun to develop models that incorporate explicit expectations, optimising behaviour, and an economy in which monetary policy has real effects. However, to date, I would argue that most of these efforts have not been very successful. In essence, their failure arises from inadequate empirical validation of the restrictions imposed by the model.

It is certainly not the case that any model based on an optimisation problem with rational expectations will be a good candidate for use in monetary policy analysis. In particular, in earlier work (Fuhrer 1997a), I document what I view as the failure of simple standard optimising models to adequately mimic the dynamics found in the data for key variables. If a model fails significantly at matching these dynamics, it becomes much harder to claim that the model represents the underlying behaviour of consumers and firms. One therefore cannot trust the model's welfare rankings across alternative monetary policy strategies.

In order to identify models whose underlying assumptions reasonably approximate the objectives and decisions of consumers and firms, one must carefully test the model's implications for the dynamic evolution of key variables against the behaviour of these variables in the data. For monetary policy analysis, it is not enough to match first and second unconditional moments, or a subset of conditional moments implied by the model, to those implied by the data, as in any number of early equilibrium business cycle studies (see, for example, King and Plosser (1984); King, Plosser and Rebelo (1988); Christiano and Eichenbaum (1990)). The working assumption among most economists is that monetary policy has only short-run effects on real variables. If so, it would be a major omission not to fully evaluate the short-run dynamic effects of monetary policy in a candidate model.

Similarly, it is not sufficient in general to match a model's impulse response for a single shock to that in a reduced-form model (see, for example, Rotemberg and Woodford (1997) and my comments in the same volume). While such a procedure might be used to generate consistent parameter estimates given the model's restrictions, it does not validate the model's ability to match the dynamic behaviour of its key variables to the data. In particular, relying on the model's response to a monetary policy shock (typically a federal funds rate shock) may be quite misleading. As Leeper, Sims and Zha (1996) have shown, the fraction of the variance of output, inflation, or interest rates accounted for by the unanticipated component of monetary policy is generally quite small. By using a single impulse response to assess the validity of the model, especially a response to a monetary policy shock, one is restricting oneself to a subset of the full range of dynamic behaviours implied by the model. The use of such a metric can be quite misleading.

I advocate, and in this paper I implement, the use of likelihood-based evaluation criteria for distinguishing among models. The simple rationale is that the likelihood incorporates all of the dynamic covariances among observable variables, weighted according to their contribution to the likelihood. It should as a result be less subject to the criticisms levelled above against less formal evaluation criteria. As a graphical tool, I often compare the data's and the model's vector autocovariance functions (ACFs). To be more precise, I compare the ACF implied by an unconstrained vector autoregression with the ACF of the constrained structural models that are nested within the VAR. This often reveals important differences in model and data dynamics that underlie statistically significant differences in the likelihood. Thus, the ACF may highlight visually a behavioural deficiency in the model that is more difficult to interpret from statistical evidence.[1]
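The object being compared can be computed directly. As an illustrative sketch (the matrices below are arbitrary examples, not estimates from this paper), the vector ACF implied by a stable VAR written in first-order (companion) form y_t = A y_{t-1} + e_t satisfies a discrete Lyapunov equation at lag zero, with higher lags obtained by premultiplying powers of A:

```python
# Sketch: vector autocovariance function implied by a VAR(1).
# A higher-order VAR fits this form after stacking into companion form.
# A and Sigma are illustrative placeholders, not estimated values.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.8, 0.1],
              [0.0, 0.7]])       # VAR coefficient matrix (stable)
Sigma = np.array([[1.0, 0.2],
                  [0.2, 0.5]])   # innovation covariance

# Contemporaneous covariance solves Gamma0 = A Gamma0 A' + Sigma.
Gamma0 = solve_discrete_lyapunov(A, Sigma)

def acf(k):
    """Autocovariance at lag k: Gamma_k = A^k Gamma0."""
    return np.linalg.matrix_power(A, k) @ Gamma0

print(acf(0))
print(acf(1))
```

Computing this ACF once for the unconstrained VAR and once for the constrained structural model nested within it yields the pairs of functions compared graphically in the text.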

In the next section, I review the evidence on the deficiencies of several standard sticky-price models in the literature. One striking failure is the inability of a life-cycle model of consumption – even if augmented with Campbell–Mankiw rule-of-thumb behaviour – to adequately capture the dynamic interaction of consumption, income, and interest rates. In particular, the life-cycle model does not capture the ‘hump shaped’ response of consumption to shocks that appears to characterise the aggregate data. In Section 3, I develop a model of habit formation in consumer behaviour, based on the work of Carroll, Overland and Weil (1995), and related in spirit to the pioneering work of Duesenberry (1967). In Section 4, I examine the extent to which habit formation – one form of non-time separability in utility – can improve the dynamic behaviour of the simple model. I will argue that, because habit formation imparts a utility-based smoothing motive for both changes and levels of consumption, it significantly improves the ability of the model to match the dynamic response of consumption to shocks. Section 5 examines the response of the model during a disinflation, Section 6 examines the quality of the linear approximations used, Section 7 discusses some welfare considerations, and Section 8 concludes.
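To fix ideas, habit-forming preferences of the kind discussed in Section 3 are often written with the habit stock entering as a reference level in period utility. The following is a generic illustration in the ratio form associated with Carroll, Overland and Weil; the notation here is illustrative and the precise specification is developed in Section 3:

```latex
u(c_t, h_t) = \frac{\left( c_t / h_t^{\gamma} \right)^{1-\sigma}}{1-\sigma},
\qquad
h_t = \rho \, h_{t-1} + (1-\rho) \, c_{t-1}
```

Here $h_t$ is the habit stock, a weighted average of past consumption; $\gamma$ indexes the strength of habit (with $\gamma = 0$ recovering time-separable CRRA utility); and $\rho$ governs how quickly the reference level adjusts. Because utility depends on consumption relative to $h_t$, the consumer cares about changes in consumption as well as its level, which is the source of the smoothing motive discussed below.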


In this regard, this paper supports the conclusions of Fair (1992), i.e. a return to the Cowles Commission approach (perhaps somewhat modernised) to specification and testing of empirical macroeconometric models. [1]