RDP 8302: Economic Forecasts and their Assessment

I. Are Economic Forecasts Accurate?

(a) The issues

The most common criticism of economic forecasts is that they are usually wrong. In a literal sense this is, of course, true; forecasters cannot hope to get things right to the last decimal point. In a general sense, however, the proposition is wrong; some economic variables can be forecast to a reasonable degree of accuracy most of the time. To illustrate this point, Table 1 shows recent forecasts of real GDP by the OECD and by the large U.S. forecasting firm, Data Resources Inc (DRI).[2] These were chosen because they are, respectively, the best-known forecasters of the international economy and of the U.S. economy.

On the surface, the forecasts are astonishingly accurate. Judged by this sort of performance, it is hard to see why there should be such scepticism about the efficacy of economic forecasting. Unfortunately there is more to the story than appears from Table 1.

Table 1: Forecasts of growth of real GDP
per cent change, year-on-year
          OECD forecast of OECD area             DRI forecast of US
       Forecast made in                       Forecast made in
       previous December      Outcome         previous September     Outcome
1977        3–3/4               3.7                  5.7               5.3
1978        3–1/2               3.7                  4.6               5.0
1979        3–1/4               3.3                  2.8               2.9
1980        1                   1.3                 −0.9              −0.4
1981        1                   1.2                  1.5               1.9
  • GDP is one of the easier variables to forecast.[3] Even though economists give pride of place in their forecasting effort to GDP, and, to a lesser extent, inflation, businesses often have greater need for forecasts of more specific variables. These specific variables include components of GDP, such as housebuilding, and key prices such as interest rates, exchange rates, commodity prices etc. In these areas the forecasting record is generally much poorer.
  • These forecasts are only for one year ahead. It was no doubt useful to know in 1979 that GDP growth was going to be negligible in 1980. However, the really important thing to have known was that it was going to remain negligible for three years. Forecasters who correctly picked the 1980 turning point have been justifiably criticised for failing to see that the world was entering the longest recession in the post-war period. It has been established on numerous occasions (Zarnowitz (1967 and 1979), Christ (1975), Fromm and Klein (1976), McNees (1976), Su (1978)) that, in general, forecast accuracy declines as the forecasting horizon lengthens.[4]
  • The period shown in Table 1 is flattering to the forecasts as it did not include a major shock, i.e. an outcome for any one year outside the range of recent experience. It is possible to find one by going back a few years further. Between 1973 and 1974 GDP growth in the OECD area fell from 6.1 per cent to 0.9 per cent, by far the sharpest turnaround in the post-war period. In 1974 all the major forecasting groups failed to predict the severity of the downturn. (The OECD forecast was 3–1/2 per cent.)

This third point is the crucial criticism of economic forecasting. Economic forecasts, at least of real GDP growth, are usually quite good; they are near the mark in most years and over reasonable periods they outperform simple extrapolative methods. The problem is that, when something really large occurs, economic forecasts either fail to pick it or grossly underestimate its size.[5]

A better way of illustrating this is to look at some history. Between the first world war and the Great Depression there was a flourishing economic forecasting industry in the U.S.[6] Its failure to predict the latter event led to its demise. Forecasters that depended for revenue on the sale of their forecasts went out of business; others, such as those associated with banks, continued in spite of an equally poor performance.

In the post-war period there have also been a number of major failures by economic forecasters. The main ones were:

  • the false prediction of a recession immediately after World War II;[7]
  • the failure to predict the magnitude of the acceleration in inflation in the early seventies;
  • the failure to predict the severity of the fall in output and employment in 1974;
  • over recent years, the failure to predict the duration of the international recession (see above), and to predict the rise in interest rates (see below).

In summary, the legitimate criticism of the accuracy of economic forecasts is that they are only good at predicting the predictable. When the movements of economic variables are within the range of recently observed movements, forecasting accuracy can seem to be quite good. When movements are outside the range of recent experience, forecasts look poor. All the failures of forecasting listed above, except for the post-World War II recession, are examples of this tendency. It could be claimed that, as most years do not contain an extreme movement in an economic variable, economic forecasts are good most of the time. Unfortunately, users of economic forecasts have a disproportionate need to be alerted to the extreme movements.

It is not much comfort to have been correctly told that GDP growth was going to rise from 2–1/2 per cent to 3–1/2 per cent, if you were not told that interest rates were going to rise to an all-time record.

(b) Some Australian results

This section looks at an interesting sub-set of Australian forecasts to illustrate some of the points made above; for reasons which will be made apparent later, it is not an assessment of the relative worth of different forecasters. The sub-set of forecasts is that collected each January over the last six years by Terry McCrann of The Age. This collection has covered more than twenty forecasters in some years, twelve of whom provided forecasts for all five of the completed years analysed in this paper. The calculations in this section are confined to this constant group of twelve forecasters and the five variables shown in Diagram 1, namely, real GDP growth, the change in the CPI, the level of unemployment, the current account deficit and the bond rate. The twelve forecasters include private forecasting companies, a university-affiliated economic institution, the economic departments of trading banks, other financial institutions and a public company. The forecasts made by the Treasury and published in Budget Statement No. 2 are not included in this list as they refer to financial years.[8]

Diagram 1 shows the range of forecasts for each year for each variable, along with the outcome.[9] It was possible to compare the outcome with the range of forecasts for the five variables for five years. In these twenty-five observations the outcome was outside the range of forecasts on nine occasions or 36 per cent of the time. These are shown in Table 2.

Table 2: Occasions on which outcome was outside the range of forecasts
Variable             Year(s)
GDP                  1982
CPI                  1979
Unemployment         1981, 1982
Current account      1978, 1981
Bond rate            1979, 1980, 1981
[Diagram 1: Forecasts and Outcomes, Australia. Five panels show the range of forecasts and the outcome for each year: real gross domestic product (annual per cent change); consumer price index (percentage change over year to December quarter); unemployment (June, not seasonally adjusted); current account deficit; and Australian Government bond rate (end December).]

This is not a very impressive result, but it is, in a broad sense, consistent with the usual findings about the effectiveness of forecasting mentioned in the first section.

  • The forecasts perform better than simple extrapolation. To test this proposition, another set of forecast ranges, based on simple extrapolation, was constructed. These effectively assumed that each variable followed a random process. The forecast range for each year was thus the outcome of the previous year plus or minus the average size of changes in the series (a sketch of this construction is given after this list). When this was calculated, the resulting forecast ranges failed on thirteen occasions out of twenty-five to include the actual outcome. Especially as the ranges were about one and a half times as large as those produced by the twelve forecasters, it is reasonable to conclude that they were inferior to the actual forecast ranges. (An appendix contains details of this extrapolation.)
  • The forecasts are worst for the variable that showed the most extreme movement. For three years in a row all forecasters underestimated the bond rate. The bond rate was the only variable whose movements were outside the range of previous experience; the rise of 6.2 percentage points over a three-year period was unprecedented. It is true that the number of unemployed and the dollar value of the current account deficit were also at record levels. However, in relative terms, the movements in these variables were not exceptional.
  • Somewhat surprisingly, the forecast of the CPI was as good as that for GDP. The reason is that movements in the CPI were smaller than those for GDP over this period, and so a basically extrapolative procedure for forecasting prices worked reasonably well. The coefficient of variation of GDP was about 50 per cent against 13 per cent for the CPI. An earlier draft of this paper, covering the first four years' results, was able to conclude that the forecasts of GDP were better than those for the other variables considered; however, the recession in 1982 altered that conclusion.[10] In short, GDP and unemployment experienced a very large shock within the evaluation period while prices did not.
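To make the extrapolative benchmark in the first point concrete, the following is a minimal sketch, in Python, of the range construction described there. It is illustrative only: the function names are the author's own, the figures are made up, and the exact series and averaging period used for the calculations are those set out in the appendix.

```python
# A minimal sketch of the extrapolative benchmark: the forecast range for a year
# is the previous year's outcome plus or minus the average absolute year-to-year
# change in the series.  All figures below are hypothetical.

def naive_range(previous_outcome, past_values):
    """Forecast range for the coming year, given a history of annual outcomes."""
    changes = [abs(b - a) for a, b in zip(past_values, past_values[1:])]
    avg_change = sum(changes) / len(changes)
    return previous_outcome - avg_change, previous_outcome + avg_change

def outside_range(outcome, forecast_range):
    """True if the outcome falls outside the forecast range."""
    low, high = forecast_range
    return outcome < low or outcome > high

# Illustrative use with made-up figures for annual GDP growth (per cent):
history = [4.1, 2.3, 3.0, 1.8]               # hypothetical past outcomes
rng = naive_range(history[-1], history)      # range for the coming year
print(f"naive range: {rng[0]:.1f} to {rng[1]:.1f} per cent")
print("outcome outside range:", outside_range(0.2, rng))  # hypothetical outcome
```

Counting, over all variables and years, the occasions on which the outcome falls outside such ranges gives the thirteen-out-of-twenty-five figure quoted above for the naive benchmark, against nine out of twenty-five for the actual forecast ranges.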

Footnotes

This five-year period was chosen as it was used by Caton (1982) to show DRI results which he said “should sustain (him) at least through (his) next two bouts of scepticism of all forecasting methods”. The OECD results were from the December issues of the “Economic Outlook”. [2]

Real GDP is generally forecast better than the other major macro-economic variable – the inflation rate. The coefficient of variation of OECD forecasts for inflation is about 1–1/2 times larger than for GDP. Caton (1982) says of the DRI inflation forecasts over 1977 to 1981 that they are “uninspired, marginally outperforming the naive model, which they closely resemble”. Zarnowitz (1979) and Daub (1981) also found that inflation forecasts were hardly distinguishable from extrapolations. [3]

One neglected reason for this is that forecasters become excessively cautious when they extend the forecasting period. They are often prepared to forecast extreme outcomes (i.e. large rises or falls in economic variables) in the near future, but are reluctant to do so further ahead. Most quarterly or half-yearly forecasts incorporate "a return to normality" after about six months or a year (this can be verified for OECD half-yearly forecasts). This is because extreme forecasts are hard to defend. In the case of forecasts made for only six months or a year ahead, it is often possible to defend them by reference to current conditions, leading indicators, anticipations data etc. Forecasts out beyond a year cannot rely on such information for their defence. [4]

The first of these points – that economic forecasts are better than simple extrapolation – has been made by Zarnowitz (1967, 1978), Mincer and Zarnowitz (1969), Christ (1975) and Shapiro and Garman (1981). The second point, namely that forecasts are worst when large changes are occurring, has been documented by Zarnowitz (1979) and Shapiro and Garman (1981). [5]

By 1927 there were more than half a dozen commercial forecasting services with a national clientele. The academic world was also represented e.g. by the Harvard Economic Service and Irving Fisher who published an annual forecast in the American Economic Review. Systematic assessments of forecasting accuracy were also carried out. Some of these are described in Shapiro and Garman (1981). [6]

See Sapir (1949) and Zarnowitz (1978). [7]

Pagan et al. (September 1982) point out that many private forecasters "seem to adjust (their forecasts) towards those given in the Budget". This would suggest that the Treasury forecast would end up in the middle of the range of forecasts by January, even if it had not been moved. As all calculations in this section are based on ranges of forecasts, the omission of the Treasury forecast would not be likely to lead to major changes in the conclusions. [8]

The outcomes used throughout this paper are the latest estimates available rather than the first estimates. Although it is possible to argue for either, the prevailing view is that "the main object of forecasting is to anticipate what will actually happen in the economy rather than what the data source agencies, on the basis of incomplete information, initially estimated had happened" (McNees (1981b)). Others have taken a different view. Surprisingly, it makes little difference which estimate of the outcome is used for comparison. In fact some studies have found that forecasts have been closer to the final outcomes than to the preliminary ones. [9]

Note that the specification of some of the forecast variables made it easier for the forecasters in 1982 than recent events might suggest. For example:
  • Unemployment had to be forecast for mid-year but the big shakeout did not occur until the second half of the year. Although the June outcome was slightly above the top of the forecast range, the end-year outcome was well above it;
  • The bond rate had to be forecast for the end of the year. By mid-year it had risen to an all-time high which was well above the top of the forecast range. It was not until December that the bond rate fell back into the forecast range.
[10]