Remarks at the Walkley Business Journalism Award

Thank you very much for the invitation to be here. The financial media is very important to the Reserve Bank, playing a critical role in helping us communicate our message to the public.

Today I am going to talk about the challenges that uncertainty poses for journalists as well as policymakers. I will draw on Charles Manski, Nate Silver and Philip Tetlock, and repeat the warning of the first of these: ‘Beware the Lure of Incredible Certitude’.[1]

What does Manski mean? He means we should avoid presenting point predictions and estimates without conveying any sense of the uncertainty around them. This is hard because, as Manski notes, ‘the public, impatient for solutions to its pressing concerns, rewards those who offer simple analyses leading to unequivocal policy recommendations’.

Should we be giving the people what they want? Can they handle the truth? Or in this case, can they handle the fact that we don't really know the whole truth? To channel Mulder and Scully, the truth is out there, but its whereabouts are unknown.

We are always making decisions under uncertainty. Uncertainty is unavoidable. Sometimes the uncertainty isn't that important and we can ignore it. But a lot of the time, it is important and we can't ignore it.

So what do we do? We estimate. As Philip Tetlock said in his great book Superforecasting, ‘estimating is what you do when you do not know’.

Given we are in a world of probabilistic statements, how do we assess them? How do we hold those who make these statements to account?

Probabilistic statements should be falsifiable and verifiable. They need a time frame and they need some numbers. They should avoid phrases like ‘x has a significant chance of happening’ or ‘there is a possibility of x happening’. ‘Significant’ and ‘possible’ can mean anything. Pin people down; get them to commit!

If instead we have a statement such as ‘x has a one in three chance of happening in the next two years’, we can work out that it is less likely to occur than something with a 50/50 chance of happening. But if the event does actually happen in the specified time frame, which forecast was more accurate? A single outcome can't tell you: the one-in-three forecast is not wrong just because the event occurred. In the end, track records built up over time can help you sort out luck from skill.
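One standard way to keep score over a track record is the Brier score: the mean squared gap between stated probabilities and what actually happened, lower being better. Here is a minimal sketch in Python; the forecasts and outcomes are invented purely for illustration, not drawn from any real forecaster.

```python
# Minimal sketch: scoring probabilistic forecasts with the Brier score
# (lower is better). All forecasts and outcomes here are invented.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and realised outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Forecaster A always hedges at 50/50; forecaster B commits to one in three.
forecasts_a = [0.5] * 6
forecasts_b = [1 / 3] * 6
outcomes = [0, 1, 0, 0, 1, 0]  # the event happened twice in six trials

print(f"A (50/50):        {brier_score(forecasts_a, outcomes):.3f}")  # 0.250
print(f"B (one in three): {brier_score(forecasts_b, outcomes):.3f}")  # 0.222
```

No single trial settles who was more accurate, but over six trials in which the event occurred a third of the time, the one-in-three forecaster scores better than the fence-sitter.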

Time frames help here too. Beware the ‘I told you so’ forecaster: put a prediction out there without any expiry date and then, many years down the track when the event finally happens, claim ‘I told you so’. Pick the one-in-100 event that no one saw coming. Was the forecaster really good? Or were they the monkey that wrote Hamlet?

Closely related to ‘I told you so’ is ‘just wait, it's still coming’. Tetlock highlights an example from 2012 when a group of luminaries wrote a petition warning that quantitative easing (QE) in the US would generate high inflation and currency debasement. Six years down the track, inflation in the US is struggling to stay above 2 per cent. But without a time limit on the prediction, they might one day claim to be right. To be useful, a prediction needs a time frame and a probability of occurrence. The best Australian example of this might be a housing market crash.

Ideally, the probability of something occurring gets updated through time as new information comes to hand, so you can say whether it is getting more or less likely. This is the Bayesian approach that underpins the predictions of Nate Silver and the FiveThirtyEight team, described in Silver's book The Signal and the Noise: continually update your assessment of likely outcomes as new information comes in.
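To make that updating step concrete, here is a minimal sketch of Bayes' rule in Python. The prior and the likelihoods are invented for illustration; nothing here comes from Silver's actual models.

```python
# Minimal sketch of Bayesian updating (all numbers invented).
# p is the current probability that the event will occur; each new piece
# of evidence has a likelihood under 'event' versus 'no event'.

def bayes_update(p, likelihood_if_true, likelihood_if_false):
    """Revise probability p after observing one piece of evidence."""
    numerator = likelihood_if_true * p
    return numerator / (numerator + likelihood_if_false * (1 - p))

p = 1 / 3  # starting assessment: a one-in-three chance

# Evidence twice as likely if the event is coming pushes the odds up ...
p = bayes_update(p, likelihood_if_true=0.6, likelihood_if_false=0.3)
print(f"after supportive evidence: {p:.2f}")  # 0.50

# ... and evidence pointing the other way pulls them back down.
p = bayes_update(p, likelihood_if_true=0.2, likelihood_if_false=0.4)
print(f"after contrary evidence:   {p:.2f}")  # 0.33
```

The point is the discipline, not the arithmetic: each new piece of information should move your stated probability, in a direction and by an amount you can defend.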

Journalists (and policymakers) need to filter and clarify. Filtering all the predictions out there is hard. Clarifying what they actually mean is hard. How do you assess the validity of the claims? Should you assess them or just report them? How much editorial input should you bring to bear?

What is the motivation of those making the claims? Are they rewarded by the headline or the click? Are you reporting the story that will deliver the best headline, the most quotable quote or are you reporting the more boring, more likely outcome?

To give an example, often the most accurate statement about why the stock market rose today is that it occurred for any one of a hundred different reasons, or for a mix of all of them.[2] Or my favourite explanation: because there were more buyers than sellers. Not as punchy as ‘the market rose today because Carlton won the premiership’.

All that said, people need to be informed of the tail outcomes, but not bombarded with them. I know that bad news sells, but continual reporting of possible tail events conveys the sense that these events are more likely than they really are. Downside risks are easy to articulate, particularly dramatic ones, even if they are very low probability. I'm not arguing that we should all be Pollyannas, but at the same time we seem to have a surplus of Cassandras. There are so many black swans identified each day that we may as well all be living in Perth.

I will finish by using the employment numbers as an example of the challenge of reporting economic data accurately. Each month the point estimates are dutifully reported. But that's what they are: point estimates. The ABS publishes a band of uncertainty around those estimates, which is ±30,000. That is not so large relative to the level of employment, but it is large relative to the typical monthly change. Do you report the full range of this uncertainty? On this, Manski cites LBJ: ‘Ranges are for cattle. Give me a number’.
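A stylised illustration of why that band matters for the monthly changes: the +5,000 headline figure below is invented, and only the ±30,000 band comes from the discussion above.

```python
# Stylised illustration (the reported change is invented; the +/-30,000
# band is the one quoted above). A point estimate well inside the band
# is statistically indistinguishable from zero, or even from a fall.

reported_change = 5_000   # hypothetical headline: 'employment rose 5,000'
band = 30_000             # uncertainty band around the estimate

low, high = reported_change - band, reported_change + band
print(f"Headline: {reported_change:+,}")
print(f"Range consistent with the data: {low:+,} to {high:+,}")
# Range consistent with the data: -25,000 to +35,000
```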

So what is the signal and what is the noise in each monthly release? You need to filter and clarify. Maybe use the trend, though trends are generally slow to pick turning points, just like a lot of forecasters. You can bring other information to bear, cross-validate and check for consistency.
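As a minimal sketch of one such filter, here is a simple moving average over an invented series of monthly changes. It smooths the month-to-month noise but, true to the caveat above, it lags at turning points.

```python
# Minimal sketch of a trend filter: a simple moving average over
# invented monthly changes (in thousands). Smooths noise, lags turns.

def moving_average(series, window=3):
    """Average of the last `window` observations at each point."""
    return [
        sum(series[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(series))
    ]

monthly_changes = [40, -10, 35, 5, -20, 30, 45, 0]  # invented
print([round(x, 1) for x in moving_average(monthly_changes)])
# [21.7, 10.0, 6.7, 5.0, 18.3, 25.0]
```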

So, yes, it is a challenge for business journalists to convey uncertainty to the public, just as it is a challenge for us policymakers to make decisions under uncertainty. But it is a challenge neither of us can ignore.

Endnotes

Manski C (2018), ‘The Lure of Incredible Certitude’, NBER Working Paper No 24905, available at <http://www.nber.org/papers/w24905>; Silver N (2012), The Signal and the Noise, Penguin; Tetlock P and D Gardner (2015), Superforecasting: The Art and Science of Prediction, Crown Publishing, New York. [1]

Tetlock and Gardner (2015), p 57. [2]