In this interview, Tim Harford, author of The Undercover Economist and senior columnist at the Financial Times, explains why Vote Leave’s bogus £350m NHS claim was so effective, why it’s so difficult to predict stock prices and why early researchers often get too much credit.

How was Brexit affected by misleading statistics?

I think the debate was derailed by endless discussion of the lie on the bus – the demonstrably untrue assertion that the UK sent £350m a week to the EU. Andrew Lilico, one of the few economists campaigning for Leave, told me during the campaign that he would have preferred to use a smaller number. When we discussed it after the vote, though, he reflected that the larger number had been politically effective, simply because the Remain campaign had devoted so much energy to rebutting it.

It’s something to reflect on for anyone interested in the practice of fact-checking: simply rebutting untrue claims may be unhelpful or even counter-productive. We need to be thoughtful in the way we do it.

Do you believe that enough statistical research can turn hypotheses into facts? Or will there always be doubt?

There will always be doubt but we need to bear in mind that doubt has been systematically used as a weapon against expertise. Arguably this started with the cigarette companies, who employed Darrell Huff – author of the famous How To Lie With Statistics – to chip away at the idea that the epidemiologists knew what they were doing. Then we saw the same tactic used against climate science, and now we see it from populist politicians.

More research is always needed, the world is always complicated, and experts often get things wrong – but we need to be careful not to let healthy scepticism turn into corrosive cynicism.

In your opinion, what can and can’t be predicted? For example, do you think it is possible to predict the weather? Or the consequences of Brexit? Or Apple’s stock price a month from now?

These are very different projects. The weather is a complex system, but with more data and more powerful computers you can have a go at forecasting a few days out, with some demonstrable success. (It helps that weather forecasters get a lot of quick feedback.)

Brexit is simply complicated: we’re predicting the impact of an as-yet-unknown political agreement on an already-complex economy. I think we can make some reasonable conditional forecasts – the economic equivalent of “if you eat a lot of doughnuts you will probably put on weight” – but that’s about it. And if someone eats doughnuts despite our advice, there’s always room for them to deny that the doughnuts are making any difference.

As for Apple’s stock price, a stock price is a prediction of the discounted value of future cash flows. So you’re asking for a prediction of a prediction; that’s intrinsically very hard, because the prediction influences itself. Storms don’t come because we predict storms, but shares can soar or fall because we think they will.
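To see why in textbook terms: a share price is, roughly, the sum of discounted expected future cash flows,

$$P_0 = \sum_{t=1}^{\infty} \frac{\mathbb{E}[CF_t]}{(1+r)^t},$$

where the expected cash flows $\mathbb{E}[CF_t]$ and the discount rate $r$ are themselves forecasts (this is the standard discounted-cash-flow sketch, with illustrative notation). Change what the market believes about either and the price moves immediately – which is exactly the self-referential loop that makes a month-ahead prediction so hard.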

What are some pitfalls that statisticians should be aware of when doing research?

One difficult balancing act is to reflect the history of a claim.

On one hand, we sometimes give the early researchers too much credit. There’s a lot of social psychology at the moment based on fairly small samples with noisy measures that is proving hard to replicate, yet remains quite sticky. We seem reluctant to dismiss a small, noisy study on the basis of a larger, later study, unless the evidence is overwhelming. If the small noisy study had come later, we’d ignore it.
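A minimal simulation makes the point, assuming a modest true effect, a simple two-group z-test, and a literature in which only statistically significant results get noticed (all three assumptions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2   # modest real effect, in standard-deviation units
n_sims = 10_000     # simulated studies per sample size

def simulate(n):
    """Simulate two-group studies with n subjects per arm; return each
    study's effect estimate and whether it clears z > 1.96."""
    treat = rng.normal(true_effect, 1.0, size=(n_sims, n))
    control = rng.normal(0.0, 1.0, size=(n_sims, n))
    est = treat.mean(axis=1) - control.mean(axis=1)
    se = np.sqrt(2.0 / n)          # known-variance standard error
    return est, est / se > 1.96    # one-sided significance filter

for label, n in [("small study (n=20)", 20), ("large study (n=500)", 500)]:
    est, sig = simulate(n)
    print(f"{label}: mean significant effect = {est[sig].mean():.2f} "
          f"(truth = {true_effect})")
```

With 20 subjects per arm, only estimates far above the truth clear the significance bar, so the “significant” small studies report an effect several times larger than the truth; the large studies barely overstate it. That inflation is baked into any small, noisy study that survived a significance filter, whenever it was published.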

And physicists have found that errors in estimating physical constants – such as the charge on an electron – tend to persist. A more accurate experiment comes along, but people are reluctant to completely dismiss the earlier work.

On the other hand, if there’s a huge body of work out there and some crazy new study grabs all the headlines, the simple heuristic should be that if it’s astonishing, it’s probably wrong. This is not an easy balance to strike but we need to try.
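One way to sharpen “if it’s astonishing, it’s probably wrong” is Bayes’ rule. Suppose, purely for illustration, that an astonishing claim has a 5% prior chance of being true, and that a typical study has 80% power and a 5% false-positive rate. Even a statistically significant result in its favour then gives

$$P(\text{true} \mid \text{significant}) = \frac{0.80 \times 0.05}{0.80 \times 0.05 + 0.05 \times 0.95} \approx 0.46,$$

so the claim remains more likely false than true. It takes replication, not one headline-grabbing study, to overturn a large body of evidence.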

Do you think statistics should be taught differently to how it is today?

I think we should think more about the way statistics are communicated. This is an old argument – Florence Nightingale and William Farr debated how her famous Coxcomb diagrams should be presented. Farr said that “statistics should be the dryest of all reading”; Nightingale believed that statistics needed to be presented with impact if they were to make a positive difference. She was right – but of course statistics can be presented with misleading impact, too.
