The Guardian - UK
Sam Glover

The Big Idea: can you learn to predict the future?

Illustration: Elia Barbieri/The Guardian

From Nostradamus to Paul the “psychic” octopus, who supposedly foresaw the results of World Cup matches, there has been no shortage of people who argue they – or their animals – are able to predict the future. In most cases it’s easy to dismiss such claims, be they incredibly vague, biblical-sounding prophecies (as with Nostradamus) or slippery coincidences (as with Paul).

But are there any people who actually can tell us what’s going to happen? We do, after all, look to academics or well-known political pundits to help us make sense of the world. If we want to know what’s coming down the line in Ukraine, for example, we might ask someone who has studied the Russian armed forces, or perhaps a foreign policy guru. For the outlook on inflation in 2023, we might go to an economist. What’s surprising is that the evidence tells us academics and commentators don’t, in fact, predict particularly well.

In the mid-1980s, political scientist Philip Tetlock decided to put experts’ predictions to the test. He recruited hundreds of academics and pundits who spent their lives thinking about politics, and signed them up to “forecasting tournaments”. They turned their minds to questions such as how long the Soviet Union might last, or who would win the next presidential election, estimating the probability of each outcome. For example, someone might say that there was a 30% chance that the Soviet Union would collapse before 1990. Over time, these forecasts were tested against reality to see exactly how accurate they were, and it turned out the experts just weren’t that good at anticipating events. Many of them performed about as well as someone guessing completely randomly. A few managed to beat the metaphorical dart-throwing chimp, albeit only by a small margin.
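
How were these forecasts “tested against reality”? Tournaments like Tetlock’s typically use the Brier score: the mean squared difference between the probability you stated and what actually happened (1 if the event occurred, 0 if it didn’t). Lower is better, and always answering “50%” scores 0.25, the dart-throwing baseline. A minimal sketch in Python, with invented forecasts and outcomes for illustration:

    def brier_score(forecasts, outcomes):
        """Mean squared difference between probabilities and 0/1 outcomes; lower is better."""
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    # Invented example: "30% chance the Soviet Union collapses before 1990", and so on.
    forecasts = [0.30, 0.85, 0.60]
    outcomes = [0, 1, 1]  # 1 = the event happened, 0 = it didn't

    print(brier_score(forecasts, outcomes))        # ~0.091, reasonably accurate
    print(brier_score([0.5, 0.5, 0.5], outcomes))  # 0.25, pure guessing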

Crucially, it wasn’t just experts that Tetlock signed up to the forecasting tournaments. He also placed adverts aimed simply at curious individuals interested in predicting the future. In the first year, 3,200 people signed up. After the tournament had been running for a while, he implemented an algorithm designed to give the predictions of the most accurate forecasters extra weight. He also “extremised” the forecasts, pushing the probabilities assigned closer to 100% or 0%. Helped by the algorithm, the ordinary people who’d replied to adverts ended up producing better forecasts than intelligence analysts who had access to classified information, and much better ones than academics and political pundits. The individuals with the best track records were anointed “superforecasters”, and they continued to trounce others involved in the competition.
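
The article doesn’t spell the algorithm out, but one published approach from Tetlock’s research programme works roughly like this: take a weighted average of the individual probabilities, giving heavier weights to forecasters with better track records, then “extremise” the result with the transform p^a / (p^a + (1 − p)^a), which leaves 50% untouched and pushes everything else towards 0% or 100% when a > 1. A sketch under those assumptions; the weights and the exponent here are illustrative choices, not the tournament’s actual values:

    def aggregate(probs, weights, a=2.0):
        """Weighted average of probabilities, then 'extremised' towards 0% or 100%."""
        p = sum(w * q for w, q in zip(weights, probs)) / sum(weights)
        # For a > 1 this pushes p away from 0.5; a = 1 leaves it unchanged.
        return p ** a / (p ** a + (1 - p) ** a)

    # Three forecasters lean the same way; the best track record gets triple weight.
    print(aggregate([0.6, 0.7, 0.8], weights=[1, 1, 3]))  # ~0.89, vs an unweighted mean of 0.7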

What made them so good? Above all, the superforecasters seemed almost immune to the biases that skewed the predictions of others. One of those biases is “scope insensitivity”, made famous by the Nobel prize-winning psychologist Daniel Kahneman. Imagine you’re asked how likely it is that Keir Starmer will still be leader of the opposition in one year’s time. What seems like a reasonable forecast to you? Is there a 90% chance? 85%, maybe?

Now imagine that you’d been asked about two years hence, rather than one. Would your answer have been any different? If not, you’re probably guilty of scope insensitivity, which is to say that you tend to give the same answers to questions that are superficially similar, but actually require quite different calculations. Most people aren’t very “scope sensitive”, but superforecasters are. They also seem less prone to other cognitive distortions such as confirmation bias and overconfidence – all of which allows them to make better predictions.
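
To see why the horizon should change the number, suppose (my figure, not the article’s) that Starmer has a 90% chance of still being leader after one year, and that each further year is roughly as hazardous as the first. Then the two-year answer should be about 0.9 × 0.9 = 81%, not 90% again:

    p_one_year = 0.90  # assumed one-year probability, purely for illustration
    for years in (1, 2, 3):
        # Constant-hazard assumption: each extra year multiplies the probability by the same factor.
        print(years, round(p_one_year ** years, 3))  # 0.9, 0.81, 0.729

Longer horizons demand lower probabilities; giving the same answer to both questions is exactly the insensitivity a scope-sensitive forecaster avoids.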

So are these uncanny abilities something you’re born with, or could anyone become a superforecaster with enough effort? The answer is, somewhat annoyingly, a bit of both. It’s true the best forecasters have characteristics that many seem to lack. Think about the following question: “If it takes five machines five minutes to make five widgets, how long would it take 100 machines to make 100 widgets?” If you answered 100 minutes, I’m sorry to say that you just failed a typical cognitive reflection test, designed to weed out those who go with their gut rather than thinking more carefully. The correct answer is five minutes: each machine takes five minutes to make one widget, so 100 machines turn out 100 widgets in the same five minutes. Most superforecasters get it without any trouble. They’re the kind who hear a question and immediately start thinking about why the seemingly obvious answer is likely to be wrong.

There are, however, ways to improve your prediction skills. A training programme created by Tetlock increased the accuracy of novice forecasters by about 10%. It involves learning classic forecasting techniques such as focusing on the “base-rate”. Suppose we think about what a good prediction would have been for the Batley and Spen byelection in 2021. This was an election in which many thought the Conservatives stood a good chance of taking the seat from Labour: on polling day you could get 6/1 odds of Labour holding it, which implies a probability of only 14%. In fact, Labour did manage to win. This ought not to have been surprising: since 2010, there had been 25 byelections in Labour-held seats, with Labour winning 23. That gives you a “base-rate” of 92%, a far cry from the 14% implied by the odds.
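
Both figures in that example are simple arithmetic: fractional odds of 6/1 imply a probability of stake ÷ (stake + winnings) = 1/7 ≈ 14%, and the base-rate is just 23 wins out of 25 comparable byelections:

    def implied_probability(odds_against, odds_for=1):
        """Convert fractional betting odds (e.g. 6/1) into an implied probability."""
        return odds_for / (odds_against + odds_for)

    print(implied_probability(6))  # 0.142857... ~14% chance of a Labour hold
    print(23 / 25)                 # 0.92, the base-rate from 25 byelections since 2010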

This is all very interesting, but can it make a difference in the real world? The British government is betting that it can. Since April 2020, civil servants taking part in the “Cosmic Bazaar”, one of the world’s biggest forecasting tournaments, have been making forecasts on everything from Covid infection rates to the chance that China will invade Taiwan. In September 2021, a US official confirmed that the United States is also looking at setting up a similar platform in a bid to improve intelligence analysis. Elsewhere, NGOs have been working with superforecasters and early-warning experts to anticipate humanitarian crises around the world, putting them in a better position to respond rapidly. While we don’t know how influential these attempts might become (nobody has got round to forecasting that yet!), it’s clear that predicting the future has the potential to become more of a science than an art – and we no longer need to leave it in the hands of astrologers or octopuses.

Sam Glover writes about social science, politics and philosophy at samstack.io

Further Reading

Superforecasting by Philip Tetlock and Dan Gardner (Random House, £9.99)

Noise by Daniel Kahneman, Olivier Sibony and Cass Sunstein (HarperCollins, £10.99)

Thinking in Bets by Annie Duke (Portfolio, £12.17)
