Every major opinion poll failed to predict Scott Morrison’s re-election in 2019, yet news organisations have once again run numerous poll-based stories in the current campaign, including some pointing to dramatic results nationally and in specific seats.
The 2019 result sent shock waves through the polling industry, and kicked off a period of reflection, innovation and transparency.
Pollsters now promise greater rigour, and have deployed some new techniques, but they have also urged voters to think differently about what polls can tell them.
So what weight can we place on the opinion polls as the election approaches?
What went wrong in 2019?
Murray Goot, an emeritus professor of politics and a leading polling expert, believes one problem was that the polling companies herded together behind a Labor victory as “the risk of being the lone fool was much greater than being one of many fools”.
An inquiry into the performance of the polls in 2019 found the errors were not the result of a last-minute shift in voter sentiment, nor of voters deliberately misleading pollsters, but that the polls overestimated Labor’s vote because their samples were “unrepresentative and inadequately adjusted”.
Most polls published by news outlets rely on online surveys, with the exception of Roy Morgan and Ipsos, which also incorporate telephone interviews. Pollsters ask respondents about their voting intention and past voting patterns, as well as demographic and socioeconomic information.
Many of these companies pay respondents for completing the online surveys – so if you’re wondering whether you’ve ever been polled for one of the newspapers, you would probably remember: you would have been paid for it.
Also in the mix is so-called “robopolling”, more commonly used in seat- and topic-specific polling from groups such as uComms. These are the phone calls you get from a random number, with an automated voice listing the voting options and asking which demographic categories you fall into. One industry source described this method as “cheap and cheerful”.
Because the companies conducting this polling in 2019 relied on lists of voters’ landline numbers, there were concerns that the method skewed towards older respondents, and that the data therefore did not accurately reflect the diversity of the voting pool.
Social researcher Rebecca Huntley says one of the problems in 2019 was that polling seemed to confirm what people already believed.
“There was an accepted wisdom that the Labor party was going to win and it seemed to be confirmed by polling and a Liberal party in disarray,” Huntley says.
But, she says, “the problems with polling in other democracies eventually crept up on Australia”.
How have pollsters tried to fix it?
In response to the 2019 failures, the Australian Polling Council was established, with major players adhering to a code of conduct and agreeing to make their methodologies public – with the exception of Resolve, which is not a member of the council.
National political polls published by newspapers have entirely moved away from robopolling, but otherwise the various companies have made different adjustments.
Goot says there is now “a very big spread in methodologies”, particularly in how polls try to gauge voter intention.
Peter Lewis, the executive director of Essential Media – which conducts polling published by Guardian Australia – agrees. “It’s a whole lot less monolithic now.”
An example of this differentiation is the way undecided voters are surveyed.
Most polls offer respondents some way of answering that they don’t know who they will vote for, with the exception of Resolve, which requires respondents to pick a candidate. While this is designed to simulate the decision they will have to make on election day, some observers question whether it accurately captures undecided voter sentiment: because respondents are only paid if they complete the survey, some may simply pick an option at random rather than abandon it.
Some polls ask undecided voters a secondary question on how they are leaning, while others, such as the Guardian’s Essential poll, allow a respondent to complete the survey without making a choice.
Essential polls reported by Guardian Australia no longer include undecided voters in a two-party-preferred score that adds up to 100, instead recording the parties’ share as, for example, 49% to 45%.
The companies have also sought to account for other factors that may lead to errors, such as the underrepresentation of voters for minor parties (who more commonly decline to take part in polling), and assumptions about how preferences flow.
Some polls have brought in quotas based on different demographics, such as socio-economic status, that they incorporate into samples.
One of the latest innovations is YouGov’s “multi-level regression with post-stratification” (MRP). The results of the technique, based on a survey of about 19,000 voters and published by News Corp, have given an insight into voting intentions in each of Australia’s 151 seats – a much more ambitious snapshot than other polls offer.
The Australian newspaper reported on Wednesday that the poll showed Labor would win 80 seats, giving it an outright majority, and that the Liberals were on course to lose Goldstein, Kooyong, Chisholm and Higgins in Melbourne, and Reid, Robertson, Lindsay and Bennelong in New South Wales.
Goot says MRP is not solely polling, but a predictive model that relies on the sophisticated use of survey and demographic data about the “nature of the seat”.
“While 19,000 respondents is ridiculously small for seat-by-seat predictions if you divide by 151 electorates, the MRP model gets a lot of data about each respondent,” he says.
The model then extrapolates, estimating how prevalent each type of respondent’s sentiment is within each electorate, based on 2016 census data and more recent data from the Australian Bureau of Statistics on factors such as home ownership, education level and religion.
MRP rose to prominence in the UK in 2017, and Goot says it has performed well there after an uncertain start. This is the first time the technique has been used in Australia.
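To make the idea concrete, here is a minimal sketch of the post-stratification step, using invented support figures and made-up demographic cells rather than anything from YouGov’s actual model. A real MRP first fits a multilevel regression to estimate support within each demographic “cell”; the sketch below hard-codes those cell estimates and shows how the same national model can produce different estimates for differently composed seats.

```python
# A minimal, illustrative sketch of the post-stratification step in MRP,
# using invented figures. A real MRP model would first fit a multilevel
# regression to estimate support within each demographic "cell"; here those
# cell estimates are simply hard-coded.

# Hypothetical Labor two-party-preferred support within each demographic cell,
# as a pooled national model might estimate.
cell_support = {
    ("renter", "degree"): 0.62,
    ("renter", "no_degree"): 0.55,
    ("owner", "degree"): 0.52,
    ("owner", "no_degree"): 0.44,
}

# Share of each cell in two hypothetical electorates, as census data might
# describe them (each electorate's shares sum to 1).
electorate_composition = {
    "Inner-city seat": {
        ("renter", "degree"): 0.40,
        ("renter", "no_degree"): 0.20,
        ("owner", "degree"): 0.25,
        ("owner", "no_degree"): 0.15,
    },
    "Outer-suburban seat": {
        ("renter", "degree"): 0.10,
        ("renter", "no_degree"): 0.20,
        ("owner", "degree"): 0.20,
        ("owner", "no_degree"): 0.50,
    },
}

for seat, composition in electorate_composition.items():
    # Post-stratify: weight each cell's estimated support by that cell's
    # share of the electorate's population.
    estimate = sum(share * cell_support[cell] for cell, share in composition.items())
    print(f"{seat}: estimated Labor 2PP {estimate:.1%}")
```

Run on these invented numbers, the same national cell estimates yield roughly 55% in the renter-heavy inner-city seat and under 50% in the outer-suburban one – the seat-level figure comes from the census-derived composition, not from a large sample in that seat.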
Huntley agrees there have been improvements, including the establishment of the polling council, greater transparency about questions and methods, and new methodology (such as MRP), but still sounds a note of caution.
“These are some of the improvements, but we still need to come back to the fact that it is not definitive. We always have to have a modest approach to people’s expectations of polling.”
“It may be that the result is largely what the polls said it would be – but that does not mean that polling is a crystal ball. It just means we did the best with the tools that we have.”
Still, Huntley says, polls are a useful tool, particularly when combined with qualitative research, and in marginal seats.
“Regardless of the [polling] results, it is unlikely, based on my qualitative research, that Scott Morrison is going to be returned as prime minister because of the palpable dislike of him,” she says.
Should you trust single-seat polls?
Experts broadly believe that nationwide two-party-preferred polling is a more reliable predictor of the election outcome and that individual seat polls can be fraught.
Kevin Bonham, an electoral studies and scientific research consultant, says a constant problem for seat-specific polls is “demographic churn”, especially in inner-city seats, where there are “a lot of transient votes”.
“They have a long history of being very badly polled,” Bonham says.
Further complicating the reliability of individual seat polling is the involvement of independents, Bonham says.
“Their support often snowballs towards the end of the campaign.”
Will the outcomes be more robust this time?
We can’t be sure, but the onus is partly on the public to know how to read the polls, Bonham says.
“People don’t realise polls are snapshots, not forecasts – they’ve got predictive value but they change,” he says.
“The only thing that is certain is that things will happen that can’t be predicted by the polls.”
One other certainty is that the pollsters will be awaiting the election outcome even more anxiously than most voters – particularly those who have made confident claims about their improved methodologies and new techniques such as MRP.
“Pollsters this time around are terrified of getting the wrong result,” Goot says.
“This election will really be a bit of a test as to which model is most accurate. A lot will be learned after election day.”
Why two-party-preferred figures don’t indicate a win
The two-party-preferred (2PP) figure distributes minor-party and independent preferences between the two major parties to show which is ahead – for example, Labor on 52% and the Coalition on 48% – with the two shares adding up to 100%.
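As a purely illustrative example with invented numbers: if Labor’s primary vote is 34%, the Coalition’s is 38%, and the remaining 28% of minor-party and independent preferences split roughly 64–36 in Labor’s favour, Labor ends up on about 52% two-party preferred to the Coalition’s 48%.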
However, Goot says pollsters are at pains for the public to understand that leading on the 2PP figure alone is not enough to predict an election winner, as winning the popular vote in Australia does not guarantee winning a 76-seat majority in the lower house.
He and other pollsters the Guardian spoke to point to the election pendulum concept – which lists seats held by each major party based on marginality at the last election with the most marginal seats closest to the centre – as a better predictor.
This election, Labor needs 51.8% of the two-party-preferred vote – a uniform swing of 3.3% towards the party compared with the 2019 election – to win the seven seats needed to govern in its own right.
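To illustrate how the pendulum translates a uniform swing into seats (Labor’s 2019 two-party-preferred vote was roughly 48.5%, so a 3.3-point swing takes it to 51.8%), here is a minimal sketch using hypothetical Coalition-held seats and margins rather than the real pendulum.

```python
# An illustrative sketch of the pendulum logic, with hypothetical
# Coalition-held seats and margins (percentage points of 2PP over Labor
# at the previous election).
coalition_margins = {
    "Seat A": 0.6,
    "Seat B": 1.4,
    "Seat C": 2.5,
    "Seat D": 3.1,
    "Seat E": 4.2,
}

uniform_swing = 3.3  # swing to Labor, in percentage points

# Under a uniform swing, every seat with a margin below the swing flips;
# sorting by margin reproduces the pendulum's ordering from most to least
# marginal.
flipped = [
    seat
    for seat, margin in sorted(coalition_margins.items(), key=lambda kv: kv[1])
    if margin < uniform_swing
]
print(f"A uniform {uniform_swing}% swing to Labor flips: {', '.join(flipped)}")
# Seats A-D flip; Seat E (4.2%) survives. Real swings are never uniform,
# which is why the pendulum is a guide rather than a forecast.
```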
The ABC’s election analyst, Antony Green, explains that the pendulum is lopsided this election because of the strong margins the Coalition enjoys in some seats – a legacy of Labor’s collapse in Queensland at the 2019 election – and because swings to Labor in seats it already holds safely don’t help it claw back a parliamentary majority.