Australian national opinion polling has just suffered its worst failure in result terms since 1980 and its worst failure in margin terms since 1984. This was not just an "average polling error", at least not by the standards of the last 30+ years. The questions remain: what caused it, and what (if anything) can be done to stop it happening again?
A major problem with answering these questions is that Australian pollsters have not been telling us nearly enough about what they do. As Murray Goot has noted, this has been a very long-standing problem.
In general, Australian pollsters have taken an approach that is secretive, poorly documented, and contrary to scientific method. One notable example was Galaxy (apparently correctly) changing the preference allocation for One Nation in late 2017, and not revealing it had done so for five months (during which time The Australian kept wrongly telling its readers that Newspoll preferences were based on the 2016 election). But more generally, even very basic details about how pollsters do their work are elusive unless you are on very good terms with the right people. Some polls also have statistically unlikely properties (such as not bouncing around as much as their sample size suggests they should, either in poll-to-poll swing terms or in seat-polling swing terms) that they have never explained.
ELECTORAL, POLLING AND POLITICAL ANALYSIS, COMMENT AND NEWS FROM THE PEOPLE'S REPUBLIC OF CLARK.
Showing posts with label poll failure.
Tuesday, May 28, 2019
Oh No, This Wasn't Just An "Average Polling Error"
As previously noted, Australian opinion polling has just experienced its first clear predictive failure, in pick-the-winner terms, in a federal election since 1980. Every campaign poll by four different pollsters (one of them polling under two different brands) had the Labor Opposition ahead of the Liberal-National Coalition (as it had been for the entire term), and yet the Coalition has won an outright majority. Moreover, polls in the final weeks were extremely clustered, with 17 consecutive polls (plus an exit poll) landing in the 51% to 52% two-party preferred range after rounding, a result that is vanishingly unlikely by chance. No pollster has yet made any remotely useful contribution to explaining this clustering - those who have even commented have generally said they didn't do it and it must have been somebody else.
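To see just how unlikely that clustering is, here's a rough back-of-envelope sketch. The true 2PP (51.5 to Labor) and the sample size (1000 per poll) are assumptions for illustration, not the pollsters' actual figures:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Assumed (illustrative) figures: true Labor 2PP of 51.5 and a sample
# of 1000 per poll; real samples and the true value are unknown.
true_2pp = 51.5
n = 1000

# Pure sampling standard error of the 2PP estimate, in percentage points.
se = 100 * sqrt((true_2pp / 100) * (1 - true_2pp / 100) / n)

# A poll rounds into the 51-52 range iff its raw estimate lands in
# [50.5, 52.5), i.e. within one point of the assumed true value.
p_single = phi((52.5 - true_2pp) / se) - phi((50.5 - true_2pp) / se)

# Chance that 17 independent polls all land in that window.
p_all_17 = p_single ** 17

print(f"SE per poll: {se:.2f} pts; P(one poll in range): {p_single:.3f}; "
      f"P(17 in a row): {p_all_17:.1e}")
```

Under these assumptions a single honest poll lands in the window less than half the time, and a 17-in-a-row streak comes out at a few in a million. Tweaking the assumed sample size or true value moves the numbers, but not the conclusion.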
The general reaction has been dismay at this unusual level of pollster error in a nation where national polls have a proud record of accuracy. The Ninefax press, as I call them (SMH/The Age), have even announced that they now have no contract with their pollster, Ipsos, or with any other pollster. (This may just be for show, since in the past Fairfax often took long breaks in polling after elections.) News Corp is, for now, standing by Newspoll. The Association of Market and Social Research Organisations has announced a review, although this may be of little value as Ipsos is the only one of the pollsters involved that is a member.
Tuesday, May 21, 2019
The Miracle Is Over: The 2019 Australian Federal Election Poll Fail
[Image. Caption: Nice 2PP. Shame it's for the other side ...]
"I have always believed in miracles" said re-elected Prime Minister Scott Morrison very late on Saturday night. But many (not all) of us who study national Australian polls and use them to try to forecast elections have believed in a miracle for one election too many. The reason we believed in this miracle was that it kept delivering. While polls failed to forecast Brexit, Trump and two UK elections in a row (among other high-profile failures), Australian national polls continued to churn out highly accurate final results. The two-party preferred results in final Newspolls from 2007 to 2016 are an example of this: 52 in 2007 (result 52.7), 50.2 in 2010 (result 50.1), 54 in 2013 (result 53.5) and 50.5 in 2016 (result 50.4).
Predicting federal elections pretty accurately has long been as simple as aggregating the polls, adjusting for obvious house effects and personal votes, applying probability models (not just the simple pendulum) and off you go; you generally won't be more than 5-6 seats wrong on the totals. While overseas observers like Nate Silver pour scorn on our little polling failure as a modest example of the genre and blast our media for failing to anticipate it, they do so apparently unfamiliar with just how good our national polling has been since the mid-1980s compared to polling overseas. As a predictor of final results, the aggregation of at least the final polls has survived the decline of landlines, volatile campaigns following leadership changes or major events, suspected preferencing shifts that frequently barely appeared, herding with the finish line in sight, and come up trumps many elections in a row. This has been put down to many things, not least that compulsory voting makes polling easier by removing the problem of trying to work out who will actually vote (another possibility is the quality of our public demographic data). But perhaps it was just lucky.
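For readers who want the mechanics, the recipe above can be sketched as follows. Every number here - the polls, the house effects, the seat margins and the 3-point seat-level spread - is an illustrative assumption, not any actual aggregator's data:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical final-week polls: (pollster, published Labor 2PP).
polls = [("Newspoll", 51.5), ("Galaxy", 51.0),
         ("Ipsos", 51.0), ("Essential", 51.5)]

# Assumed house effects in points (positive = leans to Labor), as might
# be estimated from past elections; purely illustrative values.
house = {"Newspoll": 0.3, "Galaxy": 0.0, "Ipsos": 0.5, "Essential": -0.2}

# Aggregate: average the house-effect-adjusted polls.
aggregate = sum(tpp - house[name] for name, tpp in polls) / len(polls)

# Probability model rather than bare pendulum: a Coalition seat flips if
# the seat-level swing exceeds its margin, with individual seats assumed
# to scatter around the national swing with sd ~3 points.
margins = [0.6, 1.1, 2.0, 3.4, 4.8]   # illustrative Coalition margins
swing = aggregate - 49.6              # vs Labor's ~49.6 2PP in 2016
expected_flips = sum(phi((swing - m) / 3.0) for m in margins)

print(f"Aggregate 2PP: {aggregate:.1f}; "
      f"expected flips among these seats: {expected_flips:.1f}")
```

Summing per-seat flip probabilities, rather than just moving every seat by the uniform swing, is what lets a model expect some seats to fall against the tide and others to hold despite it.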
Friday, November 16, 2018
Wentworthless: Another Epic Seat Poll Fail
The failures of seat polling have been a common subject on this site this year. See Is Seat Polling Utterly Useless?, Why Is Seat Polling So Inaccurate?, and How Did The Super Saturday Seat Polls Go?
The recent Wentworth by-election was difficult to poll because of a late strategic-voting swing of probably a few to several points from Labor to the winner Kerryn Phelps. All seven polls that polled a Liberal vs Phelps two-candidate preferred vote did actually get the right winner. But that is all the good news that there is. In so many other respects, the seat polls for the historic Wentworth by-election, perhaps the most polled seat in Australian history, were way wrong. And like other recent seat poll failures in such seats as Bass, Macarthur, Dobell, Lindsay and Longman, the failures were characterised not just by the polls being very wrong, but also by them tending to be wrong in the same direction. The problems go beyond small sample size, and beyond even the tendency of seat polls to be less accurate than their sample sizes say they should be. They point to systematic errors not random ones, and in this case, I suspect, to the oversampling of the politically engaged.
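Two quick calculations illustrate why this looks systematic rather than random. The 500-voter sample size is an assumption for illustration:

```python
from math import sqrt

# Assumed seat-poll sample size; actual Wentworth samples varied.
n = 500
p = 0.5

# Pure sampling error on a vote share near 50%, in percentage points.
# Errors much bigger than this, poll after poll, need another explanation.
sampling_sd = 100 * sqrt(p * (1 - p) / n)

# If poll errors were purely random and independent, each of the 7 polls
# would be equally likely to miss high or low, so the chance that all 7
# miss in the same direction is:
p_same_direction = 2 * 0.5 ** 7   # = 1/64

print(f"Sampling sd: {sampling_sd:.1f} pts; "
      f"P(all 7 same direction by chance): {p_same_direction:.4f}")
```

So even before asking about the size of the misses, seven same-direction misses alone are a roughly 1-in-64 event under pure sampling error - and the misses were also larger than sampling error allows.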
Monday, July 23, 2018
Why Is Seat Polling So Inaccurate?
The accuracy of Australian seat polling has been an important topic lately, especially given the coming by-elections. By-elections are very difficult to forecast. Even after throwing whatever other data you like at them (national polling, government/opposition in power, personal vote effects, state party of government) they are less predictable than the same seats would be at a normal election. So it would be nice if seat polling would tell us what is going to happen in them.
Unfortunately single-seat polling is very inaccurate. I discussed this in a recent piece called Is Seat Polling Utterly Useless?, where I showed that at the 2016 federal election, seat polling was a worse predictor of 2PP outcomes than even a naive model based on national polling and assumed uniform swing. The excellent article by Jackman and Mansillo showed that seat polling for primary votes was so bad that it was as if the polls had one sixth of their actual sample size. It doesn't seem that seat polls are useless predictively, but we certainly can't weight them very highly.
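One way to express the Jackman and Mansillo finding is as a design effect of about 6. Under an assumed nominal sample of 600 (illustrative only), the honest margin of error inflates by a factor of √6:

```python
from math import sqrt

n = 600           # assumed nominal seat-poll sample (illustrative)
deff = 6          # design effect implied by the "one sixth" finding
n_eff = n / deff  # effective sample size: 100

p = 0.40          # a primary vote of 40%

# Conventional 95% margin of error quoted from the nominal sample...
nominal_moe = 100 * 1.96 * sqrt(p * (1 - p) / n)
# ...versus the margin of error implied by the effective sample.
real_moe = 100 * 1.96 * sqrt(p * (1 - p) / n_eff)

print(f"Nominal MOE: +/-{nominal_moe:.1f} pts; "
      f"effective MOE: +/-{real_moe:.1f} pts")
```

A quoted margin of error of around ±4 points is really closer to ±10 on these figures, which is why a single seat poll tells you much less than its fine print suggests.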
Saturday, March 3, 2018
Election Day: Blue Skies With A Fair Chance Of A Poll Fail
Welcome to my on-the-day election coverage. (See my main guide page with links to electorate pages.) I may be adding comments now and then through the day on anything I feel like commenting on. I expect this to continue up to the release of the Southern Cross exit poll around 6 pm. I will be live blogging on the Mercury tonight with coverage expected to start around 6:30. Once that goes live a link will be posted here at the top of the page. After I finish the coverage there will be comments posted here overnight - I am hoping this will include the rollout of postcount threads, but it may not.
Note for media: I won't be available for any interviews other than the Mercury between 5:30 and 11; I may be available briefly after 11. Also, tomorrow (because nobody paid me to stay at home) I will be on a field trip to Tooms Lake, will not be available for in-person interviews and may at times be out of mobile phone range.
My advice to those still to vote is simple: number all of the boxes. Even if you find you are getting into candidates you cannot stand or have never heard of, putting the lesser evils ahead of the greater evils will make your vote more powerful than if you stop. Numbering all the boxes will never disadvantage the candidates you prefer or advantage those you most dislike, because your vote only flows on to your less preferred candidates once those you most prefer have all been elected or eliminated.
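The principle can be seen in a toy single-seat preferential count (Hare-Clark, with multiple members and surplus transfers, is more complex, but the flow-on logic for eliminations is the same). The ballots here are hypothetical:

```python
from collections import Counter

def preferential_winner(ballots):
    """Single-seat preferential count: repeatedly eliminate the
    last-placed candidate; each ballot always counts for its
    highest-ranked candidate still in the count."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Each ballot flows to its best surviving preference.
        tally = Counter(next(c for c in ballot if c in remaining)
                        for ballot in ballots)
        leader, votes = tally.most_common(1)[0]
        if 2 * votes > len(ballots):   # majority reached
            return leader
        remaining.discard(min(tally, key=tally.get))

# Hypothetical ballots fully numbering candidates A, B and C.
ballots = ([("A", "B", "C")] * 4 +
           [("B", "A", "C")] * 3 +
           [("C", "B", "A")] * 2)
print(preferential_winner(ballots))  # C's ballots flow to B after C is out
```

In this example C's two voters do not hurt C by numbering B second: their ballots only reach B once C has been eliminated, at which point they decide the contest between the candidates left standing.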
Thursday, June 15, 2017
The UK And Australian Elections Weren't That Similar
Last week the UK had its second straight surprising election result. In 2015 an expected cliffhanger turned into an easy win for the Conservatives while in 2017 an expected landslide turned into a cliffhanger. The government went to the polls three years early (that's a whole term over here), supposedly in search of a strong mandate for its position on Brexit, yet came away with fewer seats than it went in with. The real motive seemed to be to turn a big lead in the polls into a bigger majority, and if that was the aim then it backfired spectacularly.
In the wake of this result the Australian commentariat have put out several articles seeking to stress parallels with Australian politics. The primary themes of these articles are that Malcolm Turnbull is Theresa May and that Bill Shorten is Jeremy Corbyn.
Let's start with the Turnbull-May comparison. Turnbull has no hope of winning the battle of perceptions on this one, because it's the kind of analogy many of those who consume political chatter will congratulate themselves on having thought of first. But on a factual basis, the comparison is twaddle.
Thursday, November 10, 2016
Trump Wins: Another Major Poll And Modelling Failure
Well here we are again. As with the UK election, as with Brexit, as with many other voluntary voting elections we have an unexpected result with the election of Donald Trump as the next President of the USA. Pollsters are in disrepute because most had Clinton with a modest popular-vote lead, but overconfident modellers deserve their share of the blame for the level of public surprise at the result.
A few days ago, Nate Silver of FiveThirtyEight was the target of a terrible Huffington Post article, and an argument broke out about whether it was more accurate to say Donald Trump had about a one in three chance of becoming President, or virtually no chance at all. HuffPo was to double down with this rather pretentious piece by a stats prof accusing Silver of overestimating Trump's chances - a piece that has proved to have an exceedingly short shelf life indeed. Silver's model might not look crash hot in the wake of what has happened, but it still looks a great deal better than those that were saying Trump had only a 1% chance of winning.
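Proper scoring rules make the point concrete. Scoring the two positions against the actual outcome (Trump won), with the probabilities rounded for illustration:

```python
from math import log

# Two pre-election forecasts of P(Trump wins): roughly one in three
# (Silver's neighbourhood) versus one percent (the confident models).
forecasts = {"one in three": 0.33, "one percent": 0.01}
outcome = 1  # Trump won

for name, p in forecasts.items():
    brier = (outcome - p) ** 2   # squared error; lower is better
    log_loss = -log(p)           # surprise in nats; lower is better
    print(f"{name}: Brier {brier:.3f}, log loss {log_loss:.2f}")
```

On either metric the one-in-three forecast is punished far less by the actual result; under log loss especially, near-certain forecasts of the wrong outcome are catastrophically bad.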