Advance Summary
1. A major polling failure at the 2019 Australian federal election has been attributed to unrepresentative sampling, inadequately corrected by reweighting, which produced a large skew in primary vote and 2PP estimates in Labor's favour.
2. A recent report argues that a skew in federal 2PP polling was present throughout the period 2010-2019 and was not specific to 2019.
3. If this was the case, and for the same reason, then a skew to the ALP should also be expected in the much larger sample of state-level final polls taken over the same period.
4. However, state level polls in Australia from 2010-2020 do not display any overall two-party skew to the ALP.
5. Also, while federal polling at the 2010 election overestimated Labor, final 2PP polling at the 2013 and 2016 federal elections was mostly very accurate.
6. While federal polls overall (not all specific polls) do have a record through recent decades of on average overestimating the Labor 2PP, this record is much inflated by a single pollster (Morgan).
--------------------------------------------------------------------------------
This week has been another bad week for Australian public perceptions of polling. In the US Presidential election, pollsters turned in a mediocre to locally bad performance, generally underestimating the performance of Donald Trump and the Republican Party more broadly (the latter more so than the former). Quite aside from obituaries for these polls being written prematurely before "blue shifts" in many states reduced the error in the polling, this all fed a widespread narrative that polling misses always overestimate "the left" (UK 2015, Brexit, Trump 2016, Australia 2019 etc).
But in fact that narrative is bogus. In Australia, we're less than two weeks out from the Queensland election, at which Labor outperformed all the published polls. We're less than a month out from the 2020 New Zealand election, at which the main pollsters polling until the last possible moment underestimated Labour's margin by 9.4 points and 9.7 points and were outdone by a fossilised Morgan that fell a mere 5.4 points short of predicting Jacinda Ardern's landslide. And how quickly everyone has forgotten the UK 2017 election - at the time a "here we go again" moment for the UK polling industry - where most final polls had Labour slumming it in the mid-30s and losing outright, only for Labour to poll 40% and take away the Tories' majority.
Those who like to claim that the polls always fail to the left would have had their views reinforced by some media reporting of the recently released Association of Market and Social Research Organisations (AMSRO) report on the 2019 Australian polling failure. The report is a very useful piece of work, compiled under trying conditions in view of a predictably modest level of cooperation from pollsters, and I recommend reading it in full to anyone interested in the 2019 failure. However, one conclusion in the summary requires more investigation. I quote some important sections as background:
The performance of the national polls in 2019 met the independent criteria of a ‘polling failure’ not just a ‘polling miss’. The polls: (1) significantly—statistically—erred in their estimate of the vote; (2) erred in the same direction and at a similar level; and (3) the source of error was in the polls themselves rather than a result of a last-minute shift among voters.
We rule out as contributing factors to the poor performance of the polls not only a late swing after the final polls were conducted (except possibly to a very minor extent); but also the impact of ‘shy conservatives’, measurement error arising from the voting intentions questions, respondents deliberately misleading pollsters, early voting, and ballot order effects. The allocation of preferences led to a slight increase in overall poll error in some estimates of the two-party-preferred vote, but was not a major contributing factor to the failure overall.
The Inquiry Panel could not rule out the possibility that the uncommon convergence of the polls in 2019 was due to herding. This could not be ruled out largely because, despite our requests, the pollsters provided no raw data to enable us to attempt to replicate their results.
Our conclusion is that the most likely reason why the polls underestimated the first preference vote for the LNP and overestimated it for Labor was because the samples were unrepresentative and inadequately adjusted.
* The polls were likely to have been skewed towards the more politically engaged and better educated voters with this bias not corrected.
* As a result, the polls over-represented Labor voters.
However, it's the next part of the summary that has been seized upon:
Such a skew has been evident in recent election cycles, with 17 of the 25 final poll results since 2010 (68%) overestimating 2PP support for Labor.
This finding stands independent of methodology because even though the methods used by the pollsters differ they share a common difficulty in struggling to establish contact with and gain the cooperation of a representative sample of voters. This conclusion is broadly similar to that reached by the reviews into the performance of the 2015 UK polls and the 2016 US polls.
This was picked up in headlines like "Flawed political polls have underestimated Coalition support for a decade", which reads to me like something from an alien planet. On the one I'm used to, while most pollsters had Labor a point or two too high in 2010, the 2013 and 2016 federal elections both had remarkably accurate final national polling. Six of eight final polls in 2013, and either four or five of six in 2016, had the 2PP right to within a point. (I used the previous-election preferences figure for Ipsos based on reporting by the poll's sponsor; the pollster used the respondent preferences figure as its headline figure, though placing reliance on the previous-election figure for swing projections.) The only poll to be more than two points out on 2PP in either year was an experimental Lonergan mobile-phone-only poll in 2013, hardly a mainstream poll that anyone was taking any notice of. Excluding it, the average skew in 2013-6 was virtually zero. (That said, in 2013 there was some skew in primary votes to Labor, cancelled out in a 2PP sense by preference flow changes.)
I also think that if we want to answer a question along the lines of "Was the skew observed in 2019 a once-off issue or part of a longer trend?" (my wording, not AMSRO's) it's best not to include the 2019 data in that assessment. What is left for 2010-6 (a 12-7 split of polls leaning to Labor) is suggestive but not statistically significant. If I use 1993-2016 (per Appendix 4 in the report) the split is 28-21 in Labor's favour, but if Morgan (with its long history of often using skewed face-to-face polling) is excluded the split for 1993-2016 becomes 19-19. That said, the errors on Labor's side have tended to be bigger: the average miss for the 1993-2016 polls excluding Morgan is about half a point on Labor's side. (It rises to about a point if Morgan is included. Note: the AMSRO report gives the error for Morgan 1996 as 8.6 points but I have it as 3.6 points (50-50, phone poll, 28-9 Feb).)
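As a rough check on that "suggestive but not statistically significant" call, here is a minimal sketch (in Python, using scipy, and assuming a simple fair-coin null in which each final poll is equally likely to lean either way; exact-zero leans are ignored):

```python
# Two-sided exact binomial test for the 12-7 split of 2010-6 final
# federal polls leaning to Labor, under a fair-coin null. Illustrative
# only; it treats the 19 polls as independent coin flips.
from scipy.stats import binomtest

result = binomtest(12, n=19, p=0.5, alternative="two-sided")
print(f"two-sided p-value: {result.pvalue:.3f}")  # ~0.36, nowhere near 0.05
```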
It occurred to me that if unrepresentative sampling and/or lousy scaling have been producing an overall skew to Labor in recent federal elections (not just 2019) then we should also see that in the far larger body of state elections. But do we?
Methods
My sample for this article consists of 57 final statewide polls taken at 15 mainland state elections from 2010 onwards. A poll is defined here as a final poll if it was the last poll credited to a given pollster for that election and it was released within six weeks of election day (the vast majority were released during the final weeks). Exit polls are ignored, as are internal party polls. The pollster's headline published 2PP result is used and compared to the actual election 2PP, except for SA 2018, for which no final 2PP polling was published. For SA 2018 the primary vote gap between the major parties is used and compared to the actual primary gap, with the difference halved to give a 2PP-equivalent error; a sketch of that conversion follows.
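The halving reflects that, to a first approximation, a vote moving from one major party to the other shifts the primary gap by two points but the 2PP by about one. A toy sketch (the numbers are invented for illustration, not the actual SA 2018 figures):

```python
def gap_error_as_2pp(poll_lib, poll_alp, actual_lib, actual_alp):
    """Approximate 2PP error from major-party primaries: the error in
    the (Lib - ALP) primary gap, halved. Positive = overestimated the
    Coalition. A rough surrogate, assuming typical preference flows."""
    poll_gap = poll_lib - poll_alp
    actual_gap = actual_lib - actual_alp
    return (poll_gap - actual_gap) / 2

# Invented example: the poll had the gap at +3, the result was +5, so
# the poll understated the Coalition by roughly 1 point of 2PP.
print(gap_error_as_2pp(34.0, 31.0, 37.0, 32.0))  # -1.0
```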
I omit Tasmania because Tasmania uses Hare-Clark, which is not a 2PP system. I considered including Tasmania and using the gap between the major parties, but this creates problems because Tasmanian polling has tended to (at times massively) overstate the Greens vote. If the major party difference is used as a surrogate, then Tasmanian polls since 2010 leaned to the Coalition by the equivalent of 0.73% 2PP, but this is misleading in right/left terms because of the issue with the Greens vote. (The polls underestimated both major parties, but Labor by more than the Liberals.)
Some notes on special cases:
* For Queensland 2015, 2PP polls predicted the wrong result because of a massive shift in preferencing behaviour. For this reason I use two sets of results: the (a) set, which uses the 2PPs as published, and the (b) set, which uses primary vote differences for the major parties as discussed above.
* For Queensland 2017 I use the average of my 2PP estimate (51.2) and Antony Green's (51.3). I also very grudgingly use the final ReachTEL that was released by Sky News at 5 pm on election day, as it was apparently not in any way an exit poll. For Queensland 2020 I use my own final estimate.
* While Wikipedia reports ReachTEL as having published a 52% 2PP to Labor in the case of SA 2018, this figure is ultimately attributed to Sky News. Based on the description on Poll Bludger I am not satisfied that it was intended as an overall 2PP rather than an estimate of preference flow from minor party voters.
* I treat Newspoll up to mid-2015 and Newspoll after that as different pollsters as the series were administered by entirely different companies. On the other hand, rather than attempting to draw a line in a process of evolution, I treat Galaxy, YouGov Galaxy and YouGov - when polling under those names rather than as Newspoll - as the same pollster. (This isn't really about the properties of individual pollsters, so it doesn't matter all that much.)
For each pollster and for each election, I calculate both the lean of the results (how much they leaned to or against the Coalition on average) and the average absolute difference between the polls and the results. Because of the Queensland 2015 issue I publish two sets of averages, one with the published 2PPs (the (a) set) and one with the two-party primary gaps (the (b) set).
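As a sketch of that bookkeeping (the entries below are invented for illustration, not rows from my actual table; positive errors overestimate the Coalition):

```python
# Hedged sketch of the lean / average-miss bookkeeping. Each entry is
# (pollster, election, poll_2pp_for_coalition, actual_2pp_for_coalition);
# the values are invented, not real poll results.
polls = [
    ("PollsterA", "Election 1", 51.0, 50.2),
    ("PollsterB", "Election 1", 52.0, 50.2),
    ("PollsterA", "Election 2", 48.5, 49.5),
]

errors = [poll - actual for _, _, poll, actual in polls]
lean = sum(errors) / len(errors)                      # signed: + = leans Coalition
avg_miss = sum(abs(e) for e in errors) / len(errors)  # unsigned accuracy

print(f"lean to Coalition: {lean:+.2f} pts; average miss: {avg_miss:.2f} pts")
```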
Table
The following is my table of results. Figures are in many cases hand-calculated and I don't guarantee they are 100% error free. However, I am confident that if there are any errors in the individual data entries they will not make a substantial difference to the overall pattern. I am happy to add more final poll results if anybody finds any I have missed.
Excepting special cases as noted above, the figure in each case is the overestimation (positive) or underestimation (negative) of the Coalition 2PP. To convert to the margin standard used in the USA, double the numbers: for example, a poll showing the Coalition on 52 when it actually polled 51 is +1 here but a 2-point miss on margin.
Key to pollsters: NP1 = old Newspoll (pre-2015); NP2 = Newspoll administered by YouGov/Galaxy (post-2015); YG/G = YouGov/Galaxy/YouGov Galaxy; RM = Roy Morgan; Ess = Essential; uCom = uComms; RTel = ReachTEL; ACN = AC Nielsen; Lon = Lonergan (others as stated).
Summary findings
* Overall, rather than displaying a skew to Labor, the state polls listed displayed a very slight average 2PP lean to the Coalition, of either 0.19 points or 0.41 points depending on the treatment of Queensland 2015.
* Overall, state polls overestimated the Coalition at eight elections and Labor at six, with the verdict split in the case of Queensland 2015 depending on method used.
* Of pollsters with more than five polls, one, Morgan, overestimated the Coalition on average by more than a point. Newspoll 2 and YouGov/Galaxy did so by less than a point. Newspoll 1 and ReachTEL displayed virtually no skew. Essential overestimated Labor.
* The average 2PP miss for all elections individually was below 2% with the exceptions of Victoria 2018, NSW 2019 and the published-2PP version of Queensland 2015.
* The average 2PP miss for all polls was 1.37 points or 1.52 points, depending on method.
* Of the 57 results, if the published Queensland 2015 2PP figures are used there were 25 misses in the Coalition's favour by more than half a point and 21 in Labor's favour by more than half a point. If the alternative Queensland 2015 figures are used there are 20 and 23 respectively. (The classification is sketched in code after this list.)
* It is amusing that Morgan has been the most Coalition-skewed poll at state level when it has often been very Labor-skewed federally. This is mainly because its state polls since 2010 have mostly been SMS polls whereas its federal polls historically (but not so much recently) were often face to face.
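The half-point classification above is simple to state in code. A minimal sketch (the error list is invented for illustration, not the real 57-poll dataset):

```python
# Count final-poll misses of more than half a point each way; errors
# are poll minus result on Coalition 2PP. The list is illustrative only.
errors = [0.8, -1.3, 0.2, 2.1, -0.6, 0.4, -0.1, 1.1]

to_coalition = sum(e > 0.5 for e in errors)   # leaned to the Coalition
to_labor = sum(e < -0.5 for e in errors)      # leaned to Labor
within = len(errors) - to_coalition - to_labor

print(f"Coalition side: {to_coalition}, Labor side: {to_labor}, "
      f"within half a point: {within}")
```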
Comments
Overall, Australian state election final polling since 2010 has been excellent. Only a few elections stand out as having had poor final polling. Of these, Queensland 2015 was only poor because of incorrect preferencing assumptions, while NSW 2019 was probably affected by all bar one pollster going out of the field before Labor's campaign self-destructed. Victoria 2018 was by far the worst result for the pollsters, but was largely overlooked because the side expected to win easily won very easily.
Furthermore, these results don't show any skew to speak of either way. If pollsters through 2010-20 were systematically overcapturing highly engaged and educated lefties and failing to sufficiently scale their way out of the mess, then this should have shown up in state elections. It doesn't, and there were far more of them than federal elections.
That's not to say the federal pollsters were doing everything right in that time. In 2016 they were lucky to dodge the bullet that eventually hit them in 2019. More likely they were indeed making the same mistakes all along, but in some elections (such as 2016) that may not have mattered. Over-represented voter types may not have had a strong preference between the Malcolm Turnbull Liberal campaign (which was very inner-city focused) and Labor, or the pollsters may have made other mistakes that cancelled out. The 2019 election may have brought out issues with faulty sampling unusually strongly because of the campaign style and leadership mix at that election. Belief that the Coalition could not possibly win may also have influenced subjective decisions by some pollsters regarding what weightings to use, when to change weightings, and so on. Without sufficient detail about the inner workings of the polls in the 2016-9 term it's impossible to say.
--------------------------------------------------------------------------------
Comment: I'd be interested to hear from pollsters if and when they changed methodology for federal polling in response to the 2018 Victorian miss. Up until that point, Australian polling had been very smug about its accurate record, especially in comparison to the Trump and Brexit polling misses overseas. The Campbell Newman miss could be explained through a miss on preferencing behaviour, as you say. But the 2018 Victorian miss was simply a huge miss, and one wonders if it led pollsters to an overcorrection.
Comment: I don't buy the story of a decade-long bias to the ALP in the polls at all. Across that many years of changing pollsters, demographics and technology, the idea of some consistent background skew to either party is ludicrous, and the numbers, as you say, don't bear it out either.
I wonder if there would actually be a significant skew against Labor in state election polling, once you toss in WA 2021 (+3.7%).