Introduction
Following a Newspoll finding the Indigenous Voice to Parliament Yes side trailing by a horrendous if not all that surprising 38-53 this week, the website arguably still known as Twitter has been even more awash with polling denialism than at other stages in the Yes side's slide down the slippery slope. The number of people recycling and reciting the same unchecked viral false claims has become so large that it is almost impossible to manage a response to them. Inspired by the AEC's electoral disinformation register, I have decided to start a register of commonly encountered false claims about polling that are spread on social media, mostly from people claiming to be on the left. A few right-wing claims are included too, but I don't see those so often at the moment. (I do see a lot of right-wing electoral disinfo, especially the "3 out of 10 voted Labor" preferential-voting denial.)
While many of the claims are just whoppers or have some grounding in something that used to be true but no longer is, some result from confusion between what polls said and what media commentary misinterpreted the polls as saying. Polling itself is blamed for the way that media sources misinterpret polls (often to play up false narratives that an election is close, but also sometimes to pander to an audience, or at times through more innocent errors). This confusion between polling and interpretation means that polls that may have been accurate are falsely blamed and unjustly distrusted, while media figures who have wilfully or cluelessly misinterpreted the polls get away scot-free. Similarly, deniers blame good polls for the results of bad polls, again meaning that the good polls are wrongly suspected while the bad ones go unpunished.
In this way, poll denialism enables more bad poll reporting. Instead of falsely claiming that polls that showed their side winning easily were wrong, party supporters annoyed about media reports of cliffhangers should be complaining about the media who chose to misinterpret those polls.
This guide is intended as a handy resource for replying to poll deniers. Feel free to link to it in reply to them and refer the deniers to the appropriate item. I'm sure this will save me a lot of effort and I hope it also does for you. These people have made no effort to check their facts before speaking in public, and they deserve no effort in reply. I will fight their laziness with laziness. I'll be responding to tweets covered by this article with a list of which disinfo claims they match and a link, to save myself explaining over and over again why multiple people have the same thing wrong.
Nominations for new additions are welcome, as are any corrections. Finally, despite the similarity in title with the AEC's disinformation register this piece is unofficial. While vague ideas for a piece along these lines have been considered now and then in the past, the idea to write it up as a disinformation register was a sudden flash of inspiration yesterday.
What Is Polling Denial?
It is important to be aware of the potential for error in all polls (some more than others), and especially single polls in isolation - there is always room for healthy scepticism, provided that it is informed by facts not fibs. Polls can be wrong individually and at times collectively through random error, trouble getting balanced samples, data processing errors and poor question design. Many issue polls deliberately employ skewed wording to try to get the result that they want, in order to influence public opinion, politicians or media coverage. Supposed polls that are not polls (opt-in surveys, "reader polls" etc) are often passed off as polls by media. Believing everything that is claimed to be a poll result is even worse than believing none of them.
Polling denial, however, is not the same thing as justified caution. Polling denial occurs when someone denies that a poll could be, or is likely to be, true not because they have a valid objection to the poll, but because they either cannot cope with the idea that the poll might be correct, or wish to discredit the poll for political reasons. They then mobilise false claims about the poll, the pollster, or polling generally to justify their view. These claims are often virally spread on social media, with prominent (usually biased pro-Labor or independent left) accounts often starting them, after which they are repeated by others without checking.
An amazing thing about poll deniers is their selective gullibility. On the one hand they will say that certain right-leaning media can never be trusted. On the other hand, if those media claim that a poll has found a certain result, then polling deniers will believe that the poll exists and that it has found that result, even if one or both of these things is false. Not only that, but they will also then believe that that result was found by a major poll even if it was actually from a minor outlet or not a poll at all.
I am calling most if not all of the myths debunked below disinformation and not merely misinformation because I have seen them all spread so much by people who either clearly do know better (continuing to spread them despite corrections) or who spread them wilfully with no attempt to research the facts. Specific people spreading these myths may not know they are false and may be recirculating them in the honest belief they are true, but disinfo is disinfo all the more when it spreads through an innocent host.
---------------------------------------------------------------------------------------------------------------
Donations Welcome!
If you'd like to help support my efforts, there's a Paypal in the sidebar, or click on my profile to email me for account details.
---------------------------------------------------------------------------------------------------------------
Section A: Polling Methods And Connections
A1: "Newspoll only calls landlines"
FACTS: Newspoll ceased landline-only polling in 2015. It shifted to a hybrid mode of online polling and landline robopolling from 2015 to 2019. This method failed alongside all others at the 2019 election. In late 2019 Newspoll switched to online-only panel polling for national and state polls, and no longer calls phones at all. (It has used targeted phone polling for a small number of seat polls only.)
A2: "Newspoll/polls generally only reach older voters because younger voters do not have landlines or answer mobile phone calls from unknown numbers"
FACTS: No major Australian regular voting intention pollster exclusively uses phone polling. For instance Newspoll and Essential are exclusively online and therefore do not call at all. Resolve is exclusively online except for an online/phone hybrid for final pre-election polls, and Morgan uses online/phone hybrid polling and SMS polls. Some less prominent polls still use phone polling - for instance uComms robopolls - but phone polling of random numbers is not very accurate anymore because younger voters are so hard to reach and those who are reached tend to be unrepresentative. Phone polling of targeted numbers has been trialled by Newspoll/YouGov as a seat polling method with some success.
Even when young voters were first becoming hard to reach by phone methods, random phone polling remained a viable method until about 2016 because pollsters could simply adjust to that by upweighting the young voters who they did contact, or by setting quotas and calling until they got enough young voters. Poll deniers generally think that if a polling method has an age skew in respondents, then that must be reflected in the final outcome, but polls do not have to weight all respondents equally.
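For the statistically curious, here is a minimal sketch of how that kind of weighting works. The age bands, population shares, respondent counts and support figures below are all invented for illustration - they are not any pollster's actual numbers.

```python
# Minimal sketch of post-stratification weighting, with invented numbers.
population_share = {"18-34": 0.30, "35-54": 0.34, "55+": 0.36}

# Suppose the raw sample skews old:
respondents = {"18-34": 200, "35-54": 500, "55+": 800}   # total n = 1500
n = sum(respondents.values())

# Observed support for some option within each age group:
support = {"18-34": 0.65, "35-54": 0.50, "55+": 0.35}

# The unweighted estimate simply reflects the skewed sample ...
unweighted = sum(respondents[g] * support[g] for g in respondents) / n

# ... while the weighted estimate scales each group back to its population share.
weighted = sum(population_share[g] * support[g] for g in population_share)

print(f"Unweighted: {unweighted:.1%}  Weighted: {weighted:.1%}")
# Unweighted: 44.0%  Weighted: 49.1% - the age skew in who responded
# does not carry through to the published figure.
```

The same logic applies whether the correction is done by weighting after the fact or by quotas and targeting during fieldwork.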
(A common combo of A1 and A2 is the "Bob Ellis", which says that Newspoll only dials landlines and only reaches voters over 60 years of age. This combo originated with said late alleged sex pest and polling crank - whose cluelessness about psephology was highlighted by a string of nearly always false election predictions. It is still circulating seven years after he died, and eight years after the "only dials landlines" part ceased being true.)
A3: "Only old people have time to take online polling surveys; young people are not represented"
FACTS: Good online panel polls have access to large panels of potential respondents for each survey (often tens of thousands, if not hundreds of thousands). Even if a panel is skewed towards older voters, the pollster can deliberately target a higher share of the young voters and thereby get a more representative sample. And as with A2, a pollster can also use weighting to correct for any remaining skew.
Many online panel polls frequently publish age breakdowns for groups such as 18-35, and Essential has even at times published the numbers of respondents polled in different age brackets. Despite such things, and despite explicit statements from the more transparent pollsters about the setting of quotas or weights by age, poll deniers continue to claim that the younger respondents who pollsters have reported polling can't possibly exist.
(Panel polling participants receive small incentives for participation, so the idea that only older voters will bother is spurious - people of a range of ages and backgrounds may decide to do surveys in search of a little extra income.)
A poll denier who is corrected on their use of A2 will often then immediately switch to A3.
A4: "Newspoll and its questions are owned and controlled by MuRdOcH/News Corp"
FACTS: Newspoll is an opinion polling brand that is owned by The Australian, a subsidiary of NewsCorp. An in-house company formerly existed to administer Newspoll but was disbanded in 2015, and since then the poll has been contracted out, making Newspoll a brand, not a pollster. Initially it was contracted to Galaxy, which later became YouGov Galaxy (YouGov purchased Galaxy), with this arrangement lasting until 2023. It is now contracted to Australian independent private polling company Pyxis.
The wording of Newspoll voting intention and leadership questions has been unchanged through the 1985-present history of the brand. Some other questions (eg budget) also have a very long history of stability. The question wording of referendum polls is based on the referendum question once it is known. While the client will from time to time commission one-off issue polls, the wording of which I will sometimes disagree with, the wording of major polling such as voting intention, leadership questions and referendum polling is neutral and beyond reasonable reproach.
A5. "Polls are black boxes and we don't know how they work! The details are not published!"
FACTS: This one is true of some polls, but as a generalisation across polls as a whole it's untrue, and insulting to those who have worked hard within the industry to improve its transparency in the wake of the 2019 polling failure. Poll deniers employ this one against all polls, but what they really mean is that they don't know the information because they have made no effort to find it. It's also quite common for poll deniers to claim that, for instance, the sample size and notional margin of error for Newspoll have not been published, when they are routinely published in the print edition (which can be viewed for free via NewsBank through participating libraries).
Australian Polling Council members (Pyxis which does Newspoll, Essential, SEC Newgate, Australia Institute, Ipsos, KJC Research, Redbridge, Lonergan, uComms, 89 Degrees East and YouGov) are compelled to release fairly detailed information about their polling methods on their websites shortly after a poll is referenced in the public domain, with the very rare exception of genuine leaks. Some pollsters that are not APC members (eg Resolve) release some level of methods information while falling short of the APC's modest standards. A few others (eg Morgan, Freshwater) release very little.
As Newspoll attracts the most comment concerning this, here is a link to the current Newspoll disclosure statements page. Also, Newspoll details such as sample size, effective theoretical margin of error, question wordings, dates and percentage of uncommitted voters are routinely published in The Australian print editions when Newspoll is released.
A6. "Newspoll only polls readers of The Australian or audiences of other NewsCorp media"
FACTS: The sample base for Newspoll's polling has never had the slightest thing to do with whether people read The Australian or not, or with what media they consumed. In the old days it involved random phone numbers from the population at large. As online polling has come in it has involved market research panels that people have signed up for; these people may not even necessarily be aware that Newspoll and The Australian exist (Newspoll does not announce itself as Newspoll to people taking it). The previous YouGov panel was one people could openly sign up for and this was widespread knowledge on social media, although signing up did not guarantee being polled.
People concerned about "polls" that only poll audiences of a specific medium should instead be concerned about the opt-in reader/viewer "polls" that are commonly used by commercial newspapers, radio and TV outlets and have even been used by some ABC programs (shudder). A difference between the results of professional polling and those of unscientific opt-ins of that sort should not be hard to spot.
A7. "Newspoll only polls right-wing electorates!"
FACTS: If this was true Newspoll would display a massive skew to the right compared to election results but it obviously doesn't (see B1 through B6 below). I live in Clark (Tas), one of the most left-wing divisions in Australia as measured by two-party preferred, and I've personally been polled by Newspoll a number of times through the years. Other residents of left-wing electorates have often reported being polled by Newspoll. This is one that people tend to make up out of thin air because they are clutching at straws.
A8. "Online panel polls are just self-selected polling just like reader polls! They can be easily manipulated!"
FACTS: Large online panel polls are in fact very different to opt-in reader polls run by newspapers (which are not really polls at all, are easily stacked and have been successfully manipulated). In most cases someone wanting to manipulate a specific poll would not know which online survey panel to sign up for, since there are a great many online panels and most online panel pollsters do not advertise what panel they are using (YouGov is one exception that has open signup). Some online panels also recruit members offline (for instance through polls conducted by other methods, through marketing databases or through initial offline contact).
Secondly, online panel polls do not just do polling but also send respondents a wide range of invitations to surveys on different subjects. If a bunch of people from a certain political movement signed up to a platform with intent to stack a poll they would find they would be wasting a lot of their time on "This is an interesting survey on how you found your current job and how you intend to find your next job" for every poll they took. Thirdly, even if a bunch of people had signed up to YouGov with intent to stack Newspoll (when YouGov was doing Newspoll) they may have never even been Newspolled at all (see C9) and quotas and weighting would be likely to pick up demographic skews predictive of their leanings anyway. Newspoll worked successfully through the YouGov panel in recent elections despite using open signup and it being an open secret on left-leaning social media that one could sign up to that panel to get Newspolled.
In theory someone who found out which panel was being used by a given pollster might try to stack that panel using bots (if it used open online signup), but this is again easier said than done. Panels use a range of tests to detect bogus respondents. These might be circumvented through AI, but suffice to say that despite all the concerns about analytics targeting, foreign interference and so on, this sort of manipulation hasn't yet had any noticeable influence in countries whose elections are much juicier targets than Australia's.
A9. "Online panel polls just poll the same thousand people over and over again."
FACTS: This false claim (as concerns major public polls) rests on a confusion between the sample for each running of an online panel poll and the broader panel of potential respondents. Major online panel polls have access to tens to hundreds of thousands of potential respondents and send out invites to only a small proportion of the panel each time. Some respondents will get polled more often than others and some with unusual combinations of demographic properties might get polled reasonably often, but overwhelmingly the sample is not the same people from poll to poll.
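To put a rough number on how often the same person would turn up in consecutive samples, here is a back-of-envelope sketch; the panel and sample sizes are purely illustrative, not any particular pollster's.

```python
# Back-of-envelope: expected overlap between two independent random samples
# of size n drawn from a panel of size N (illustrative numbers only).
N = 100_000   # hypothetical panel size
n = 1_500     # typical published sample size

# Each member of the first sample has probability n/N of also being drawn
# in the second sample, so the expected number of repeat respondents is:
expected_overlap = n * n / N
print(f"Expected repeat respondents: {expected_overlap:.0f} "
      f"({expected_overlap / n:.1%} of the sample)")
# Around 22 people, or roughly 1.5% of the sample - the rest are
# different people from poll to poll.
```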
Some irregular low-quality "polls" that sometimes get reported in the press repeat-poll the same panel because that is all the respondents they have access to, but this is not proper polling practice. Usually such polls will have recruited their sample through opt-in methods including viral social media, which is even worse than just repeat sampling as it means the panel is totally unrepresentative.
Section B: Specific Elections / Readings
B1: "Newspoll/polls predicted Daniel Andrews would lose his seat at the 2022 Victorian election"
FACTS: No significant pollster conducted a public seat poll of Premier Andrews' seat of Mulgrave. The claim that there was such polling at all referred to a claimed exit poll conducted by connections of Andrews' Independent opponent, Ian Cook. Everything about the claim was unfit for publication: there was no evidence the "poll" had been conducted professionally or independently, and there was clear evidence the sample size was risibly small.
To their eternal and most extreme shame (except that they don't have any), the Herald-Sun and Sky News both published this supposed exit polling anyway. That is an extremely dim reflection on those outlets but it has nothing to do with real polling.
B2: "Newspoll/polls predicted Labor would lose the 2022 Victorian election, or said it would be very close"
FACTS: Every released public poll taken in the last two years of the 2018-22 Victorian cycle showed the Labor government clearly ahead on two-party preferred (2PP), or would have done so had a 2PP figure been included, and hence in an election-winning position. During the last year Labor's leads frequently exceeded 55-45. The final Newspoll had a 54.5-45.5 2PP to Labor (the official 2PP was 55.00 but a more accurate figure is 54.95). The average 2PP of all final polls was 54.3 to Labor. (See full review of poll accuracy here.)
Some media sources - eg The Age - engaged in overbaked speculation about a hung parliament being a likely rather than possible outcome at this election, some of this being based on misinterpreting primary vote polling as more important than two-party preferred. One pollster, Redbridge, also contributed to this narrative, but in an analysis that was not based on any specific published polling. Resolve were also quoted as suggesting Victoria was potentially "on the cusp of minority", which would have been the case had their final poll been spot on concerning the 2PP vote and had the swing been uniform, but neither of those things happened.
There was no public campaign-period poll that could have been reasonably interpreted as pointing to a hung parliament as more likely than not. It was however reasonable based on the state polling and pendulum to have some doubt about whether Labor would retain a majority. The Australian reported the final Newspoll as pointing towards a Labor majority, albeit a reduced one.
Statewide polls did not predict that the Andrews government would increase its seat share despite a modest 2PP swing against it - a rather unusual result - and could not have done so because they lacked the regional data that (if accurate) would have been needed to do so. Statewide polls aim to predict vote share, not seat share, and were largely accurate on that basis.
B3: "Newspoll/polls predicted Labor would lose the 2022 federal election, or said it would be very close"
FACTS: The Scott Morrison-led Coalition lost the aggregated polling lead for the May 2022 election in February 2021 and never recovered. A tiny minority of outlier polls still had it possibly ahead during 2021 but on average it fell further behind and did not win any poll in the 2022 election leadup. In fact, in general during the leadup polls had Labor ahead by more than Labor eventually won by.
Some polls late in the campaign did suggest a hung parliament was a serious chance, though as with Victoria 2022 there was also a lot of media spinning to try to drum up the narrative of a close contest. However, even these polls did not amount to the election being a cliffhanger - in the end Labor very narrowly won a majority, but won the election comfortably because the crossbench was so large and many of the crossbenchers would not have supported a Coalition government.
The final Newspoll did not predict a Labor defeat or even a close result - it had Labor ahead 53-47 (the actual result was closer, 52.13-47.87). Moreover, YouGov (then publishing Newspoll) published a groundbreaking MRP seat projection which, while underestimating the success of independents and Greens, correctly predicted every "classic" two-party seat bar four (a remarkable achievement) and projected 80 Labor seat wins and a majority (Labor won 77 a few weeks later) at a time when Nine media and the AFR were spruiking a hung parliament as likely or inevitable.
No poll of the 2022 federal election was perfect, but the average projected final poll 2PP was 52.2 and the actual 2PP was 52.13 so anyone not satisfied with that won't be satisfied with anything. I judged Newspoll and Resolve to be the most accurate final national polls while finding Newspoll/YouGov to be the most useful outfit overall.
B4: "Newspoll/polls predicted Labor would lose the 2023 NSW election"
FACTS: A similar case to B2 and B3. Polling in the final six months of the 2019-2023 NSW term consistently showed Labor ahead on published or implied two-party preferred, generally by at least 52-48 and on average often by 54-46 or more. Interpreting the NSW election polling was difficult because the seat by seat pendulum was a bad one for Labor, meaning that Labor could win the 2PP without being clearly in a position to form government. By election day Labor was in a clearly strong position (whether it would convert to a majority or not) although many media wilfully turned doubts about whether Labor could win a majority into a spurious claim that the election was extremely close. In the end Labor won comfortably but narrowly missed a majority. The final Newspoll had Labor ahead 54.5-45.5 and the actual result was 54.26-45.74. The other final polls underestimated Labor, but not seriously, and possibly as a result of a genuine late swing in voting intention after they had stopped polling.
B5: "Newspoll has been consistently wrong at recent elections"
FACTS: Since revamping its methods following the 2019 mass polling failure, Newspoll has correctly predicted the winner of five state elections and one federal election, predicting the vote shares of four straight elections and a referendum in 2022-3 to within 1% on two-party preferred (or two-answer preferred for the referendum). (This is an outstanding feat, as the global average error converts to about 2.5%.)
B6: "Polls said No would win the same-sex marriage postal survey but Yes won easily"
FACTS: No public poll found No even close to winning the 2017 marriage law postal survey. In fact, the final polls all estimated Yes to be winning more clearly than Yes actually did, with the exception of one insufficiently reported ReachTEL. Newspoll was also very close to the correct answer. The only thing that might be mistaken for a poll that had No winning was a social media analytics study by Bela Stantic, who despite thereby making a predictive failure on a scale that would make even Ellis blush continues to be regarded as a Nostradamus of political prediction by some very gullible journalists.
B7. "The polls were wrong in 2022 because they failed to predict that the teals would cause the Coalition to lose the election!"
FACTS: The teals did not cause the Coalition to lose the election; Labor won an absolute majority of seats and would have done so even had every teal independent contesting the election lost. Labor won the election primarily because of a two-party swing in its favour, and the teals were a sideshow that damaged the Coalition's seat tally but did not affect the outcome of the election.
How many independents will win seats cannot be reliably predicted from national polling, as that is heavily dependent on how concentrated the independent vote is and where it occurs. It is a matter for seat polling instead. And there were in fact seat polls predicting the teal seat gains: Redbridge correctly predicted teal wins in all four of the teal seats it polled (Wentworth, North Sydney, Goldstein and Kooyong), with the average 2CP winning margin also correct; Utting Research predicted the teal gain and the margin in Curtin; and the final teal gain (Mackellar) had no public seat polling. (The YouGov MRP also predicted some of the teal gains but was not likely to be as well adapted to capture impacts of intense local campaigning of this sort.)
B8. '"Anthony Albanese leads Peter Dutton 58-26 as Better PM"
FACTS: A Newspoll graphic showing Albanese leading Dutton 58-26 is now and then virally tweeted with people claiming it to be a new Newspoll. It is in fact a very old reading from 29 March - 1 April 2023. Other such cases of zombie poll graphics have also been seen at various times. Following this one, another example involved Dr Victoria Fielding tweeting a September 2022 Resolve graphic showing Albanese leading 53-19 as supposed evidence that the Voice referendum had harmed support for Peter Dutton. This was community-noted by Twitter users and then deleted, but copies continued to circulate.
As for Newspoll specifically, it never had the LNP more than 55-45 ahead and its final estimate was 52.5-47.5 with the accompanying article saying the LNP was on track to win 48 seats - both slightly lower than the final numbers.
Section C: General Properties
C1: "Polls skew to the left because people who support right-wing movements are afraid to tell pollsters what they really think. Morrison, Brexit, Trump etc"
FACTS: This is the modern version of what used to be known as the "Shy Tory" theory, which was based around the idea that British voters feared being considered to be nasty people if they told a pollster they were going to vote for the Conservatives. This has always been an overrated theory, with most polling errors that were supposed to be caused by it being explained by other factors, at times including just not calling enough people who were going to vote Conservative in the first place, or calling too many who were going to vote for the other lot. (The 2019 Australian failure was most likely an example of this, primarily caused by unrepresentative samples and exacerbated by a kind of not necessarily conscious herding to a common expected outcome by some pollsters.)
The 2016 Trump surprise was not even a failure of national polling at all (the national polls that year were good - the problem was in a small number of crucial states, and in some cases happened because a state was so written off that pollsters did not bother sampling it enough). US polling errors in 2020 were in fact worse than 2016 but did not affect the winner. The US 2020 case saw a new possible cause of Shy Tory-like polling error, but it was not exactly the same thing: instead of lying to pollsters, some pro-Trump voters were just refusing to take polls at all. However, this issue seems at this stage to be just an American thing.
One major issue with "Shy Tory" adherents is that they cherrypick - they notice every time the polls overestimate the left, but they fail to notice cases like UK 2017, Victoria 2018 and New Zealand 2020 where the left has greatly outperformed the polling. They also fail to notice cases where the polls are accurate.
Another is that there's just no reason for any Tory to be shy when they are taking an online poll or punching numbers on a robopoll for that matter - they're not interacting with an actual human being. It was quite different with methods with human interaction such as live phone polling or (more so) face-to-face polling. Shy Tory theory has no place in Australian poll analysis. See also Armarium Interreta's comprehensive debunking of the theory as it applies to Australia.
C2: "Because Newspoll is released by MuRdOcH it is skewed in favour of the Coalition"
FACTS: It is bizarre people still make this claim following the 2022 election where Newspoll predicted Labor to win and Labor did, and the 2019 election where Newspoll predicted Labor to win and Labor lost, especially when Newspoll overestimated Labor's primary vote in both cases!
Newspoll has polled 13 Australian federal elections. Its average released final poll 2PP in that time has been 50.2% to Labor, and Labor's average result has been 49.6, meaning that the Newspoll brand has actually slightly overestimated Labor, not the Coalition. The Newspoll brand has also displayed very little average error either way at state elections at least since 2010.
It may surprise people that The Australian would support a poll with a fine history of neutrality, but doing so is a selling point for a newspaper that claims to be a leading broadsheet. If the poll had a history of skew that selling point would be lost. Collectively, News Corp outlets do make up for it by giving credence to a wide range of lower quality "polls" (see eg B1 above).
C3: "Morgan is way more accurate than Newspoll"
FACTS: This is another one that Bob Ellis was a major culprit for; he called Morgan "the accurate one" despite historic evidence to the contrary. Morgan has something of a cult following online from people who dislike other polls but the history of the poll is that it has often been wrong in Labor's favour. It predicted Labor victories in 2001 and 2004 (both of which Labor lost), with its 2001 miss losing it its contract with the Bulletin. It also had a 3.6%* 2PP error in 1996. It recorded a 2PP bullseye off not very accurate primaries in 2013, went AWOL with a month to go in 2016, was wrong like everyone else in 2019, and did OK but not brilliantly in 2022. At state level it has also not often topped my tables though it did for the 2022 Victorian election where its SMS poll was another 2PP bullseye. Morgan has frequently changed its polling methods and its current polls do not display the same skew as the old face to face polls and should not be judged based on the past - but Morgan still has to get more runs on the board before it can be considered as accurate as Newspoll. Morgan is also a rather bouncy poll from week to week, often showing substantial movements at times when aggregation says there's nothing to see. People using Morgan to justify denying other polls tend to cherrypick Morgan results that are good for Labor while ignoring that some Morgan results are very bad for Labor.
(* AMSRO polling failure report from 2019 gives 8.6% but I have not verified the existence of a final 55-45 to Labor Morgan in that year, which one would think would have been known about. I have gone with the figure based on Morgan's archives.)
C4. "Online polling cannot be accurate as people who do online surveys are unusual and not representative!"
FACTS: While it might seem that sooner or later some particular issue would cause people who do online surveys to break differently from the general population in a way that targeting and weighting couldn't fix, so far this hasn't been an issue. The fact that Newspoll could get 2PPs at four elections in a row within 1% shows that this sort of thing is not biting on a regular basis, if it ever bites at all.
Polling has never been entirely random - these kinds of issues have always been there (even when phone polling was dominant there were some people who would never answer the phone). But someone can't just say that online polling is especially untrustworthy without looking at its actual results compared to elections, which leads into ...
C5. "Polls are failing more and more! They're not as accurate as they used to be!"
FACTS: The perception of a sudden surge in poll failure has been a common one following high profile examples like Brexit and Trump 2016 (though the latter was only a failure in certain states anyway). But this is another case where confirmation bias is the culprit - when polls for an election are excellent, those who believe polls are failing more and more ignore that, cherrypick specific polls that were wrong, or complain about the polls not being exactly right as if they ever were. The claim that polling errors were getting worse over time was debunked by a worldwide study in 2018. In Australia, average polling has been significantly more accurate since the start of Newspoll in 1985, which is partly why 2019 was such a shock to the system. A part of this myth is that people expect it must be true because respondents have been getting harder to contact, but at the same time pollsters have developed more advanced ways of getting around that, and hence polls haven't got worse.
As well as the Trump 2016 case (where the national polling was largely accurate) there are other non-Australian examples where polls were in general right but people claim they were wrong anyway, a good example of this being Boris Johnson's win in UK 2019. Polls very accurately predicted vote shares in that election. Poll-based seat projections prior to the exit poll predicted Johnson would win easily but underestimated his seat tally. The exit poll was very close to the final results.
C6: "I've found the big secret, regular polls exist to influence opinion not to measure it!"
FACTS: The most prominent statement of this nonsense came from an error-riddled video by West Media in 2021. Cases where polling evidence that a party is popular makes that party become more popular are known as a "bandwagon effect". There is no evidence that the bandwagon effect is a thing in Australian politics outside perhaps Tasmanian state elections, and indeed in federal elections if anything it is commoner for the underdog to outperform their polling. (At state elections there's no effect either way.)
Those who are concerned about polls that exist to influence opinion should put that concern to more use and focus on the irregular commissioned issue polls released by lobby groups - but they won't because mostly the deniers come from the same side of politics. Many of these polls - and I'll single out the Australia Institute as the highest-volume and most irritating offender, but there are plenty of them - routinely feature preambles or question orders that are likely to affect the result. They are also used to make ridiculous claims that a high percentage of voters have a certain view about some obscure issue that the vast majority of voters have never heard of. Internal polling is also sometimes released to try to influence political outcomes or condition expectations rather than as part of a measurement exercise.
The bandwagon theory was especially common in the 2023 Voice referendum as despairing Yes supporters claimed that polls showing No winning were making the No position more socially acceptable and causing more and more people to move to No, and that newspapers like The Australian were printing polling for this reason. This ignores the fact that earlier in 2023 Yes was overwhelmingly winning the Voice polling and newspapers like The Australian published that fact. It is more likely polls were measuring a bandwagon that was happening naturally: as people discovered that people they respected were voting No, they felt safer in voting No themselves without fear of being considered racist. The 2019 election is another emphatic demolition of "bandwagon effect" - every poll for years showed Labor winning, Labor lost, some bandwagon huh?
Fans of bandwagon effect are prone to hypothesis switching. When it is pointed out that Labor failed to win in 2019 they switch to saying that in that case polls were asked to show Labor ahead to set Labor up for a fail. Ultimately no matter what the leadup polling or the outcome they will just invent some just-so story about how polling caused it.
C7. 'This poll is wrong! Nobody that I know thinks this! I did a Twitter poll and 95% agreed with me!'
FACTS: The purpose of polls is to try to tell us all what the people who we do not know are thinking. People can easily underestimate how much the people they know differ politically from the average (things like age, industry, location, interests and so on can all have strong political skews), and also the extent to which people they know can be social chameleons who try to avoid openly disagreeing with opinions, especially strongly expressed ones. And often the number of people a person has talked about political issues with in depth is unreliably small anyway.
On social media, people tend to follow people who have similar political views and interests, and will sometimes block people they encounter with divergent views. Political social media polls also tend to get retweeted/shared into echo chambers of support for one side or another. On Twitter, where the left-wing ghettos tend to be larger than the right-wing ones but the latter also definitely exist, I have seen Voice "polling" with results anywhere from 70% No to 95% Yes. Social media polls are not scientific samples no matter how many people choose to participate in them, because the way they spread is the opposite of balanced sampling.
C8. 'This poll polled only 1500 people nationwide, 1500 out of 15 million voters, that can't be accurate'
FACTS: People making this type of claim (or the same if it's 1000, 1200, 2000 etc) are ignorant of basic random sampling theory, a fundamental of statistics. I invite them to perform the following experiment. Take a bunch of ten normal coins and toss them together, write down how many were heads and how many tails (or you can do it electronically via Excel "random" numbers if you like instead). Do it again, and add the number of heads and tails to the first set of numbers - check what percentage have been heads so far. Keep going until you've done this 150 times and you will notice something amazing. While the early percentages may not be anywhere near 50%, by the time you get to 150 throws (=1500 coins) you might have 52% heads or 49% heads but it will not be 60% or 30%; it will pretty much without fail be pretty close to 50-50. Random sampling works at getting rather close to a population average efficiently, but to get very close requires luck or a huge amount of sampling.
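If you'd rather not toss 1500 physical coins, here is a quick simulation of the same exercise - it simply automates the experiment described above, nothing more.

```python
# Simulate the coin experiment: toss 10 coins per batch, 150 batches (1500 coins),
# and watch the running percentage of heads settle close to 50%.
import random

heads = tosses = 0
for batch in range(1, 151):
    heads += sum(random.randint(0, 1) for _ in range(10))
    tosses += 10
    if batch in (1, 5, 15, 50, 150):   # print a few checkpoints along the way
        print(f"After {tosses:4d} coins: {heads / tosses:.1%} heads")
# Early readings can stray a long way from 50%, but by 1500 coins the running
# percentage is almost always within a couple of points of 50%.
```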
We can also tell that this works by looking at how final polls perform at elections. If poll numbers based off a few thousand voters had no relation to reality then polls would on average be all over the place, but even in bad years they are still mostly within a few percent.
In reality polls are not as simple as random sampling as there are all kinds of factors that make them not truly random (see also: "Margin Of Error" Polling Myths), but some of these (like respondent targeting for those polls that do it, or like the use of last-election preference flows to model two-party preferred) can actually in effect reduce the random factor.
A common misconception is that a sample can't be effective if it is a tiny proportion of the whole. In fact, how reliable a sample is usually has far more to do with the number of people sampled than with the size of the population being sampled from. A sample of 1500 out of 15 million is more accurate than a sample of 500 out of 1000.
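For those who want the arithmetic behind that comparison, here is a back-of-envelope version using the standard margin of error formula with a finite population correction. It assumes simple random sampling, which (as per the "Margin Of Error" Polling Myths piece linked above) real polls only approximate.

```python
# 95% margin of error (worst case p = 0.5) with finite population correction.
# Assumes simple random sampling, which real polls only approximate.
from math import sqrt

def margin_of_error(n, N, p=0.5, z=1.96):
    fpc = sqrt((N - n) / (N - 1))       # finite population correction
    return z * sqrt(p * (1 - p) / n) * fpc

print(f"1500 of 15,000,000: +/-{margin_of_error(1500, 15_000_000):.1%}")
print(f" 500 of      1,000: +/-{margin_of_error(500, 1_000):.1%}")
# Roughly +/-2.5% versus +/-3.1%: the 1500 out of 15 million sample is the
# tighter one, because what matters is mainly the number sampled, not the
# fraction of the population sampled.
```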
C9. 'I don't trust the polls, I have never been polled and nor has my 18 year old daughter!'
FACTS: Every poll covers about 0.01% of the voting population of Australia. So even if everyone had an equal chance of being polled, the average person would wait about 10,000 polls between contacts - or say a few thousand after adjusting for non-responses. There are often about 100 national voting intention polls per year. Especially if one lives in a boring seat that doesn't get a lot of seat polls, one might get polled only a few times in one's voting life, if that.
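The rough arithmetic behind those figures, using the same round numbers as in C8 above:

```python
# Rough arithmetic only, using the round figures from C8 above.
voters = 15_000_000        # round figure for the Australian electorate used above
sample = 1_500             # typical national poll sample
polls_per_year = 100       # roughly, for national voting intention polls

chance_per_poll = sample / voters              # 0.01%
polls_between_contacts = 1 / chance_per_poll   # 10,000 polls
years_between_contacts = polls_between_contacts / polls_per_year

print(f"Chance per poll: {chance_per_poll:.2%}")
print(f"Expected wait: {polls_between_contacts:,.0f} polls "
      f"(~{years_between_contacts:.0f} years of national polling)")
# Even with equal chances for everyone, a given voter would wait on the order
# of a century of national polling between contacts.
```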
But unlike in the days when one had a good chance of being polled randomly sooner or later provided one actually answered the phone or the door when unknown callers dialled or knocked, these days many people's details are no longer accessible to pollsters. Polling by mobile phone only reaches those whose mobiles happen to end up on the right market research databases; there is no White Pages equivalent that lists all mobile phones. As for online polling - now the dominant form in Australia - that only reaches people who choose to join market research panels, and even then a panel one signs up for or gets invited to join (eg through some shopping loyalty schemes) might or might not have polls on it, and one might or might not get selected for them. People who don't take steps to increase their chance of being polled might well never be polled. (Some people on the other hand are poll magnets and get quite a lot of polls through various channels - for instance they might be on online survey panels where their area is poorly represented.)
An analogy is jury duty. Someone might never be called up in their life but that doesn't mean they'll start thinking that juries don't exist. Another is lotteries, just because you've never won a lottery prize doesn't mean nobody wins the lottery. Especially if you didn't buy a ticket.
C10. 'Sure Australian pollsters can poll elections, but referendums are totally different and they've never done one before. I don't think pollsters can poll referendums.'
FACTS: While there has not been a referendum as such in federal politics since 1999, there was the 2017 Marriage Law Postal Survey, which was actually harder in theory to poll for because pollsters had to figure out who would "vote" in a voluntary ballot and who wouldn't. And while currently active polling staff generally won't have polled in previous referendums, there is still material about polling methods and results available to them, especially for 1999 but also for 1988 and even snippets back to at least 1951. There is also plenty of overseas evidence that ballot measures are pollable, even if their polling might not be as reliable as for elections. If anything, such polling is somewhat prone to overestimate support for changes to the status quo, no matter what side of politics proposes it.
Of course it's still possible that something might cause polls on average to be significantly wrong in a specific referendum, in either direction. But that's also true of elections. And it often doesn't matter as much in referendums, because the outcomes are usually not close.
The 2023 Voice referendum showed that Australian pollsters could poll referendums very well indeed!
C11a. '20,000 people rallied in support of a party/cause today! The polls saying that party/cause is trailing must be wrong!'
C11b. 'The polls say people support something but its opponents had 500 submissions to a government enquiry compared to only 20 from supporters. The polls must be wrong!'
FACTS: Polls aim to measure what the population in general think - in most cases in the simple form of support for a given party or cause, though strength of support is also sometimes measured in broad terms. They do not aim to measure how committed the very most committed supporters of one side of an issue are. Large rally crowds may seem like impressive evidence of what the population in general thinks, but it's rare to get even 2% of the population of a mainland state to a rally for a cause. Political history is full of cases where huge rallies were held in support of causes that proved to be unpopular - Gough Whitlam rallies during the 1975 election being one of the more famous examples. Overseas, other election-losers to have attracted huge crowds of enthusiastic faithful include Donald Trump and Jeremy Corbyn. The rally claim was especially prominent in the Voice referendum leadup, but a Redbridge poll found that Yes supporters were far more likely than No supporters to rank the Voice as one of the most important issues for them at that moment. No duly won the referendum easily.
A similar thing applies for submission processes - they tend to only attract people who are both heavily invested in an issue and willing to devote time to it. It's common for them to attract submissions organised through activist groups who encourage supporters of their cause to put in submissions, and often then seek to make claims based on the numbers. But the great majority of voters are not even aware that these processes exist, let alone motivated to express a view through them.
C12. The polls for this election/referendum/leader are all over the place and moving in different directions! How can I trust anything they say?
FACTS: Different polls will often give somewhat different readings because of differences in methods, differences in dates in field, differences in where they get respondents from and also random sample noise. In referendums and plebiscites (and also in voting intention in the case of Essential) some of the differences are also driven by different ways of treating "undecided" voters.
The important thing is not to pay too much attention to any one specific poll (especially if it has a history of favouring one side much more than others) and to consider the spread of different results. A spread of values is on average a good sign because polls are acting independently rather than copying each other. When there is an unusually wide range of results, someone is going to be wrong, but the polls as a whole are more likely to be somewhere near the outcome. This is why polling analysts make aggregates of data from different polls. We should be more concerned when, as in 2019, all the polls are getting very similar results, as this may be a sign that pollsters are letting themselves be influenced by things outside their data (other pollsters' results, their own previous results, general expectations etc). In general if a specific election poll is an outlier compared to others it is more likely to be wrong, but that doesn't necessarily apply to referendums or issue polling.
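As a minimal sketch of what aggregation does (the poll figures here are invented; real aggregates also weight by sample size, field dates, house effects and pollster track record):

```python
# Invented example: several polls of the same question, combined.
# Real aggregates also adjust for sample size, dates and house effects.
polls = {"Poll A": 52.0, "Poll B": 54.5, "Poll C": 53.0, "Poll D": 55.0}

aggregate = sum(polls.values()) / len(polls)
spread = max(polls.values()) - min(polls.values())

print(f"Aggregate: {aggregate:.1f}  Spread: {spread:.1f} points")
# The individual polls range from 52 to 55, but the simple average (53.6 here)
# will usually land closer to the eventual result than a poll picked at random,
# which is the point of aggregating.
```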
When regular polls move in different directions, often the movements are individually not statistically meaningful. Frequently when this occurs it's safe to treat them as cancelling out because nothing is actually changing.
-----------------------------------------------------------------------------------------------------------------------
(More will be added as I see them often enough to be included.)
Thanks for the excellent writeup Kevin.
Quick note, there seems to be a cut-off sentence under Section B, Part B3 at the end of the second paragraph ("in the end Labor very narrowly won a majority, but won the election comfortably because ..." then cuts off).
Here's one that could come under C: "The published polls must be wrong because party/organisation X has private polling showing something different."
Possible responses.
1. If we don't get to see the private polling figures, the questions that were asked, the sampling methodology used, etc, we have no way of judging the likely accuracy of the private polling, and in such circumstances to believe that the claimed private polling results are more likely correct than the published polls is an act of faith in an unqualified assertion by an interested party.
2. If the private poll is something like the irregular commissioned polls produced by lobby groups, the critique of those polls provided under C6 applies.
3. Private polling by parties or candidates may be more fine-grained than the published polls. For example, they may be polling voters in particular seats or regions and thus picking up trends that nationwide or statewide polling wouldn't. The Greens polling that picked up Max Chandler-Mather winning Griffith in 2022 could have been a case of this. In these cases some of the points made in B2, B3 and B7 could apply, but it does not mean that the nationwide or statewide polls are wrong at the level of resolution at which they operate.
4. As amply explained in the post, the published polling in recent times has generally been accurate and is becoming more so. Willingness to believe rumours about private polling entails a refusal to acknowledge this fact.
Bob Ellis was such a notorious and persistent crank on this issue that I think you could list him as a separate independent entry ...
Excellent article, many thanks. Polling denialism is an obvious case of "shoot the messenger". While most of these denialists seem to be on the Left, the "shy Tory" excuse is equally contemptible. Why stigmatise fellow conservatives as paranoid cowards? I recall how Roy Morgan became a joke through face-to-face polling, guaranteed to skew results to the time-rich and income-poor. I once consented to such an interview, an interminable array of irrelevant commercial products with politics taking up 1.0% of the time.
C3 - the paradox is that, before Crosby Textor, Roy Morgan used to be the Coalition's pollsters (formally from 1975 to 1993, and informally before that going back to the 1950s). In Coalition campaign lore, there were suggestions it had some responsibility for some near misses.