Tuesday, May 21, 2019

The Miracle Is Over: The 2019 Australian Federal Election Poll Fail


Nice 2PP.  Shame it's for the other side ...

"I have always believed in miracles" said re-elected Prime Minister Scott Morrison very late on Saturday night.  But many (not all) of us who study national Australian polls and use them to try to forecast elections have believed in a miracle for one election too many.  The reason we believed in this miracle was that it kept delivering.  While polls failed to forecast Brexit, Trump and two UK elections in a row (among other high profile failures) Australian national polls continued to churn out highly accurate final results.  The two-party preferred results in final Newspolls from 2007 to 2016 are an example of this: 52 (result 52.7), 50.2 (result 50.1), 54 (result 53.5), 50.5 (result 50.4).  

Predicting federal elections pretty accurately has long been as simple as aggregating the polls, adjusting for obvious house effects and personal votes, applying probability models (not just the simple pendulum) and off you go; you generally won't be more than 5-6 seats wrong on the totals.  While overseas observers like Nate Silver pour scorn on our little polling failure as a modest example of the genre and blast our media for failing to anticipate it, they do so apparently unfamiliar with just how good our national polling has been since the mid-1980s compared to polling overseas.  As a predictor of final results, the aggregation of at least the final polls has survived the decline of landlines, volatile campaigns following leadership changes or major events, suspected preferencing shifts that frequently barely appeared, herding with the finish line in sight, and come up trumps many elections in a row.  This has been put down to many things, not least that compulsory voting makes polling easier by removing the problem of trying to work out who will actually vote (another possibility is the quality of our public demographic data).  But perhaps it was just lucky.
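For readers unfamiliar with the approach, here is a minimal sketch of what a probabilistic (rather than simple) pendulum looks like. This is not any aggregator's actual model: the seat margins, projected swing and seat-level error spread below are all hypothetical.

```python
from math import erf, sqrt

# A minimal sketch of a probabilistic pendulum, not any aggregator's actual model.
# Seat margins and the seat-level error spread below are hypothetical.
seat_margins = {"Seat A": 0.5, "Seat B": 2.1, "Seat C": 4.8}  # Coalition margins (points)
projected_swing = 1.0   # projected 2PP swing to Labor (points)
seat_sigma = 3.0        # assumed SD of individual seat results around a uniform swing

def labor_win_prob(margin, swing, sigma=seat_sigma):
    """Probability Labor overturns a given margin, assuming normal seat-level noise."""
    z = (swing - margin) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))

for seat, margin in seat_margins.items():
    print(seat, f"{labor_win_prob(margin, projected_swing):.0%}")
print("expected gains:", round(sum(labor_win_prob(m, projected_swing) for m in seat_margins.values()), 1))
```

The point of doing it this way rather than with a simple pendulum is that seats inside the projected swing are not treated as certain gains, and seats outside it still contribute some probability of falling.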


For sure, the same success has not been seen in individual seat polling (extremely unreliable for many possible reasons), and state polling hasn't been great lately, with final polls underestimating the Victorian state government's landslide re-election by 3.3 points on 2PP.  And there have been misadventures in the past, like Nielsen's 57-43 to Labor in 2007 (caused by doing their final poll too early) and Morgan's 54.5 to 45.5 to Labor in 2001 (which lost them their contract with the Bulletin).  But even so, such errors tended to be cancelled out by other pollsters, meaning that aggregation of the final polls hasn't been much more than a point wrong since 1993.  Even in 1993, contrary to popular myth, the final polls probably should have been taken as pointing to a slightly more likely than not Keating victory.  We have to go back to 1980 for the last "polling disaster" case (some background here) where polls clearly predicted the wrong winner.  And at the last few elections, national polls had been excellent, if a little on the high side for Labor in 2010.  It seemed some lessons of some earlier failures had been learned.

And so, when warning signs appeared, in the form of both the Coalition moving within historical striking distance, and then a ridiculous level of herding first called out in an excellent post by Mark the Ballot, I went through the motions of warning that there was a realistic if slim chance that the polls were all baloney. Based on historic projections, there was maybe a 25% chance that the Coalition would win anyway (a la Trump vs Clinton).  But after so often flogging the poll failure risk horse and having my fears proven groundless, my heart wasn't totally in it.  After seeing blowouts compared with final polling in five of the last six state elections around the country, and in seat polling where clusters of 50-50s and 51-49s were often followed by lopsided margins, it looked more likely that Labor would win by more against a government that nine months ago had set itself on fire.  To the extent that this was a widespread view (especially after the death of Bob Hawke) it was another instance of Nate's First Rule: polls can be wrong, but trying to second-guess which way they will be wrong if so nearly always backfires.

A Failure In Two Parts

To outline the nature of the failure briefly, especially for audiences overseas unfamiliar with Australian polling, this is how the final polled primary votes and two-party preferred (2PP) figures compare with the current counting towards the final result.


(* Average excludes Essential, which did not provide a breakdown for UAP.)  

The primary vote for the Liberal-National Coalition government was underestimated by about three points and the opposition Labor Party's was overestimated by two points.  Errors on the other parties were minor.  The two-party preferred (2PP) figure on the far right hand side, however, is the key figure in Australian polling, because whichever of the Coalition and Labor wins the nationwide two-party-preferred vote will usually form government (with some exceptions when it is very close).  The 51.6% is my estimate based on swings in the ABC's election night projections; currently the live count is at 50.9%, but this live count excludes several seats where the Government and Opposition are not the final two candidates, or where they were wrongly expected not to be.

Depending on exactly where the 2PP ends up, a small part of the circa 3 point miss on the 2PP may be caused by modelling error regarding how the preferences of the minor parties would flow.  Nearly all of it, however, has been caused by getting the primary vote for the major parties wrong.  (The exception is the Ipsos poll which has a long-unfixed habit of getting its Green primaries a few points too high, and also tends to get Labor low compared to others - which cancels out as 82% of Green preferences flow to Labor.)
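To show how a 2PP estimate is assembled from primary votes and preference-flow assumptions, here is a minimal sketch. The roughly 82% Greens-to-Labor flow is from the text; the primary votes and the other flow rates below are purely illustrative, not any pollster's published model.

```python
# A minimal sketch of converting primary votes into a 2PP estimate using
# assumed preference flows. The ~82% Greens-to-Labor flow is from the text;
# the primaries and other flow rates are illustrative only.
primaries = {"coalition": 0.39, "labor": 0.37, "greens": 0.09, "onp": 0.03, "others": 0.12}
flows_to_labor = {"greens": 0.82, "onp": 0.35, "others": 0.50}   # assumed flow rates

labor_2pp = primaries["labor"] + sum(primaries[k] * f for k, f in flows_to_labor.items())
print(f"Labor 2PP: {labor_2pp:.1%}, Coalition 2PP: {1 - labor_2pp:.1%}")
```

A small error in any of those flow rates shifts the 2PP a little, but as noted above, most of this election's miss came from the major-party primaries themselves.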

So that's the first part of the error - that, on average, pollsters had the 2PP wrong by about three points.  This by itself might be dismissed as part of the "house effect" of polling at this election.  But there is more: the final seventeen published polls (and the Galaxy exit poll as well) all had a rounded 2PP for the government of 48, 48.5 or 49.  The sample sizes of these polls varied from several hundred to around 3000, so if the average had really been around 48.5 the smaller polls would have had about a one in three chance of landing in this band randomly, while the larger polls would have done so about 60% of the time.  The chance of all seventeen doing so - if they are independent and purely random samples - is a little under 1 in 200,000.  But in fact polls aren't purely random, and the use of weighting in the polls should increase their margin of error and make the streak even less likely.  As Professor Brian Schmidt pointed out in the Guardian, the mathematics do not lie.  Either some of the polls were not pure samples, some were not independent of the same pollster's other polls, or some were not independent of each other.
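A rough reconstruction of that calculation, under the normal approximation: the per-poll probabilities below match the "one in three" and "60%" figures above, while the joint figure depends on the actual (unpublished) mix of sample sizes, which is assumed here purely for illustration.

```python
import math

def prob_in_band(n, p=0.485, half_band=0.0075):
    # Chance a simple random sample of size n lands within +/- 0.75 points of a
    # true 2PP of 48.5 (normal approximation to the binomial).
    se = math.sqrt(p * (1 - p) / n)
    return math.erf((half_band / se) / math.sqrt(2))

print(f"{prob_in_band(800):.2f}")    # ~0.33 for a small poll
print(f"{prob_in_band(3000):.2f}")   # ~0.59 for a large poll

# Joint probability for 17 independent polls, assuming ~2000 respondents each
# (illustrative only; the real sizes ranged from several hundred to ~3000):
p_all = prob_in_band(2000) ** 17
print(f"about 1 in {1 / p_all:,.0f}")   # same order of magnitude as 1 in 200,000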

The failure is amplified because not a single poll in the entire term of government showed the government leading, with the exception of five obviously dodgy respondent-allocated 2PPs in the short-lived YouGov-Fifty Acres series.  The government hadn't even tied a poll, except a single Ipsos on respondent preferences, since 2016.  The old saw about the only poll counting being the one on election day held true, except that in Australia now, election day goes for three weeks, robbing the pollsters of the usual excuse for this sort of thing (that people simply changed their minds.)

Is this like Trump or Brexit?

Recent major failings of polling like Trump and Brexit have created an impression that polling is getting much worse, when actually, worldwide, it isn't - it's as mediocre as it always was.  It just happens that there have been failures on some particularly momentous and close contests that have cast polling in a bad light.

But the failure that has happened here is actually worse than the Trump failure.  In the Trump case, national polls were actually quite accurate - they projected that Donald Trump would lose the popular vote, which he did, though not by quite as much as they projected.  It's the equivalent of a 1 point 2PP failure in Australia, which in this instance would probably have seen Bill Shorten in the Lodge but without a floor majority. The main failure in the USA was in the polling in a few particular states crucial to Trump's win in the Electoral College.

It is also slightly worse than Brexit.  The average error in the Brexit case was very similar, but the polls were not herded - two pollsters' final polls had Leave winning, albeit by less than it did.

Why were the polls, on average, wrong?  

One of the problems in saying why the polls were, on average, so wrong is that Australian pollsters, especially YouGov-Galaxy which also administers Newspoll, don't tell us very much about how they are doing their polling.  This frustrating opacity is a big contrast to many UK pollsters whose released results come with lengthy and detailed reports (example).  For instance, we know that when Galaxy took over Newspoll it commenced augmenting phone polling with online samples, but the breakdown of the two methods isn't published.  Another example is that in late 2017 the pollster changed the way it dealt with One Nation preferences in constructing its 2PP estimate.  The change was well justified, but the pollster did not make it publicly known until psephologists had gradually detected the issue over the following five months, during which time The Australian had continued to claim that the poll was using last-election preferences.

Getting accurate samples in polling is increasingly difficult.  No major Australian pollster still uses purely landline-based sampling, nor has since 2015, but one still often sees claims that they do.  Nowhere near everyone has a landline, answers it, or takes a recorded-voice call.  Live phone calls are expensive and prone to social-desirability bias (as is face-to-face polling), a possible source of Ipsos' constant inflation of the Greens vote.  Pollsters lack access to a complete database of mobile phone numbers, so some people can be reached by mobile phone and some can't.  Online polling is another solution, but not everybody likes spending their time filling out surveys on a computer for trivial returns.  This particular failure has cut across all of these polling modes.

Here is one explanation that has been offered that is definitely false:

* Margin of error: People casually familiar with margin of error are claiming that the failure was within the margin of error of the polls.  But margin of error applies to the results of a single poll, not a string of them.  One poll might be wrong at the outer edges of its margin of error, but if even two polls in a row by the same company do this in the same direction then there is already a problem.  (This is a variant of error 3 in my list of margin of error myths).  Also, the failure was outside the margin of error of the largest polls taken.  The 17 clustered polls collectively had a sample of about 23,600, on which the margin of error would be 0.6%.  A miss four or five times that margin is roughly nine standard errors from where the pooled sample should have landed if the polls were right - in other words, extremely improbable (see the sketch below).
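A back-of-envelope check of that last point, using the pooled sample; the 48.5% polled share and the three-point miss are approximations taken from this post.

```python
import math

n_pooled = 23_600                        # combined sample of the 17 clustered polls
p = 0.485                                # roughly what the polls showed for the Coalition
se = math.sqrt(p * (1 - p) / n_pooled)   # standard error of the pooled estimate
print(f"pooled MOE: {1.96 * se * 100:.2f} points")           # ~0.64 points
print(f"a 3-point miss is {0.03 / se:.1f} standard errors")   # ~9.2
```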

Here are some explanations that have been advanced that in my view are non-starters:

* Late swing: the idea here is that those making up their minds on the day swung to the Coalition but made that decision too late to be included in the sample.  The problem with this is that prepoll voting was going on more or less throughout the period of these wrong polls, and prepolls actually showed a greater swing to the Coalition (2 points 2PP) than election-day booth voting (0.8 points) - figures as of Sunday morning.  Of course, the prepoll voting mix has changed a lot with the massive increase in prepolling, but even so, expecting that to make 4 points of difference the other way seems a bit much.

* Rolling late swing: the idea here is that some voters were intending to vote Labor but chickened out because of scare campaigns on the way to the ballot box (or post box), whether they voted before polling day or on it, and voted Coalition instead.  This theory avoids the problem with the prepoll and booth swings mentioned above.  However, if this were the case, the polls should have shifted to the Coalition through the campaign by about a point as these voters reported back their actual vote.  This didn't happen.  All else being equal, it should also have led to a larger gap than last time between the votes of those who had already voted and those voting on the day, in the few polls that provided a breakdown of this.  In the one I saw, Ipsos, it didn't.

* "Shy Tory effect": the idea here is that conservative voters are afraid of telling the pollster they vote Liberal because they think the interviewer will think they are a bigot.  But unless a respondent is very paranoid, they're hardly likely to care about admitting they vote Coalition to a robopoll or an online survey.  Also, there is no systematic skew to Labor in recent Australian polling - the Victorian election with its 3.3 point skew to Coalition in final polls being an example.  If ever Tories had a campaign to be shy about, that one, with its "African Gangs" beatup, would surely be it.

Here are some explanations that in my view are plausible (at least as partial explanations):

* As with overseas polling failures, pollsters may have been oversampling voters who are politically engaged or highly educated (often the same thing).

* Connected to the above, some pollsters may have been underweighting (or failing to set quotas for) other important information.  There is too little information available about what Australian pollsters actually adjust their samples for, but what is available often refers just to age, gender and location.  Pollsters should not be expected to declare their exact formulae, but a list of everything a pollster weights for would be useful in picking cases where the polls might be missing something (a toy sketch of this mechanism follows below).

* There could well have been an unusual skew to the Coalition among politically disengaged voters who are basically unreachable by any method by pollsters, but are still required to vote (a way in which compulsory voting can make polling more difficult, not less).  Pollsters just have to trust that the unreachables behave like those they can reach with the same demographics.

I may add others as I see them mentioned. 
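To make the over-sampling/under-weighting mechanism concrete, here is a toy post-stratification sketch on a single assumed variable (education). All figures are invented; this is not any Australian pollster's actual procedure.

```python
# A toy post-stratification sketch on a single assumed variable (education).
# All figures are invented; not any Australian pollster's actual procedure.
population_share = {"degree": 0.30, "no_degree": 0.70}        # assumed census shares
sample = {
    "degree":    {"n": 600, "coalition": 0.44},   # over-represented, leans Labor
    "no_degree": {"n": 400, "coalition": 0.55},   # under-represented, leans Coalition
}

total_n = sum(g["n"] for g in sample.values())
unweighted = sum(g["n"] * g["coalition"] for g in sample.values()) / total_n
weighted = sum(population_share[k] * g["coalition"] for k, g in sample.items())

print(f"unweighted Coalition share: {unweighted:.1%}")   # 48.4%
print(f"weighted Coalition share:   {weighted:.1%}")     # 51.7%
```

If a variable like this correlates with vote and isn't weighted or quota'd for, the estimate drifts towards the over-represented group - which is exactly the kind of miss being hypothesised here.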

Herding, smoothing etc

Even if some explanation can be found for the average skew of the polls at this election, it doesn't explain the run of seventeen polls in a row with the same wrong result.  This sort of thing is often referred to in polling studies as herding.  Nobody wants to be the lone pollster with the completely wrong result while all the others are right (a la Morgan in 2001) so the myriad subjective choices that can be made in polling may result in struggling pollsters being more likely to get results similar to better pollsters.  But if the normally good pollsters then make mistakes, the whole herd is dragged off course.  In reducing the risk of being the outlying pollster, herding pollsters increase the risk that everyone is wrong.  An appearance of herding has been common at recent Australian elections including the 2016 federal election, and sometimes this takes the form of one or more pollsters who have had different results to Newspoll early in a campaign saying the same thing as Newspoll at the end.  

It's not clear that there was actually any herding this election.  An alternative possibility is that pollsters could be self-herding, and just coincidentally happened to do so around the same range of values as each other.  This could happen if pollsters were using some form of unpublished smoothing method to stop their poll from bouncing around and producing rogue results.  Galaxy has always had uncanny stability, and when it took over Newspoll we started seeing such things as, at one stage, the same Newspoll 2PP six polls in a row.  Essential has also in the past been prone to get "stuck", but seems to have behaved more naturally recently.  Galaxy has also displayed strange behaviour in seat polling at both of the last two elections - when it repeat-polls a seat, the difference between the two polls is on average a little over a point on 2PP, about half what it should be randomly even if nothing has happened in the campaign in that seat.  Also in 2016, Galaxy's seat polls had strangely underdispersed swings.
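For a sense of scale on that repeat-polling point, here is a quick sketch of the expected gap between two independent polls of an unchanged seat, under an assumed typical seat-poll sample size (the 550 is my assumption, not a published figure).

```python
import math

# Expected gap between two independent polls of the same seat if nothing changes.
n = 550                                   # assumed typical seat-poll sample size
single_sd = 100 * math.sqrt(0.25 / n)     # sampling SD of one poll's 2PP, ~2.1 points
diff_sd = math.sqrt(2) * single_sd        # SD of the difference of two such polls
mean_abs_gap = diff_sd * math.sqrt(2 / math.pi)   # mean absolute gap for a normal variable
print(f"expected average gap: {mean_abs_gap:.1f} points")   # ~2.4, versus ~1 point observed
```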

An article by former Nielsen pollster John Stirton raised concerns about the strange lack of volatility in Newspoll since Galaxy took over but the claim has been neither confirmed nor denied to my knowledge.  (There's not necessarily anything wrong with it, but if a poll isn't a pure poll, this should be declared.)

It's worth noting that the primary votes do not display the same level of herding as the 2PP, for reasons including Ipsos' trademark inflation of the Greens vote, and also Essential having One Nation too high.  But it's largely irrelevant because the 2PP is the figure that is used to forecast results and that would be the obvious concern for any pollster worried about their reputation.  In theory, herding the 2PP could also be a factor in decisions about how to allocate preferences from minor parties.  The four different pollsters (counting Galaxy and Newspoll as one) applied four different preferencing methods.  Galaxy is known to have changed theirs at least twice during the term, while what Ipsos did is unclear based on published information.

Were the seat polls better?

David Briggs of YouGov-Galaxy has referred to the many seat polls that Galaxy issued in the final week as pointing to the correct picture and has complained that when these results were released they were scoffed at because they didn't fit the narrative.

I'm aware of 16 such results from the final week.  In fact, on current figures (which won't change much) they were on average 3.2 points better for Labor on 2PP than what actually happened, so they were just as bad as the national polls in that regard.  Across the whole campaign, the average 2PP/2CP error per seat for the 21 Coalition-vs-Labor/Greens seats polled by Galaxy/Newspoll (some seats polled more than once) was about 2.7 points.   It's true that all these polls showed only two wins for Labor in Coalition seats, both of which they actually won.  It's also true that these polls showed Labor behind on average in two of their own seats, both of which they lost.  And of the 18 such seats where these polls showed one side ahead on average, they picked the right winner in 16, with two in doubt where if the wrong side gets up, it will be by a very slim margin.  So that's impressive.

But these seat polls did not reveal anything to cast doubt on the picture in the national polls, because they had the same swing as the national polls.  In the final week Galaxy had Labor at 50-50 in four Coalition seats and 49-51 in five Coalition seats.  If these were broadly accurate samples Labor would have won three or four of these seats by chance, but Labor has actually won none, has lost one of its own seats (Herbert) where the final Galaxy had it at 50-50, and is a tossup in another (Macquarie) that Galaxy had it leading 53-47.  Moreover, only one of the nine 50-50 or 49-51 seats has ended up even remotely close.  There were massive errors in two Queensland seats (Labor-held Herbert and LNP-held Forde) that Galaxy had as 50-50; both are now showing the LNP over 57%.
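A quick check of the "three or four by chance" figure, treating each of the nine polled margins as unbiased; the three-point poll-to-result error spread is my assumption for illustration.

```python
from math import erf, sqrt

# The nine close Coalition seats, by Labor's polled 2PP in the final-week polls.
polled_labor_2pp = [50, 50, 50, 50, 49, 49, 49, 49, 49]
error_sd = 3.0    # assumed poll-to-result error spread (points)

def p_labor_win(polled):
    # Probability Labor's actual 2PP exceeds 50 given the polled figure.
    z = (polled - 50) / error_sd
    return 0.5 * (1 + erf(z / sqrt(2)))

print(f"expected Labor wins: {sum(p_labor_win(x) for x in polled_labor_2pp):.1f}")   # ~3.8
```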

The final week polls were also, yet again, somewhat under-dispersed for their small sample sizes, with a standard deviation of around two points both for the released 2PPs and for the swings they implied, compared to three points in the actual results and four points in the actual swings.  (About three points is normal.)  This repeats an issue that was seen at the last election, and raises the question of how on earth Galaxy keeps getting results, at both national and seat poll level, that are less variable than they should be by chance.

Were the internal polls better?

When there is an unexpected result one usually hears that the losing party had this result in its internal polling all along, released only to a select list of senior people.  Invariably this comes as news compared to what little could be gleaned about internal polls from pre-election spoonfeeding of the media and, in rare cases, leaks.  As a general rule, any claim about internal polling that is politically convenient for the source making it should be ignored unless they publish the full details.

In this case, we have got a rare glimpse under the hood of Labor's internal tracking polling, which reveals that Labor chose to contract YouGov-Galaxy for their internals even while the same company was also providing Newspoll and Galaxy polls to the Murdoch press (which was editorially hostile to Labor during the campaign).  This is quite a surprise in itself, especially following the concerns about another pollster, uComms, having union ties that were not well known to clients.  It raises the question of whether Australia has enough good pollsters to be able to avoid hiring someone who might be conflicted.  (There's an illuminating history of other such issues by Murray Goot. It is known to me that ReachTEL often declines contracts to avoid having a conflict in the market.) YouGov-Galaxy have pointed out in the article that the two contracts were "siloed", meaning that the polling teams used for Newspoll and for the Labor research were entirely separate.

The tracking poll covered 20 seats with an average pre-election 2PP of 51.4% to Coalition.  As an average through the campaign, the tracking poll showed Labor getting a 0.8% swing, enough to probably govern in minority but probably not with a majority.  At the end it kicked up to a 1.4% swing, probably enough for a slim majority government and completely consistent with the late reports from an unnamed Labor source that they expected to win about 77-78 seats (but hoped for more).

The interesting thing is that the tracking poll dipped below easy-win territory (51 and above) after only five days of the campaign, and at some points even had Labor losing based on the swing in these marginals.  If Labor were taking this polling seriously they should have noted that it was telling a different story to the national polls early in the campaign, and should have realised that they needed to cut back on risks.  Labor's tracking poll was as wrong as everything else at the end, but it wasn't always so.

There are, however, suggestions that the Liberal Crosby-Textor internal polling was rather accurate.  Again, we need to see detailed figures on this.

Where to from here?

I mentioned before that Australia's last fully-fledged polling failure in a national election was in 1980.  Commenting on that case in 1983, David Butler wrote "Nowhere in the world has a debacle for the polls diminished their use in subsequent elections".  But I wonder.  The frequency and diversity of Australian polling (especially at state level) had already declined sharply in recent years compared to the period 2013-5, though Australian pollsters had done little to deserve that.  This downturn was sometimes attributed to the Donald Trump upset casting pollsters in an unflattering light, though I think the increased dripfeeding of journalists with stories culled from internal/commissioned polling also has a bit to do with it.  Where does an already weakened industry go from here?

Obviously pollsters will have to review to try to determine what went wrong.  They need to make this review broader than just in-house and bring in independent experts (retired former pollsters, perhaps overseas pollsters, professionals from related industries) to provide a detached view of the fiasco.  The reviews must seek to explain not only the error on the averages but also why the polls clustered in the last weeks of the campaign, resulting in no pollsters releasing Coalition win results that, even if apparently stray, would have at least increased awareness that the result wasn't completely in the bag.  There is a stark contrast between the last four elections and the elections before them at which final results from different pollsters varied widely.

But I also think the Australian polling industry, and also the media who use polls, need to try to win back the trust of the public by being far more upfront about what polls are doing and how they are doing it.  This extends both to the public polls, which need to be far more comprehensively reported than they are now, and also to commissioned seat polls.   Seat poll results spoonfed to media by left-wing groups in the leadup to this election were often extremely left-skewed (much worse than the national polls), with the exception of GetUp!'s polling of former Prime Minister Tony Abbott's impending defeat in Warringah (which was quite accurate). 

The unhealthy synergy of activists and parties spoonfeeding poll-shaped objects to reporters, thus giving them free content in return for what is nearly always uncritical coverage of deeply dubious issues-polling questions, must end.  Pollsters that wish to be reputable need to commit to standards along the lines of those of the British Polling Council, so that any poll reported in public (whoever commissions it) is available for public discussion of its methods and full results.  Ideally, media should refuse to report polls unless the source is willing to release their full results, but that's probably a bit much to hope for.  (It's encouraging that at this election some journalists were refusing to accept seat polls with very small sample sizes, but there's a long way to go.)

As serious as the issues with the public polling are, they pale beside the rottenness and shoddiness of so much of the commissioned polling being reported on, which affects the whole industry's reputation.  When I have been able to get hold of results for commissioned polls, I have often seen errors I did not expect - naming some candidates but not others, absurd breakdowns by gender, previous-vote reporting that demonstrates a biased sample, spreadsheet errors and so on.

Polling in Australia attracts an incredible volume of daft conspiracy theories.  Any Newspoll showing a benign result for the Coalition is dismissed as a Murdoch plot, even if independent polls are getting the same result or worse.  Discredited tropes created by the famous author and clueless polling crank Bob Ellis still continue to spread on Twitter years after his death.  The stupidity and evidence-aversion level of a lot of this stuff is quite staggering and pollsters could hardly be blamed if it was giving them a dose of Stockholm Syndrome.  But at the same time, the pollsters are partly responsible for the ignorance that breeds in a vacuum, because they've given the public the mushroom treatment for so long, and then they seem surprised when opinionistas and "Twitter experts" make outdated comments about how their polls work.   How they cannot see that this is self-inflicted is beyond me.

I hope this failure will lead to a better and more open polling industry.

(See also later article: Oh No, This Wasn't Just An "Average Polling Error")

40 comments:

  1. Antony Green cited landlines as a factor.

    1. Yes; unfortunately an incomplete explanation because the failure occurred across every available polling mode including non-phone polling.

    2. .. and he's the one that gets the TV gig ... disgraceful!

    3. In fairness to Antony, polling commentary isn't his primary job on election night and he probably wouldn't have been expecting to have to comment on poll failure in this context given how accurate past national polls have been. He might not have even had reason to look at them especially closely.

      I just do worry that the outdated nonsense about pollsters only polling landlines will be even harder to dispose of now!

  2. Well done Kevin...........I have just posted on Poll Bludger a much more simplistic and unsupported comment, lacking the rich data you have - a grouch along the same lines. You did warn us, as did William. You sensed something was not quite right and you have been vindicated. In my view, the failure of the pollsters to get it right/better is the issue. A lot of political careers have been destroyed over the last few years - on both sides of politics - based on "the polls say this/that/or something else......."

  3. Just goes to prove, the only poll that counts is the one held on election day.

  4. What explains the exit polls being wrong too? Network Seven predicted a 13 seat majority for Labor. I presume they do face to face questions of people leaving the polling booth, in different areas? In the past, have exit polls been more accurate than the final opinion polls?

    1. Exit polls have been pretty rubbery in Australia in the past; they are often a few points out, and have recently skewed to Labor. The Galaxy one covered a range of marginal seats.

  5. Did you know the November Frog circa 1989-90?

    1. Not the first time it's been mentioned in this place; see comments at https://kevinbonham.blogspot.com/2013/08/if-you-care-about-gay-rights-vote-below.html#comment-form

  6. I saw somewhere that many psephologists got the sack over this. Is that true?

  7. What do you make of the purple and white signs in Chinese, right next to the purple and white AEC signs, instructing people to "Vote 1 Liberal"? No Liberal logo or colours. Oliver Yates is considering taking it to the Court of Disputed Returns. This occurred in a few Victorian seats.

    1. I'm unsure whether signs like that are illegal (it might depend on the translation!) and would be interested to see a test case. There's no electoral law against having a sign in AEC colours. There's no law against generally misleading material - the issue is misleading the voter in relation to the casting of their vote. This has usually been interpreted as relating to mechanics, but if it could be argued that the sign implied that any other vote was informal, then it might have legs, but I couldn't say.

      Whatever, such signs should be banned if not found to be already covered. Ditto for NSW Labor's "Important: Just Vote 1" rubbish at the NSW state election. It should be illegal to distribute material that gives a false impression of being electoral material and this should be specifically covered under electoral law.

  8. Thanks for this excellent analysis.
    Don't want to be too tinfoil hat brigade here, but what about the possibility of political dark arts or black ops a la Cambridge Analytica? There is evidence of targeted horror social media campaigning, and also Russian-style MAGA bots trolling #Auspol discussions.

    1. There was quite a lot of dodgy social media stuff going on at this election (eg see Alice Workman's article at https://www.theaustralian.com.au/inquirer/death-by-a-thousand-posts-in-social-media-wild-west/news-story/11e146b344c5d795eef05f8555ecabaa ). But no matter where it comes from, if someone's succeeding in influencing voting intention with it, that should be reflected in the polling.

  9. Thanks for the article.

    re. Shy Tory Effect. What evidence do you have that robocalls decrease, if not eliminate, this effect? As a starter, what proportion of people even realise they are talking to a robot?! Even if the answer to that is close to 100%, it's not obvious there is no cause for embarrassment in this situation...

    1. I'm not aware of any evidence on the proportion of people who mistakenly think a robopoll is a real person, save that I haven't encountered any examples of this anecdotally and can't find any on Twitter with the obvious search terms. Most people seem to realise immediately that it's a recorded message. However there are some scam robocalls that do try to trick people into thinking they are a real person (eg by using emergency language or responding if the recipient says something).

      To the extent that any form of social-desirability bias exists, differences between automated calling and live polling were often seen in polling for US ballots on same-sex marriage, and I also saw strong differences in this via ReachTEL in the early days of the debate (but by the time of the same-sex marriage vote the evidence for a difference by mode was unconvincing). Live polling in Australia is also more prone to getting the Green vote too high than automated polling, but we have only a small number of companies to compare in that regard, so it may just be coincidence that those who do so are live pollsters.

      It's also worth mentioning that the Shy Tory Effect is vastly overrated in terms of actual evidence for it. In the case of UK 1992 (the polling fail that coined it) it was only, in the end, considered to explain a small portion of the error (equivalent to a 1% 2PP miss in Australia.) In other UK cases where it has been hypothesised it has often ended up being discounted by reviews, with unrepresentative sampling being found to be the main culprit instead.

  10. Is the herding problem due to pollsters not reporting 'rogue' or unwanted results? If polls are regularly published, an absent result would be easy to spot.

    1. In most cases there is no way a pollster would not report a rogue result in its national polls as it would mean the commissioning source has wasted their money on the sample, quite aside from the reputation damage if it was obvious a pollster was in the field but then didn't publish. So the question is rather whether any pollsters employ adjustments that would make rogue results less likely.

      In the case of Morgan specifically, while it was releasing polls on a regular schedule during the campaign, previously it has released its poll intermittently and sometimes retrospectively, eg you only find out the result of the previous week's Morgan when it releases a new one. It has been very difficult to determine whether Morgan is regularly conducting its poll but only irregularly releasing it (at least for free), or whether it is regularly released to paying customers and, if so, how one subscribes to it. (That said, I haven't asked them directly; it's their responsibility to make this clear on their website.) If Morgan is indeed conducting polling that it only sometimes releases, the question is on what basis it makes this decision (though I suspect avoiding rogue results isn't it).

  11. Are you familiar with Prof. Bela Stantic?

    1. Vaguely. Although his method did apparently predict Brexit, Trump and this one it also failed enormously in the same-sex marriage postal survey where it predicted a narrow win for No and Yes got over 61%. (https://www.weeklytimesnow.com.au/news/national/samesex-marriage-twitter-says-no-university-study-shows/news-story/75a1d3781e6de72b554be4a84ff9e874) I can't see how a method can fail by so much in one contest and yet work reliably and consistently at others. I'd be interested to know more about what data are available to allow them to weight their sample, eg how can they tell how old a social media user is.

    2. Yes, would be interesting. It seems to be the future of psephology.

  12. Thanks for a terrific analysis. Like your suggestion that sampling and re-weightings associated with the overeducated and disengaged could be an explanation. But why wasn't this an issue in the previous election? Have polling techniques radically changed in the last 3 years? I was also wondering whether how they treat undecided responses could be an issue. Rather than Shy Tories I was wondering whether we were dealing with Conflicted Socialists. Traditional ALP voters may have been conflicted over ALP tax proposals and therefore truly undecided when polled but voted Coalition at the ballot box. That would be a one-off effect given the unique tax issues at this election that weren't present in previous elections. Accuracy in a poll therefore called for re-weightings associated with share and investment ownership rather than geography and demography.

    1. At the previous election both sides ran very middle-class inner-city style campaigns that would have appealed to educated voters so any over-sampling of them would have been unlikely to make a difference.

  13. I like to track the Morgan poll. Its drawback is that it suffers from social-desirability bias and perhaps misses Sunday churchgoers. The advantage is that it is door-to-door so doesn't suffer any of the issues experienced using phone or preselected groups. Deducting 2 percent from the stated TPP for the incumbent from their final face-to-face (ignoring any final telephone) has proven interesting. Here are the figures going back a number of years. In years where they didn't publish a TPP I have made assumptions, eg 80% of Greens go to Labor.
    First is Morgan's prediction, second is the actual result, third is the year and fourth is the date they did their polling (you probably have better figures).
    52.2 52 1972 November
    50.4 50 1974 May
    41.6 44 1975 December 6
    51.8 47 1977 November 19/20
    51.8 50 1980 October 11/12
    56 53 1983 February 26/27
    56 52 1984 November 24/25
    52.6 50 1987 July 4/5
    49.6 49 1990 March 21/22
    51 50 1993 February 27/28
    50 46.4 1996 February 28/29
    55.5 51.4 1998 September 19/20
    54.5 49 2001 November 3/4
    51.5 47 2004 October 2/3
    53.5 52.7 2007
    57.50 50 2010 7–8 Aug 2010
    48 46.5 2013 September 3
    50 51 2016 June
    52 49 2019 12-May

    The three Newspolls for NSW, Victoria and this election all also understated support for the incumbent by 2-3%.

    1. Final NSW Newspoll only understated the incumbent's 2PP by 1 point. Also final Qld Newspoll overstated the incumbent's 2PP by nearly 2 points (but that was a preference modelling issue), and final Newspolls also overstated the incumbent's 2PP in WA and would have done so in SA had they bothered to publish one.

  14. Maybe we shouldn't even have a polling "industry". Does it serve any useful social purpose other than giving a few statisticians a job? A bit of material for endless mainstream newspaper articles might be something we'd be better without.
    You (Kevin) would have to go back to snails and chess, of course, which would be a pity as I do enjoy reading your stuff!

    1. If you don't have polls (even if they may now and then fail badly) then the space gets filled by pundits, and that's worse. (Well it may get replaced by social analytics or fundamentals models, but both of those have their issues too.) However pollsters will need to make this sort of thing a rare event for their product to remain useful.

      As for me there is plenty of other psephology work that I could do without polls. I see all the following as more important:

      * covering election post-counts and doing live on the night cover
      * election guides for Tasmanian elections
      * system reform, eg to get rid of Group Ticket Voting

      It seems my readers agree. Only five of the top 50, and none of the top 20, articles in my site's history ranked by unique visitor numbers have been about polling.

    2. What is the likelihood of above the line voting being eliminated? Electoral reform has to happen, surely? E.g. being no1 on a Labor or Liberal ticket in Tasmania in the Senate means you never really face the people.

    3. Not high, I would think. If you get rid of ATLs then for it to mean anything you have to bring in Robson Rotation which would increase the cost of ballot printing. You'll also get people numbering more boxes which means more time on data entry and more costs in programming the processing of ballots. It would be interesting to know what all this would cost but I suspect it would be quite a lot.

      The other issue with it is political literacy. It's fine for Tasmania where we're used to voting in Hare-Clark elections and could easily survive without ATLs but other states are not used to it. Some electorates have over 10% informal in the House of Reps, and that's even with voters being given HTV cards that they can copy. There is also the question of how practical it is to run individual-candidate campaigns with candidates campaigning against each other across whole states like NSW.

      I suspect the major parties would also never agree to a national abolition of ATL because they would be terrified of losing preferences to leakage.

      Perhaps individual states should be allowed to abolish ATL. However I suspect only Tasmania, ACT and SA could cope with it.

    4. (Allowing individual states to abolish ATL would require a constitutional change - Section 9 requires uniform methods.)

    5. I've no formal political or pseph background but I'd hate to see a 'reform' which was likely to raise the number of informal votes or act to selectively disenfranchise a section of the electorate. If people can't number the handful of candidates for the HoR in their selected sequence, then removing ATL Senate voting is an issue.

      I think Kevin more or less just said that.

  15. There is something polls can't take into consideration. My personal experience as a typical Labor voter was I found it unexpectedly difficult to vote for a leader I didn't like. If I was asked prior to the election I would have dutifully said Labor in the lower house, Greens in the Senate. I had a very strange experience in the booth though where I found it necessary to switch my brain off to vote for Bill. I had no confidence in what I was doing and could not envisage him as PM, though I really did try. I was never going to vote Liberal 1, but with only a little less conviction my vote may have ended up preferenced there. If I had been questioned in an exit poll I may well have lied, feeling guilty about letting the side down. I think Bill Shorten's unpopularity likely played a bigger role than is being mentioned, and may have caused traditional Labor voters to mislead pollsters and themselves about their voting intentions.

    1. That is a very interesting comment. I suspect (but cannot prove) that the errors in the polling flowed through to the leadership polling, and if the polling hadn't had these issues it would have been showing Scott Morrison with net positive ratings rather than net zero, Bill Shorten with higher net negatives than he was polling at the end, and Morrison with meaningful (20+ point) Better Prime Minister leads.

  16. Why do the faulty polls always favour the progressive side?

    1. They don't! For instance in both Victoria and WA the polls underestimated Labor's victory margin, in Victoria seriously so. In the UK in 2015 the polls greatly underestimated the Conservatives but in 2017 they greatly underestimated Labour. Australian seat-by-seat polling skewed strongly to the Coalition in 2013 and did so somewhat again in 2016. In the US you get misses on either side.

