Wednesday, July 29, 2020

Essential's New 2PP Plus - What Is It And Does It Make Any Sense?

Following the 2019 polling failure, Essential Research has taken a very long time to return to voting intention polling.  Two weeks ago in an action that some might think was trolling the pollster (but I couldn't possibly comment) I took the unweighted voting intentions data from over a year of Essential polls and converted them to a pseudo-poll series.  I showed that this unweighted data series followed some of the patterns seen in the public polling by Newspoll and the sometimes-public-at-the-time polling by Morgan, with all these series showing ALP leads during the summer bushfire disaster, switching to Coalition leads as the COVID-19 situation came to dominate politics.

I showed that in the unweighted data Labor had had rather large leads briefly during the bushfires, switching to substantial and steady Coalition leads from late April onwards, but I suggested that there were various reasons why these large leads might not survive the application of weighting.  Nonetheless the broad patterns in the data seemed worth keeping an eye on.

It's presumably coincidence that just two weeks after I released this piece, Essential have finally returned to the voting intentions fray, but they have done so in a unique manner.  This article discusses what they've done and why, and their just-released results.  I think what they have done is interesting but I disagree with nearly all the reasons they have so far advanced for doing it.


2PP+

The 2PP+ concept is fairly simple.  Instead of releasing a 2PP that adds to 100% because undecided voters have been excluded, Essential will release a 2PP that includes an undecided component.  For instance, this week's is Labor 47, Coalition 45, undecided 8.  The same numbers would normally be released as a 2PP of 51-49 to Labor, though if the pre-rounded figures were, say, 47.4-44.6-8.0 they could come out at 52-48.  
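For those who like the arithmetic spelt out, here is a minimal sketch (my own illustration in Python, not anything Essential publishes) of how a 2PP+ reading converts back to a traditional 2PP by dropping the undecided component and renormalising the two majors:

```python
# Minimal sketch (not Essential's code): drop the undecided component of a 2PP+
# reading and renormalise the two majors to 100 to get a traditional 2PP.

def two_pp_from_2pp_plus(labor, coalition, undecided):
    """Traditional 2PP (Labor, Coalition) implied by a 2PP+ reading."""
    assert abs(labor + coalition + undecided - 100) < 0.5  # components should sum to ~100
    decided = labor + coalition
    return 100 * labor / decided, 100 * coalition / decided

# This week's figures (Labor 47, Coalition 45, undecided 8) round to 51-49 ...
print([round(x) for x in two_pp_from_2pp_plus(47, 45, 8)])        # [51, 49]
# ... but pre-rounded figures of 47.4-44.6-8.0 would round to 52-48.
print([round(x) for x in two_pp_from_2pp_plus(47.4, 44.6, 8.0)])  # [52, 48]
```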

Because several percent are usually undecided, it will be rare for either party to break the magic 50%, and Essential's graph (shown about 30 minutes into the video here) shows no case of a party breaking 50% in the 22 readings this year so far.  The idea is that in a reasonably close election, the poll will paint a picture of the result not being in the bag and the party that is leading still needing to rely on (and convince) some share of the undecided vote in order to win government.  In the context of 2019, this would have meant that with Labor ahead by, say, 48-46, the Coalition would still have been in a position to win the 2PP if it could corner two-thirds of the undecided vote.  (It turned out the Coalition didn't actually even need to win the 2PP, but that's another story.)
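To put a number on that 48-46 scenario, here is a tiny sketch (my own, assuming the undecideds all end up voting for one major or the other) of the share of the undecided vote the trailing party needs in order to reach 50:

```python
# Tiny sketch (my own) of the 48-46-6 scenario above: the share of the undecided
# vote the trailing party needs to reach 50, assuming the undecideds all end up
# voting for one major party or the other.

def required_undecided_share(trailing_2pp_plus, undecided):
    """Share of the undecided vote the trailing party needs to reach 50."""
    return (50 - trailing_2pp_plus) / undecided

# Labor 48, Coalition 46, undecided 6: the Coalition needs two-thirds of them.
print(round(required_undecided_share(46, 6), 3))   # 0.667
```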

A heavy break one way of the undecided vote is a much-used excuse for polling failures, because it allows the pollster to say that their poll was basically OK and that Labor really was ahead, but it was just those pesky undecided voters who ruined it.  Essential are not (in this explanatory piece) saying that their poll was otherwise perfect, but they are placing a lot of weight on undecided voters as a reason why journalists should be wary of poll-driven certitude in the final days of a close campaign, when there are two other relevant reasons for such wariness.  Firstly, voters who had a clear voting intention could change their minds (actual late swing), and secondly, polls could simply have technical errors such that they were never getting voting intention right in the first place.

There is very strong evidence that the latter is the primary cause of the poll miss in 2019.  Explanations involving late swing (whether in the form of undecided voters breaking one way, voters changing their minds or both) fail available empirical tests that might have supported them.  For instance, the extent to which voters voting before the day were more likely to support the Coalition compared to those voting on the day changed little, whereas had there been a large late swing to the Coalition, the Coalition's on-the-day numbers would have been closer to their combined prepoll and postal numbers than normal.  If anything the evidence is the opposite: the difference was slightly larger than in other recent years despite the increased pool of pre-day voters. 

I think that the new method will do a good job of stressing that a close lead does not mean a guaranteed win, but I suspect it will lead to yet more of an already over-abundant trope of election coverage - articles claiming that undecided voters hold the key, and attempting to find out who undecided voters are and what makes them tick.  However, there seem to be few things that either journalists, or for that matter pollsters (when picking panels for leadership debates), are worse at than finding genuinely and representatively undecided voters.  It's not for want of practice down the years, either.  The recent release of the Palace Papers showcased a historical example of abject failure in this genre - the 1975 ANOP Swingers Survey, which "found" that swinging voters mostly thought Labor deserved to win the election and that the biggest issue in the election was the Dismissal, which Labor was still maintaining its rage about but which the voters were rapidly moving on from.  

I wonder also, in the context of election prediction, whether pollsters are really getting the right handle on which undecided voters really matter.  Some undecided voters won't even vote.  A voter who will definitely preference the Coalition but is undecided whether they will vote 1 Coalition or 1 One Nation is relevant to the primary votes but their 2PP vote is locked in.  Treating them as genuinely undecided deflates the Coalition's score.  A voter who will definitely vote Greens but has no idea who they'll preference, on the other hand, is undecided about who they want to win the election. Essential's approach (and ditto for all the other pollsters I'm aware of, effectively) is to treat a voter as undecided on 2PP if they are undecided (after prodding) about primary voting intention.  

Preferencing

Essential has also unleashed a new approach to preferences.  Normally, last-election preferences are the most reliable way to predict the preference flow at an election, while respondent preferences are less reliable, both because they increase the volatility of the 2PP result and because they tend to skew to Labor.  However, the 2019 election was one of three out of the last 14 at which preferences have shifted enough to alter the result by around a point.  One of the reasons for this was a shift in One Nation preferences from around even in 2016 to 65-35 in 2019.  Another was the emergence of the United Australia Party, whose preferences flowed to the Coalition by a similar amount.
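For readers unfamiliar with the mechanics, the last-election method works roughly as sketched below (my own illustration; the primaries and the Greens/Other flows are made-up numbers, with only the One Nation shift taken from the figures above).  It shows how a flow shift of that size moves the Coalition's 2PP by around a point on unchanged primaries:

```python
# Rough sketch (my own) of the last-election-preferences method: each minor party's
# primary vote is split between the majors using the flow observed at the previous
# election. The primaries and the Greens/Other flows below are made-up illustrative
# numbers; the One Nation flows reflect the roughly even 2016 split and the roughly
# 65-35 split to the Coalition in 2019 mentioned above.

def coalition_2pp(primaries, flows_to_coalition):
    """Coalition 2PP implied by primary votes and assumed preference flows."""
    coalition = primaries["Coalition"]
    for party, vote in primaries.items():
        if party not in ("Coalition", "Labor"):
            coalition += vote * flows_to_coalition[party]
    return 100 * coalition / sum(primaries.values())

primaries  = {"Coalition": 41, "Labor": 34, "Greens": 10, "One Nation": 4, "Other": 11}
flows_2016 = {"Greens": 0.18, "One Nation": 0.50, "Other": 0.50}   # assumed flows
flows_2019 = {"Greens": 0.18, "One Nation": 0.65, "Other": 0.55}   # assumed flows

# The same primaries give a Coalition 2PP about a point higher on the 2019-style flows.
print(round(coalition_2pp(primaries, flows_2016), 1))   # about 50.3
print(round(coalition_2pp(primaries, flows_2019), 1))   # about 51.5
```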

Essential's rationale for this change is:

"Last election, this created a distortion because One Nation (which had allocated preferences against all sitting MPs in 2016) decided to preference the Coalition. We will now be asking participants who vote for a minor party to indicate a preferred major party."

But if party preferencing decisions are driving preference shifts, then respondents will not anticipate these decisions in advance, which is the perennial problem with respondent preferencing.  For the reason Essential states, we should only expect respondent preferences to beat last-election preferences if a party is going to change its preferencing decisions and respondents shift their own preferences away from their last-election behaviour before that change happens.

Anyway so far this year, the Coalition's share of Essential's respondent preferences is running at the same level as at the 2019 election, which isn't great news for Labor as 2019 was Labor's worst performance on preferences in a very long time.

In fact, preferencing assumption errors were a very minor part of the 2019 polling failure, but that is partly because the then YouGov Galaxy moved away from last-election preferences in correct expectation of them failing.  In Essential's case, their 6.06 point miss on the Coalition-minus-Labor 2PP margin included 5.80 points attributable to primaries that underestimated the Coalition's margin over Labor, so the error in Essential's 2PP came almost entirely from the primary votes.
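The decomposition works roughly as follows (a sketch of my own with placeholder numbers rather than Essential's actual final primaries): apply the same preference-flow assumptions to the poll's primaries and to the actual primaries, and the gap between the two implied margins is the part of the miss the primary votes account for.

```python
# Sketch (my own, placeholder numbers - not Essential's actual final primaries): apply
# the same preference-flow assumptions to the poll's primaries and the actual primaries.
# The gap between the two implied Coalition-minus-Labor margins is the part of the miss
# attributable to the primary votes; the remainder is down to the flow assumptions.

def coalition_margin(primaries, flows_to_coalition):
    """Coalition-minus-Labor 2PP margin implied by primaries and assumed flows."""
    coalition, labor = primaries["Coalition"], primaries["Labor"]
    for party, vote in primaries.items():
        if party not in ("Coalition", "Labor"):
            coalition += vote * flows_to_coalition[party]
            labor += vote * (1 - flows_to_coalition[party])
    return 100 * (coalition - labor) / (coalition + labor)

flows = {"Greens": 0.18, "Other": 0.55}                                                # assumed
poll_primaries   = {"Coalition": 38.5, "Labor": 36.0, "Greens": 10.0, "Other": 15.5}  # placeholder
actual_primaries = {"Coalition": 41.4, "Labor": 33.3, "Greens": 10.4, "Other": 14.9}  # placeholder

primary_component = coalition_margin(actual_primaries, flows) - coalition_margin(poll_primaries, flows)
print(round(primary_component, 2))   # portion of the margin miss explained by the primaries
```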

Batched releasing

Essential has decided to get out of the horse-race routine of trotting out a new poll result every couple of weeks and trying to explain it in isolation, with all the tedious speculation over frequently meaningless 1-2 point fluctuations that goes with it.  They will be publishing batched results for the previous three months every quarter, so that each set of results will include a run of half a dozen or more polls.  It will be interesting to see whether this is modified as the next election approaches, and at what point.  

This means Essential will have the advantage of being able to present a longer-term narrative at any given point, because it is easier to pick patterns in hindsight with the advantage of more data.  On the other hand, expectations may have already become cemented by the more regular Newspoll results, and this may result in Essential's results being taken less seriously.  There was a fair bit of this on social media today: because people generally believe the Liberals are ahead, a 47-45 result to Labor had people asking what was wrong with the poll.  Had Essential and Newspoll been coming out together and getting different results, such expectations might have been weaker.

Anyway this will make life very difficult for anyone trying to do an ongoing polling aggregate of the sort I've done in the past, since retro-releases of data by both Morgan and Essential will mean that data have to be back-entered and the aggregate retrospectively redone.  (I may attempt such an aggregate soon but I'm going to think about it for a bit first).

Finally ... The Results!

Essential has released findings for the six polls taken since the start of June, in which the Coalition has averaged 46.5 on the 2PP+ measure and Labor has averaged 46.  This is the equivalent of a threadbare Coalition lead of around 50.3-49.7 in normal 2PP polling, though Labor has led the last two.   By comparison, over this period Newspoll has had an average 51.7-48.3 lead to the Coalition, albeit from only three polls.

The recent Eden-Monaro by-election is the only thing we have had that remotely approaches a real-world test of the polls, and the Coalition slightly exceeded historic expectations there, so I was not too surprised when Newspoll moved from 51 to 53.  At the time of the by-election, both Newspoll and Essential had (effectively) 51.  Essential's readings with Labor in front in the last two polls seem unlikely, but as we learned at the last election, if the polls are always saying the same thing as each other there's a problem, so divergence (up to a point) is good.  

Essential has also issued a graph (in a couple of different versions) which shows the 2PP+ pattern through the year.  Labor's biggest lead comes out at what would have been a 53-47 or 54-46 by normal methods, and the Coalition's comes out at what would have been a 54-46, but the Coalition doesn't have the same consistently large leads as in the unweighted data that I graphed.  Indeed, the weighting in the previous six polls has consistently swelled Labor's primary, increasing it by 3.5% on average.  This clearly does not hold for the full run of results Essential presents, as it shows lower leads for Labor during the bushfire months than some of its unweighted data had.  One or other of the reasons for caution re the unweighted figures that I flagged in the previous article may explain what is going on here - from time to time, the extent to which a particular party's voters are underpolled may vary.  

Converting Essential's graphed 2PP+ results to traditional 2PPs in order to compare them with Newspoll, I find that Essential's series and Newspoll's series have displayed house effects relative to each other.  Comparing the ten released Newspolls for the year so far with the nearest matching Essentials, I find that on average Newspoll has been about a point higher than Essential for the Coalition, peaking at 4 points higher in the current poll.  
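The comparison is simple enough to sketch (my own illustration with made-up readings, not the actual matched pairs): convert each Essential 2PP+ pair to a traditional Coalition 2PP, pair it with the nearest Newspoll, and average the gaps.

```python
# Minimal sketch (my own, made-up readings): convert Essential 2PP+ pairs to a
# traditional Coalition 2PP, pair each with the nearest Newspoll, and average the
# gaps as a rough relative house effect.

def coalition_2pp_from_plus(coalition_plus, labor_plus):
    """Traditional Coalition 2PP implied by a (Coalition, Labor) 2PP+ pair."""
    return 100 * coalition_plus / (coalition_plus + labor_plus)

essential_plus = [(46, 47), (47, 46), (46, 46)]   # (Coalition, Labor), illustrative only
newspoll_2pp   = [51, 51, 53]                     # nearest matching Newspolls, illustrative only

gaps = [npoll - coalition_2pp_from_plus(coal, alp)
        for (coal, alp), npoll in zip(essential_plus, newspoll_2pp)]
print(round(sum(gaps) / len(gaps), 1))   # average Newspoll-minus-Essential gap for the Coalition
```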


Such differences between pollsters develop from time to time and do not necessarily mean anything; we will have to keep an eye on this to see how the comparison between the two goes as the year unfolds.

2 comments:

  1. Interesting analysis - do you think that the apparently "normal polling practice" of simply excluding undecideds instead of evenly divvying them up between the two majors in 2pp (as American psephologists apparently do with the Republicans and Democrats) might cause a slight tendency to overestimate the 2pp of whoever's leading?

    e.g. if the pollster had 48-40, excluding undecideds would result in a reported 2pp of 55-45, while dividing up the undecideds evenly (hence keeping the margin the same, but changing the relative Coalition:Labor 2pp ratio) would produce a 2pp of 54-46, resulting in 2 points of error.

    Furthermore there may or may not be tendencies for undecided voters to break one way or the other which may help to produce a more accurate breakdown.

    1. Sorry, I've only just seen this comment - sometimes the notification emails for comments don't arrive. To answer the second question first, pollsters generally prod "undecided" voters first to see if they will even say if they are leaning one way or the other, and those who do express an intention when prodded are included in the headline figure (except in most cases for ReachTEL, uComms and some lower profile outfits). Whether "hard undecideds" break a certain way predictably is hard to say but I think there needs to be more probing of why they are undecided.

      As for the first question, if Newspoll had split undecideds evenly on a 2PP basis rather than excluding them, that would have made its final federal polls since it started slightly more accurate (because there is a pattern at federal elections for the party that is leading to on average underperform.) However at state elections the same thing doesn't hold. Generally it's only likely to make a difference after rounding when one side is leading by a lot.

