The Australian polling landscape following the 2019 polling failure has been very sparse on national voting intention results. Newspoll has continued to appear regularly, but Ipsos has not reappeared following the loss of its contract. ReachTEL is now very inactive as a pollster (and indeed was for some time before the election). Morgan has continued polling, but releases voting intention results close to the time they were taken only when it feels like it, and has back-released a confusing mess: some results with detail, others that are just dots on graphs, and it is difficult to reconcile the information between the two. The subject of this article, Essential, has continued polling all the sorts of items it previously polled, except for voting intention.
Quite why Essential has stopped polling voting intention, and whether it is at their own initiative, that of their client the Guardian, or a mutual decision, remains unclear to me. In July 2019 this sounded like a temporary thing:
"However, over the next few months we are working to improve our two-party preferred modelling. In the interim we won’t be publishing voting intentions, however we will still report on issues of contemporary political interest."
This was no reason not to publish voting intentions in the first place (one could simply publish primary votes, use preference flows from the previous election, or estimate preference flows), but more importantly it has now been over a year and voting intentions have not returned to Essential. This means that if Essential issues an unusually strong leadership rating, for instance, it's not possible to look at the voting intention results and see whether it might just have been an unusually strong sample for that leader's party. It also means there is no public second opinion if someone thinks Newspoll's results are possibly unusual - has the Green vote really been 12-13% for much of this term? Is the Morrison Government really only 51-49 ahead at the moment? (etc.)
Anyway, whatever the reasons for Essential getting out of the horserace game, it is what it is. However, what Essential have been doing is publishing unweighted information about their sample's voting intentions, broken down into Coalition, Labor, Greens and Others. An undecided component can also be inferred, as Essential retains respondents who are unsure of their voting intention for the purposes of other questions. I thought it would be interesting to have a look at the trends in these unweighted voting intention results. They actually look quite a lot like what you might get from a poll. In particular, they look rather a lot like Morgan.
In all, since Essential started publishing this data in late July last year, I've found 29 cases where they published questions broken down by party support with the sample size of each party's supporters shown. These occurred fortnightly through to March 2020, and weekly (but with some gaps of varying length) since. The sample size of respondents who express a voting intention averages 959, for an in-theory maximum margin of error of 3.2%. Such a sample would be expected to be a lot more volatile than Newspoll.
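For anyone wanting to check the arithmetic, the maximum margin of error quoted above is just the standard simple-random-sample figure at 50% support and roughly 95% confidence. A minimal sketch in Python (the 959 is simply the average sample size mentioned above):

```python
import math

def max_margin_of_error(n, z=1.96):
    """Theoretical maximum margin of error (at 50% support, ~95% confidence)
    for a simple random sample of size n, returned as a proportion."""
    return z * math.sqrt(0.5 * 0.5 / n)

print(round(100 * max_margin_of_error(959), 1))  # ~3.2 percentage points
```

In practice online panel polls are not simple random samples, so this is a lower bound on the real uncertainty, but it is the conventional "maximum margin of error" figure.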
This is what the "results" of these unweighted samples look like (blue = Coalition, red = Labor, green = Greens, grey = Others):
The graph shows especially weak "results" for the Coalition just after the summer break, with a nadir of 33.9% in early February during the bushfire fallout. However there is a strong recovery in March and by the end of April (week 50) the Coalition "vote" is higher than it was at any time earlier in the sequence. At around the same time both the Labor and "others" vote decline, while the Greens vote does virtually nothing through the whole period.
The averages of the unweighted data (Coalition 40.4, Labor 34.6, Greens 10.5, others 14.4) are compellingly similar to those of Newspoll through the same period (41.3-34.4-11.9-12.4), though the Coalition average in the Essential series is boosted by the greater density of data points in the COVID-19 period compared with earlier. However, this at least suggests that the amount of scaling that would normally be needed to correct for demographic biases in Essential's samples is not large, and that their raw samples may be reasonably representative on a party-support basis. Others are probably overrepresented in the raw data, given that they polled 14.8% at the 2019 election but support for the United Australia Party is generally taken to have dropped away since, in the absence of further advertising splashes from the boss.
Two-party preferred
2PP estimates for Essential's unweighted data can also be calculated using last-election preferences, though this probably overestimates the Coalition's position by around 0.6% if the UAP vote has genuinely declined to a point or so. The low in this series for the Coalition is 45.2% in early February and the high is 53.8% in early May. Here is a chart showing a rolling three-week average of the 2PPs of this series (I did it this way because I don't have any really good graphing software and this was the easiest format to convince Excel to draw lines in!)
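As a rough illustration of how a last-election-preferences 2PP can be derived from the four published primary groupings, here is a minimal Python sketch. The preference-flow figures are approximations I have inserted for illustration only (Greens preferences splitting heavily to Labor, "others" leaning to the Coalition, broadly in line with 2019); they are not the exact flows used for the series above. The rolling three-week average is likewise just the obvious moving mean.

```python
# Approximate share of each minor grouping's preferences flowing to the Coalition.
# Illustrative assumptions only, not the exact 2019 AEC flow figures.
FLOW_TO_COALITION = {"GRN": 0.18, "OTH": 0.60}

def two_party_preferred(primaries):
    """Estimate (Coalition, Labor) 2PP from primary percentages,
    e.g. {"LNP": 40.4, "ALP": 34.6, "GRN": 10.5, "OTH": 14.4}."""
    coalition, labor = primaries["LNP"], primaries["ALP"]
    for bloc, flow in FLOW_TO_COALITION.items():
        coalition += primaries[bloc] * flow
        labor += primaries[bloc] * (1 - flow)
    total = coalition + labor
    return 100 * coalition / total, 100 * labor / total  # normalise to sum to 100

def rolling_mean(series, window=3):
    """Simple rolling average over consecutive poll weeks."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

# Example using the average unweighted primaries quoted above
print(two_party_preferred({"LNP": 40.4, "ALP": 34.6, "GRN": 10.5, "OTH": 14.4}))
```

With the illustrative flows above, the average unweighted primaries come out at roughly 51-49 to the Coalition; different flow assumptions shift this by a few tenths of a point, which is the source of the roughly 0.6% caveat mentioned above.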
In this graph the parties swap leads from week 10 (July 2019) to week 30 (December 2019). There is a blowout in Labor's favour during the bushfires, until we get the familiar COVID-19 effect which now sees the Coalition well in front.
This only broadly resembles Newspoll, which has had much milder peaks and troughs, with no leads exceeding 52-48 either way since August. Newspoll does, however, have the Coalition retaking the lead due to COVID-19 at virtually the same time. And it looks considerably more like Morgan, except for the dive in Morgan's most recent poll:
(The above, for comparison purposes, is a stretched portion of the relevant part of the Morgan graph from here.) Morgan claims its samples are a "representative cross-section", but it is unclear whether it applies weighting at all, or what criteria it applies. I have observed in the past that Morgan seems unusually responsive to news-cycle events compared to other polls.
While the Essential unweighted series is quite bouncy, it isn't unnaturally so for its small sample size; it does what samples of this size should do, sometimes bouncing around for no apparent reason.
All else being equal, variations in how representative Essential's raw sample is from poll to poll shouldn't make much difference (it could be just another source of random error). If the results are so poll-like anyway, should we just treat these unweighted numbers from Essential as if they were a real poll? I wouldn't do so, for the following three reasons:
1. Party supporters could be more enthusiastic in taking polls when they perceive that their party is doing well. To the extent that this corresponded with demographics, weighting would control for it, and this might lead to less severe swings than those seen in the Essential unweighted sample.
2. An event like COVID-19 is likely to have led to a different set of voters to normal having the time on their hands to fill out internet surveys. If there is a party basis to this it may skew the unweighted results; again, demographics might correct for that in a weighted version.
3. There might have been changes (either sudden or drift over time) in the broader panel Essential is choosing its respondents from.
For these reasons, as tempting as it is to use Essential's unweighted data in a polling aggregate - if only to have something besides Newspoll to aggregate - I probably shouldn't do it.
All up though, the fact that Essential's unweighted voting intentions have so far displayed relatively poll-like behaviour suggests that they are worth keeping an eye on. A large poll-to-poll shift in them, in the absence of any obvious cause, could be a sign of an unrepresentative sample, which might affect other results such as leadership ratings. Furthermore, if there is something that has actually happened then a significant shift in these Essential numbers might be a precursor of a similar shift in Newspoll. Some may say who cares, when we still don't know if any of the polls have fixed the problems of 2019. However polling continues to cast a spell over politics anyway - check out Anthony Albanese's Turnbullesque commentary about Newspoll in the lead-up to the Eden-Monaro by-election.