This week finally saw the long-awaited return of voting intention polling to the field following the great 2019 election opinion polling failure. Newspoll returned ten weeks after the election, its longest break between in-field dates ever, and its longest break between releases except for two eleven-week summer recess breaks very early in its history. The poll, which had the Coalition ahead 53-47 two-party preferred, was the first voting intention poll by anyone since the election. The ten-week gap without any published attempt to measure voting intention by any pollster was the longest such gap since at least the early 1970s.
The first poll to poke its head over the parapet was of course pelted with eggs on social media. The strong prior accuracy records of the Newspoll brand, Galaxy Research and Australian federal polling generally were suddenly no protection against charges that polling was no better than astrology. Much of the pelting came from hopelessly biased Twitter entities who have always hated and distrusted Newspoll because of its Murdoch connections, who used to insist the poll was Coalition-skewed, and who now hate it because it got their hopes up for an election their side lost. So that aspect is a moveable feast of complaint. But is there any reason for confidence yet that YouGov-Galaxy has identified and fixed whatever went wrong with its polling earlier this year? Given that it underestimated the Coalition by three points at the election, is there any evidence that it isn't still doing so?
Well no, there isn't really (though that doesn't mean we should read this poll as really 56-44). At this stage, alas, YouGov-Galaxy and The Australian have done very little that should restore public trust, or even convince us that they care whether their poll is trusted or not.
Transparency Failures Continue (Mostly)
We shouldn't expect the much-needed improvement in the transparency of Australian opinion polling to happen overnight, but the lack of progress on this front more than two months since the election has still been disappointing. The Australian's reporting of the latest Newspoll was very much as if the failure had never happened, and was as usual riddled with overconfidence. The print edition contained no new information concerning how the poll works, of the sort I have called for in more detail in my transparency wishlist here. We still don't know such things as:
* the ratio of online to phone responses
* the list of attributes used in scaling
* the list of parties in the readout
* the formula used to estimate the 2PP from the primaries
* whether or not any form of stabilisation mechanism is used
and so on.
Readers (yes, people actually pay to read The Australian) are also entitled to a clear statement of whether or not any methods changes whatsoever have been made in response to the 2019 failure, and if so what they are. They didn't get one, though a clear statement that no changes had been made yet because processes were being reviewed, with the review to be completed by date (blah), would have been perfectly understandable. The nearest I can find to this is a Daily Telegraph interview that reports:
YouGov would be “finessing” its processes ahead of Queensland’s state election in October 2020 but would not make major changes.
For the time being, Newspoll remains a magic box as opposed to an externally reviewable scientific process.
If more details were spelled out in a separate web release that would be fine; there is no need to put everything in the media version. But at this stage online documentation of the YouGov-Galaxy Newspoll is woeful. With the old Galaxy Research site now taken down (and its archiving of Newspoll data was primitive anyway), current Newspoll archiving is limited to The Australian's page, which still, for instance, falsely states that preferencing is based on the 2016 election (a statement that was also false for much of the 2016-19 term). YouGov's Australian page does not archive its hot Australian polling property at all, meaning that the most recent federal poll result there is from the weird and short-lived YouGov-Fifty Acres series. Embarrassingly, that final Fifty Acres poll, flapping in the breeze there since December 2017, had a more accurate 2PP than all 32 Newspolls released after it, though off primary votes that had the Coalition 6.4 points too low.
Over at Essential (the only other pollster to have emerged since the election) things are not all that much better, but there has been significant improvement on one front.
To start with the problems, Essential has stopped publishing voting intention for the time being, and is currently publishing leadership ratings and issues results only. The Guardian reported this as "Guardian Australia is not currently publishing measurements of primary votes or a two-party preferred calculation," raising the question of whether the pollster was supplying these measurements and the Guardian was choosing not to publish them, or whether the pollster just wasn't supplying voting intention measurements at all. It turned out to be the latter, with Peter Lewis of Essential stating:
"However, over the next few months we are working to improve our two-party preferred modelling. In the interim we won’t be publishing voting intentions, however we will still report on issues of contemporary political interest."
This is all rather mysterious. The problem of not having the exact two-party preference flows in the first few months after every election is a standard one, easily overcome by saying that one is working off an estimated preference flow and that 2PP figures may be a little rubbery until the new breakdowns are available. What is odd about the comment is (i) 2PP modelling issues are no reason not to release primary vote estimates (ii) conversion of primary votes to 2PP was a relatively minor aspect of a polling failure mainly caused by error on the major party primary votes (iii) Essential has not held back from releasing voting intention figures for this reason before.
So this looks like an excuse, and there is a need for more public information on whether the call not to release any voting intention figures for a while was made by the pollster unilaterally, or whether the Guardian had any say in it. We should not be pleased when any part of an industry experiencing a transparency crisis responds to an accuracy failure by removing the information (an estimate of voting intentions) that we need to set a baseline for all the other data it is releasing, and to compare its results with those of other pollsters measuring the same things. The idea seems to be to encourage more focus on non-voting-intention results, but I think there is actually even less reason to focus on them, because there is no way to judge how the samples stack up compared to any other polls polling the same thing.
A further transparency issue I have recently noticed with Essential is that it often lists multiple responses to an issues question in descending order of agreement, when these options should be listed in the order they were offered (or, if the order was randomised, a statement to that effect should appear). This is especially important as Essential now and then offers issue question options that are very "leading" in nature and that look like "message testing" style polling (especially in a left-wing direction), where one option could well influence responses to others depending on the order.
On the other hand, Essential has taken one very welcome step towards greater transparency in that its latest reporting now includes unweighted base sample size breakdowns for the parties, and also by gender and by age in three brackets (18-34, 35-54, 55+). It is also clear the pollster is now polling income. So although Essential did not publish voting intention results as such, it appears that a sample that was representative on gender and only very slightly (<2%) skewed to mid-age-range voters compared with younger voters comes out at 40.8% Coalition, 34.6% Labor, 9.6% Green and 15.0% other - very similar to the election result - after reallocating 12.2% unsure. The tables also show that the don't-knows are retained for the leadership questions, which may provide more of a hint about Essential's persistently high don't know rates on a range of polling issues.
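As a rough illustration of how those headline figures relate to the raw responses, here is a minimal sketch of the exclude-and-renormalise step implied by "after reallocating 12.2% unsure". The raw percentages in it are back-calculated by me for illustration; they are not figures Essential has published.

```python
# Minimal sketch: excluding "unsure" responses and renormalising the rest,
# as implied by Essential's headline figures. The raw shares below are
# back-calculated illustrations, not numbers published by Essential.

raw = {"Coalition": 35.8, "Labor": 30.4, "Green": 8.4, "Other": 13.2}
unsure = 12.2  # per cent of the sample giving no voting intention

decided_total = sum(raw.values())  # about 100 - unsure, up to rounding
headline = {party: 100 * share / decided_total for party, share in raw.items()}

for party, share in headline.items():
    print(f"{party}: {share:.1f}%")
# Coalition: 40.8%, Labor: 34.6%, Green: 9.6%, Other: 15.0%
```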
If Newspoll Hasn't Changed, Is It Still Wrong?
The Daily Telegraph interview suggests YouGov-Galaxy are dismissing the 2019 fail as a "one off" because of the general accuracy of the same methods at other polls. (Just don't mention Victoria 2018, a failure of a slightly greater size). On the other hand, the interview also has David Briggs admitting to being “not as objective” as he should have been and that he “did not interrogate the data" enough. This does raise the question of what data basis YouGov-Galaxy would have had for realising that they should have done anything differently.
Newspoll's methods having failed by three 2PP points at the election does not mean they are necessarily still doing so. Experience from US and UK elections especially points towards every election having its own house effect. Aspects of the issue and candidate mix at a given election may cause or contribute to general polling failures even when the same methods succeed at other elections. (As a result, when UK pollsters reacted to the 2015 general election failure, they mostly failed in the opposite direction, and by almost as much, in 2017.) In particular, if polls are oversampling politically engaged voters, it may be that elections where ambitious oppositions make easy scare-campaign targets of themselves are unusually prone to polling errors. With Labor's failed election policies being rapidly retired and a new leader installed, it may be that the forces driving the polling failure have ceased to apply. It may also be that the cross-industry polling error was magnified by the herding and/or self-herding seen in the final few weeks.
On the other hand, it's possible that the polling failures seen in 2019 are specific to the prime ministership of Scott Morrison, or to a shift in voter attitudes (such as the way scandals and internal chaos seem to have stopped resonating at the ballot box in the age of Trump). If that's the case, it could be that polls are still skewed against the incumbents and the government is really way ahead. But without much more detailed investigation of how and why the polling failure unfolded, we probably won't know for a while.
In any case, we should treat polling with more caution than historical error rates (even those including the 2019 election) suggest in the leadup to the next federal election. Pollsters could overreact as in the UK, and a failure in the opposite direction could occur (which is not to say that it will; pollsters might even be reluctant to make changes because of the risk of overreacting, and repeat the same errors instead!) The conditions for assuming that we can apply error bars from previous elections to polling-based predictions for the next election have been broken by the polling failure and the unusual heat on the industry over it. For this reason, I'll continue to produce poll aggregations and conversions of poll results to seat tallies, but I'm not going to be doing national polling-based federal predictions this cycle unless the polling ends up very lopsided indeed. Rather, I'll just say that the polls point to such and such if correct, but they may be right or wrong.
Newspoll Results And Records
Without asserting that Newspoll is predictive at the moment, I'll continue looking at how its results compare with the history of the series.
Firstly, the ten-week post-election gap means that past comparisons with immediate post-election polling and the ratings of newly installed opposition leaders are all a bit dubious. A first poll taken a few weeks after an election and a first poll taken ten weeks after aren't the same thing.
We can see this with voting intention. A 1.5 point increase in 2PP might not sound like much of a bounce from the election result (and in margin of error terms there might not even be a bounce at all), but in fact this increase is half a point above the historical average bounce at this stage. The historical Newspoll pattern (which is weakly statistically significant) has been for the bounce for a re-elected government to be around three points at the start (with a lot of variation between different years), but the bounce tends to last only a few months:
[Chart: post-election 2PP bounce for re-elected governments by weeks since election. Note: for pre-2005 results, converted 2PP estimates have been used; weeks since election have been averaged and rounded down for early multi-weekend polls.]
Because the government is still in the bounce period, not too much would be readable into this result even if there had not been issues with Newspoll accuracy this year. It is also only a single result, and we are yet to get a picture from any other pollsters.
The current Newspoll primaries are Coalition 44%, Labor 33%, Greens 11%, One Nation 3%, others 9%. Compared with the election this is basically a swing of a few points from others to the Coalition. The Australian reports this as likely to represent United Australia Party voters moving to the Coalition, but it is not stated - and should have been stated explicitly - whether the UAP (which Newspoll overestimated during the campaign) was polled separately to establish this. In any case it is unlikely that all the UAP voters, including those preferencing Labor, would suddenly jump in this manner without any other shifts; the real churn pattern is bound to be more complex, as it generally is.
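For those who want to see how such primaries convert to a 2PP, here is a minimal sketch of the standard last-election preference-flow method. The flow percentages are my own rough assumptions, loosely based on the 2019 election flows; Newspoll does not publish its exact splits, so this illustrates the technique rather than reconstructing Newspoll's actual calculation.

```python
# Minimal sketch of estimating a 2PP from primary votes using assumed
# preference flows. The flows below are my rough assumptions (loosely
# based on 2019 election flows), not Newspoll's unpublished splits.

primaries = {"Coalition": 44.0, "Labor": 33.0, "Green": 11.0,
             "One Nation": 3.0, "Other": 9.0}

# Assumed share of each minor party's preferences flowing to the Coalition
flow_to_coalition = {"Green": 0.18, "One Nation": 0.65, "Other": 0.55}

coalition_2pp = primaries["Coalition"] + sum(
    primaries[party] * flow for party, flow in flow_to_coalition.items())

print(f"Estimated Coalition 2PP: {coalition_2pp:.1f}")  # ~52.9 on these assumptions
```

On these assumptions the estimate comes out around 52.9; shifting the assumed flows within plausible ranges moves it between roughly 52.5 and 53, which is one reason the unpublished splits matter (see also the comments at the end of this post).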
The 33% is Labor's equal lowest primary ever in opposition. The same figure was recorded twice under Simon Crean (once immediately after the 2001 election loss and again in April 2003) and once under Mark Latham (shortly after the 2004 defeat). Given that the party has just polled its worst election primary vote in opposition, this record is hardly a surprise. The unflattering comparison with two failed leaders should be seen through the filter of declining major party primaries generally.
Scott Morrison has recorded a net satisfaction of +15 (51-36, his highest so far). Anthony Albanese is at +3 (39-36). As per tables posted by William Bowe, this is actually the worst debut netsat by an Opposition Leader since the much-hated (but almost election-winning) second rising of the Andrew Peacock souffle. Taking into account that Albanese has been there for ten weeks, and hence missed out on that early-career phase where a new leader has very low negatives, makes some difference: Alexander Downer and Bill Shorten were both worse than +3 in their first poll taken at least ten weeks in, Brendan Nelson was about to be, and Tony Abbott had by that stage been below +3 once, by one point. However, Albanese's net rating is still at the lower end of the Opposition Leader pool for his time in office. This may reflect dashed expectations: lefties thought Albanese would be a more principled leader than Shorten, but so far have seen more of the same lukewarm, overcalculated, tactical-games approach from the party on the issues they care about.
The Better PM result looks odd. Scott Morrison leads Anthony Albanese only 48-31 as Better PM, which is a very modest lead given that Morrison's party is so far in front and that Morrison is himself popular. (This actually gives Albanese the third highest debut BPM rating for a first-time Opposition Leader, trailing Kevin Rudd (36) and, of all people, Alexander Downer (38).) It's the sort of lead to be expected if the parties were even.
It turns out that, compared to historic results, Morrison has always underperformed on Better PM once the 2PP and the netsats of the leaders are taken into account - and this applies even for those 2019-campaign Newspolls for which we now know that the 2PP was wrong. His average underperformance on Better PM lead compared to what should be expected has been eight points, and this latest poll (a lead of 17 points compared to an expected 25.7) is very typical. This doesn't seem to be a factor of the change in Newspoll administration to YouGov-Galaxy, because Malcolm Turnbull overperformed by an average of 2.4 points against the same projection.
The only more or less comparable and prolonged period involved John Howard vs Kim Beazley from just after the 1998 election to the end of 1999; perhaps Beazley's strong performance in the 1998 election boosted him on the Better PM comparison. In Morrison's case something (and your guess might be as good as mine as to exactly what) seems to be causing voters to be more willing (compared to other PMs) to approve of him, say they would vote for him or both despite not picking him over Shorten or Albanese in the beauty contest. Or maybe something has been done to take some of the rampant skew out of the Better PM indicator, and we just haven't been told yet.
(For those interested, the current historical regression is:
Better PM lead = 0.881*(Govt 2PP) + 0.606*(PM Netsat) - 0.369*(LO Netsat) - 29.03, +/- 5.14)
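As a worked example, plugging the current rounded published figures (government 2PP 53, Morrison netsat +15, Albanese netsat +3) into that regression gives an expected Better PM lead of about 25.6, essentially the 25.7 quoted above (the small difference presumably reflects unrounded inputs). A minimal sketch:

```python
# Worked example of the historical regression quoted above, using the
# current poll's rounded published figures.

def expected_bpm_lead(govt_2pp: float, pm_netsat: float, lo_netsat: float) -> float:
    """Expected Better PM lead per the regression quoted above."""
    return 0.881 * govt_2pp + 0.606 * pm_netsat - 0.369 * lo_netsat - 29.03

expected = expected_bpm_lead(govt_2pp=53, pm_netsat=15, lo_netsat=3)
print(f"Expected lead: {expected:.1f} +/- 5.14")  # ~25.6; the actual lead is 17
```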
PSO Watch: Religion In Schools
A PSO is a poll-shaped object, by analogy with piano-shaped object. I will class issue polls as poll-shaped objects if either their design is poor or if released details of the poll are inadequate to justify claims being made about it, two things that often (but not necessarily) go together.
At one stage PSOs were so common I was considering setting up a resource page to just list them all with a checklist for standard attributes of failure and a small space for any further comments. However, the nationwide poll failure seems to have sent the nationwide PSOs scurrying for cover even more than the more reputable brand of polling. Good.
My attention was drawn to a "survey" claimed by Sky News to have found that "the majority of Australians support religious lessons in schools." However the Sky report provided no direct evidence that this was the case at all. It did report the survey as finding that "80 per cent of people surveyed believed schools should be a safe place for students to [..] study faith and religion [..] across the board including Christianity, Islam and Buddhism" and that "87 per cent of people agreed it was important students “make their own decisions about spirituality and faith”."
However, the first response says nothing, on the information provided, about whether respondents believed this sort of study should be compulsory, or whether it should also include non-religious and anti-religious positions. The second response is compatible both with support for comparative religious classes (with or without non-religious positions) and with outright opposition to religious classes of any form.
We know that the survey was conducted by McCrindle Research, but not whether this was a unilaterally conducted survey or a commissioned one, and if the latter, by whom. We get no information about the survey methods, the order and wording of the questions, or anything else, and although the reporters have obviously seen the survey, it doesn't seem to be online. The most recent McCrindle survey about religion that I can find on the company's website is this one from 2017. The 2017 survey is very fair from a question design perspective and seems to be a genuine exercise in information gathering, but the report says only that the survey component used an "online panel" and was "completed by 1,024 Australians, who were representative of the national population by gender, age, and state" - no information on how the online panel was recruited. It sounds like a poll in effect, but maybe it isn't, since even Sky says it isn't "clinical research".
With some difficulty I was able to find an infographic from an apparently earlier edition of the current survey, which suggests this is also online panel polling and provides some information about the range of questions asked, but oddly doesn't include a question specifically about "religious lessons in schools". And one would think that if such a question had been asked in the latest survey, the result would be quoted, since a direct result would be much more compelling than indirect evidence. So pending evidence otherwise, I'm assuming that no such question existed.
It may be too much to expect but I'll go on expecting it anyway; somebody has to. Media should not act as "deeply credulous" (that link is well worth a read) gatekeepers for polls that they lack the information, insight or inclination to analyse properly. If such polls on contentious issues are not published in full and in public in an easily findable place, they just shouldn't be reported on at all. Not even by Sky or its left-audience counterparts.
See Also
William Bowe in Crikey - possibly paywalled, includes another debunking of the idea that Galaxy's seat polls were OK, which I also dealt with here.
Comments
Comment: could argue 51/49 from those primary votes

Reply: I'm waiting for the 2019 preference splits on that, but using what I understand to be the pre-2019 election splits used by Newspoll (which appear to have worked pretty well) I get about 52.5. Perhaps they have altered them slightly, perhaps not.

Reply: The numbers are out now; I get 52.7.