Thursday, November 24, 2016

Poll Roundup: Are Australian Voters Grumps?

2PP Aggregate: 52.1 to Labor (-0.3) - updated after Ipsos
Labor would win election "held now"

Another four weeks on since my last update on national polling and not much has changed.  We still have only two regularly active pollsters and their results continue to show a very gradual shift away from the returned Turnbull Government since the July election.  I'm expecting some improvement on the former front shortly, and may update this article should that occur.  As for the smoothed 2PP tracking, it looks like this:


Since last time there's been little variation in the released polls, all from pollsters that don't show a lot of variation anyway: two 53-47s to Labor from Newspoll, and two 53s and two 52s from Essential. I consider these two polls between them to have had a bit of form in skewing to Labor during the Turnbull phase of the previous parliament, and so the current aggregate comes out at 52.4 to ALP. Before house effect adjustments, I've aggregated the recent polls at 52.7 and 53.0 from Newspoll and 52.5, 53.0, 53.4 and 52.6 from Essential. However, the 53.4 currently isn't in the mix, because I use only alternate Essentials at any one time. Those who also follow BludgerTrack (which should be everyone reading this) may have noticed my aggregate is about 0.4 of a point more Coalition-friendly at the moment. A fair slab of this is because I assume that all the pollsters know what they are doing with their 2PP calculations, though a lot of the time I have my doubts.
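
For anyone curious about the mechanics, the aggregation step is conceptually simple: take each poll's 2PP, knock off an estimated house effect for that pollster, and average the results with whatever recency and sample-size weighting you prefer. Here's a minimal sketch in Python; the house-effect values and the plain unweighted average are purely for illustration and are not the actual settings used in my aggregate.

    # Minimal sketch of house-effect-adjusted 2PP aggregation.  The assumed
    # house effects below are placeholders, not the corrections actually
    # used in this aggregate.
    ASSUMED_LEAN_TO_ALP = {"Newspoll": 0.3, "Essential": 0.3}  # assumed points of ALP skew

    def aggregate_2pp(readings):
        """readings: list of (pollster, alp_2pp) tuples; returns a simple
        unweighted average of the house-effect-adjusted ALP 2PP readings."""
        adjusted = [alp - ASSUMED_LEAN_TO_ALP.get(pollster, 0.0)
                    for pollster, alp in readings]
        return round(sum(adjusted) / len(adjusted), 1)

    # The 53.4 Essential is left out, per the alternate-Essentials rule above.
    recent = [("Newspoll", 52.7), ("Newspoll", 53.0),
              ("Essential", 52.5), ("Essential", 53.0), ("Essential", 52.6)]
    print(aggregate_2pp(recent))  # 52.5 with these placeholder values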

Since Essential started using the 2016 preference distributions for its 2PP figures, I have its average published 2PP as 52.2 for Labor, but the average 2PP derived from its published primary votes (without any knowledge of decimals or state breakdowns) would be 52.72. So the Coalition has done better on the published 2PPs than would be expected from its primary votes, by about half a point. In contrast, Newspoll has had a published average of 51.72 for Labor, and its derived average of 51.84 is effectively identical to the published figures. The differences in Essential's case could be caused by rounding, but rounding differences should be randomly distributed. If this continues for, say, another ten polls it will be strong evidence that something unusual is going on - not necessarily an error, as state breakdown factors could be at play.
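
For readers who haven't seen the arithmetic before, a "derived" 2PP simply takes the Labor primary and adds an estimated share of each minor party's preferences, based on how those preferences actually flowed at the previous election. A minimal sketch follows; the flow shares are illustrative assumptions roughly in line with 2016, not the exact splits used by Essential, Newspoll or anyone else.

    # Sketch of deriving an ALP two-party-preferred figure from published
    # primary votes using assumed last-election preference flows.
    ASSUMED_FLOW_TO_ALP = {
        "GRN": 0.82,  # Greens preferences flow heavily to Labor (illustrative)
        "OTH": 0.49,  # "Others" preferences split close to evenly (illustrative)
    }

    def derived_alp_2pp(primaries):
        """primaries: dict of primary percentages, e.g.
        {"ALP": 37, "LNP": 41, "GRN": 10, "OTH": 12}."""
        alp = primaries["ALP"] + sum(primaries.get(party, 0) * flow
                                     for party, flow in ASSUMED_FLOW_TO_ALP.items())
        return round(100 * alp / sum(primaries.values()), 1)

    print(derived_alp_2pp({"ALP": 37, "LNP": 41, "GRN": 10, "OTH": 12}))  # 51.1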

We're still not seeing any panic-stations 2PP polling, and the rate of change is slower than the last time the Coalition lost support in a more or less steady fashion over a few months (in late 2014).



Leaderships

Prime Minister Turnbull finally recorded an improvement in his Newspoll net rating for the first time since the election - it rose eight points from -28 (30-58) to -20 (34-54). This hasn't been matched by any improvement in voting intention, so it might be driven by Coalition supporters (perhaps following some tough talk on borders); on the other hand, as noted last time, Turnbull's rating has less impact on the 2PP than has been the case for past Prime Ministers. Meanwhile Bill Shorten at -16 (36-51) became the third Opposition Leader (following Kim Beazley in 1997 and Tony Abbott in 2013) to turn in exactly the same satisfaction and dissatisfaction figures three Newspolls in a row. Better-PM scores are also stable, with Turnbull leading by ten points for the third Newspoll in a row (43-33 this week).

Indeed, Shorten's net ratings lately have been remarkably static, varying across a range of only three points in the last ten polls. The previous record for the narrowest range across a similar time scale, six points, was set by John Howard as PM in the lead-up to the 2007 election. A large number of leaders have recorded a seven-point spread over ten polls.
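
The "range" being compared here is nothing fancier than the gap between a leader's best and worst net rating (satisfaction minus dissatisfaction) over a run of ten consecutive polls - as in this trivial sketch, with invented figures:

    # The spread of a leader's net ratings over a ten-poll window is just
    # the maximum minus the minimum.  Figures invented for illustration.
    def spread(net_ratings, window=10):
        recent = net_ratings[-window:]
        return max(recent) - min(recent)

    example_ratings = [-15, -16, -14, -16, -16, -15, -16, -17, -16, -16]
    print(spread(example_ratings))  # 3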

The Australian got quite excited about a finding that Turnbull held only a 48-32 lead over Shorten as the leader best able to handle the economy, and touted it as a sign he was losing economic credibility. There's not that much to see here either. John Howard as PM averaged only just over 48% on this question across his first two terms in office, which didn't stop him winning both elections, and a 55-33 lead over Kevin Rudd didn't save him in 2007. It's also pretty obvious that this question is contaminated by factors driving general voting intention - Howard didn't suddenly become a better economic manager in 2001 just because some planes were crashed into buildings on the other side of the world. Overall, Howard was below 50% in one-third of the 45 times the question was asked while he was PM, and as low as 37% in one case.

Essential had Turnbull's net approval back down to where it was in September at -8 (36-44), with Shorten taking a presumably random hit to -9 (34-43), his worst rating since the first half of the year. Turnbull's better-PM lead stayed more or less steady at 41-28. However, Essential's polling of preferred leaders within parties had a large downturn for Turnbull (to the benefit of Julie Bishop and, to a small extent, Tony Abbott) while Shorten is also falling back. Both remain the preferred leaders of their own party's voters by a large margin, but to be preferred by less than 40% of one's own party's voters isn't a great look for either. That is, if Essential's ratings mean anything at all, on which point we turn to ...

Essential's Panel of Grumps

Two things happened at more or less the same time this week. Firstly, Essential released a result suggesting that Australian voters are a bunch of xenophobic, nostalgic, protectionist, nationalist populist grumps. This was then reported as fact by Crikey, without any reference to the results of other pollsters on similar questions. Secondly, the Scanlon Foundation released a Mapping Social Cohesion survey (64-page PDF of results here) which gave Essential's recent finding of 49% opposition to Muslim immigration a pretty thorough going-over (see pp 40-44 of the PDF). The Scanlon Foundation employs live-interview, random-sample phone polling of both landlines and mobiles, and the detail its report gives about its polling methods is much more than we normally get from Australian pollsters.

The core finding of the Scanlon report as concerns Essential's results is that for all the noise about ISIL, Trump, One Nation, Brexit or whatever, concern about immigration levels isn't increasing (see summary here). Andrew Markus, author of the Scanlon report, also points out that the rate of voters reporting negative attitudes about Muslims, while high, is around a quarter, rather than Essential's half.

In my experience, Essential nearly always gets results like this on issue questions like its current set. The issue results don't come across as consistently right-wing or left-wing, but they do come across as very frequently inward-looking, populist, narrow and afraid. Essential's respondents come across as all a bit One-Nationy, and indeed in one recent poll gave Pauline Hanson some quite impressive ratings.

There are four main possibilities here (noting that some combination of 2, 3 and 4 might well be true):

1. Essential is right, Australians are like that, and live phone pollsters are producing biased results because respondents won't admit their prejudices to another person.

2. Essential is wrong because its panels, not being selected by random-sample means, are unrepresentative.

3. Essential is wrong because something about its question design causes respondents to answer misleadingly.

4. Essential is wrong because something about online polling causes respondents to answer misleadingly even if those respondents are representative and the questions well designed.

I don't think Essential is right, and I also hope they're not right.  But it's not too easy to get a handle on which of the above is true.  Markus quotes most of the following passage:

"Computer administration yields more reports of  socially undesirable attitudes and behaviours than oral interviewing, but no evidence that directly demonstrates that the computer reports are more accurate."

... from this AAPOR review. However, the review is also equivocal in places about whether it can be said that online surveys are skewed, as opposed to just statistically less accurate. It gives suggestions that they might be, but also suggestions that online panels are less prone to "satisficing", in which a respondent does the minimum necessary to answer the question - a kneejerk response to get to the end as fast as possible. (I found that interesting because my perception of Essential is the reverse - that some of its respondents are reward-driven and would hence "satisfice" to get their voucher reward as quickly as possible.) And the report gives plenty of examples of things about which respondents would be most likely to be honest in an online setting. Why lie to a computer by telling it you give more to charity than you really do, for instance?

It's also risky to apply conclusions about tendencies from the AAPOR report to any single pollster using online panel polling (any method has good and bad pollsters), and perhaps risky to apply its conclusions to the Australian polling landscape at all. But there's at least the possibility that Essential's survey methods bring out the respondent's inner troll, or that the panel format happens to appeal to populist whingers. The Scanlon Foundation report gives some credence to explanation 3, based on its own testing of differences between live-phone and online surveys (the AAPOR review also covers this). However, these differences don't on their own tell us which method is right.

The report also suggests the following:

"Surveys do not simply identify a rock solid public opinion, they explore with the potential to distort
through questions asked. Essential chose not to present respondents with a range of options on Muslim immigration, rather a yes/ no choice: ‘Would you support or oppose a ban on Muslim immigration to Australia?’ Results were placed in two categories, one of 49% and one of 40%.

The product was easy to understand copy for the media, but arguably also a gross simplification. Public opinion on social issues defies binary categorisation, it is more accurately understood in terms of a continuum, with a middle ground on many issues that comprises more than half the sample."

The most effective illustration of this is as follows:

"The impact of question wording is illustrated by polling on asylum seekers. Nine surveys between 2001 and 2010 using various methodologies [sic*] asked respondents if they favoured or opposed the turning back of boats. The average for the surveys was 67% in support. In contrast, the 2010 Scanlon Foundation survey tested opinion by offering four policy options, ranging from eligibility for permanent settlement to turning back of boats, which in this context was supported by a
minority of just 27%."

What may be going on here is as follows.  The question makes the respondent aware of something that the respondent may think is a problem.  The question proposes a single solution.  This makes it hard for the respondent who agrees there is a problem to say they oppose the proposed solution, because that is tacit support for doing nothing.  But if the respondent is presented with a range of possible solutions they would be more likely to say they prefer something else.

I don't really agree there is a contradiction between high support for ending Muslim immigration and much lower self-reporting of negative attitudes about Muslims.  Along the lines of "I'm not a racist, but ..." some people would (probably dishonestly) claim they were actually cool with Muslims but still opposed either multiculturalism or immigration to Australia involving them.

I also wonder if Essential's respondents are just too susceptible to kneejerk responses to statements like "Free trade has gone too far". I am reminded of Uni of Tas student politics, in which (contrary to the national pattern) you could put a referendum on just about anything to students and they would say yes to it, unless they thought they didn't understand the question. They would also say yes to the reverse question a few years later. It would therefore be interesting to see what would happen if the opposites of some of the cliches most recently put to the Essential panel were tested. Perhaps the respondents would agree with them too.

If I see a response from Essential to the Scanlon Report comments I will report it here.

---------------------------------------------------------------------------------------------------

* Use of the word "methodology" where "method" would be perfectly sufficient is one of my pet peeves, and one regarding which I am clearly fighting a lost battle.

----------------------------------------------------------------------------------------------------

Ipsos Update (Nov 28)

As hinted, Ipsos has finally re-emerged for its only post-election national poll of the year, and it's an odd one. The headline 2PP of 51-49 to ALP isn't all that strange (especially given the relative leans of the various polls through much of the last term), but the primaries of Coalition 36, Labor 30, Greens 16, Others 18 are way off the scale of what Newspoll and Essential have been getting. Labor polled 34.7% at the election and every poll since has had them between 36 and 38. The Greens polled 10.2% and every other poll since has had them between 8 and 11. The Others vote is also the highest so far, although other polls have had it as high as 17.

There has been strong evidence in past elections that Ipsos overestimates the Green vote and underestimates Labor. In its final 2016 poll this was by 2.8 points and 1.7 points respectively, in the NSW state election 2.7 and 2.1 and in the Victorian state election 3.5 and 3.1.  So rather than this result really representing six points of ALP voters running to the Greens, I suggest this result is a combination of Ipsos' house effects and, probably, sample noise.  A small proportion of it may be real, but I would want to see some sort of evidence of this from other pollsters first.
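
To illustrate the scale of those house effects: averaging the three past errors quoted above and applying them to this poll's primaries would shift the readings roughly as in the sketch below, which still leaves the Greens above, and Labor below, what other pollsters are finding - hence the sample-noise caveat. This correction is illustrative only, not the one actually applied in my aggregate.

    # Sketch: adjusting this Ipsos poll's Labor and Greens primaries by the
    # average of the three past errors quoted above.  Illustrative only.
    GRN_OVERESTIMATE = (2.8 + 2.7 + 3.5) / 3   # ~3.0 points
    ALP_UNDERESTIMATE = (1.7 + 2.1 + 3.1) / 3  # ~2.3 points

    ipsos = {"ALP": 30.0, "GRN": 16.0}
    adjusted = {"ALP": round(ipsos["ALP"] + ALP_UNDERESTIMATE, 1),  # 32.3
                "GRN": round(ipsos["GRN"] - GRN_OVERESTIMATE, 1)}   # 13.0
    print(adjusted)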

On the question of the Others vote, the evidence on whether Ipsos overestimates this is inconsistent and I wouldn't write off the combined 18% too quickly.

In terms of the 2PP, Ipsos' use of batched preferencing means that its last-election 2PP figures are often more friendly to the Coalition than its primaries imply.  I've aggregated this poll at 51.5% to Labor, which brings the aggregate reading down for now from 52.4 to 52.1.  (I've also toned down the "global house effect" which was arguably too high anyway.)
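
For those unfamiliar with the term, "batched" preferencing (as I understand the approach) applies a single last-election preference flow to the combined non-major vote, rather than separate flows for the Greens and for Others. When the non-major vote in a poll is more Greens-heavy than it was at the election, the batched method hands Labor fewer preferences than a party-by-party split would, as in the sketch below. The flow figures are illustrative assumptions, not Ipsos' actual numbers.

    # Sketch contrasting "batched" preferencing (one flow applied to the whole
    # non-major vote) with party-by-party flows.  Flow shares are assumptions.
    FLOW_TO_ALP = {"GRN": 0.82, "OTH": 0.49}

    def alp_2pp_by_party(alp, grn, oth):
        return alp + grn * FLOW_TO_ALP["GRN"] + oth * FLOW_TO_ALP["OTH"]

    def alp_2pp_batched(alp, grn, oth, batched_flow=0.62):
        # batched_flow: assumed overall share of all non-major preferences
        # that flowed to Labor at the previous election (illustrative)
        return alp + (grn + oth) * batched_flow

    # Ipsos-style primaries: Labor 30, Greens 16, Others 18
    print(round(alp_2pp_by_party(30, 16, 18), 1))  # 51.9
    print(round(alp_2pp_batched(30, 16, 18), 1))   # 51.1

With a Greens-heavy non-major vote like this one, the batched figure comes out noticeably less Labor-friendly than a party-by-party calculation would - which is the direction of the effect described above.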

Ipsos' leadership figures have Malcolm Turnbull on a net approval of zero (45-45), down from +8 before the election, with Bill Shorten on -16 (37-53), down from -8, and Turnbull leading Shorten 51-30 (an increase in the margin of six points) as preferred Prime Minister. However, the likelihood is that this particular poll has under-polled Labor voters even more than Ipsos usually does, and so I regard these leadership figures as unreliable. There are also leader attribute figures, which you can read in the full Ipsos report.

The state breakdowns (which are prone to small sample size effects) seem especially odd in WA where the Coalition is credited with a 41-23 lead.

I should also mention that the latest JWS True Issues survey suggests perceptions of the government's performance are improving (summary here).

5 comments:

  1. I guess we need to think about the type of person who voluntarily completes an online poll - I think they're the same type of people who spend a lot of time sending links on facebook to their relatives....

  2. What is happening with George Brandis at the moment potentially may be very damaging to the Turnbull leadership. What do you think of Dutton as a replacement for the current PM and medium-term polling prospects in that scenario?

    Replies
    1. Brandis is indeed an ongoing liability and having to discard him would be painful for Turnbull. I think Dutton often comes across as verging on nutty and lacking the political depth to be a good leader across a range of issues. If they were going to make him PM they may as well bring back Abbott instead.

  3. Installing Dutton as PM might negate some of the leakage of the vote to One Nation, he is in a Queensland marginal seat and would probably prop up the vote in his home state where many marginal LNP seats are at risk. Also palatable to the majority conservative wing of the Coalition and the nightmare scenario for the Left.


The comment system is unreliable. If you cannot submit comments you can email me a comment (via email link in profile) - email must be entitled: Comment for publication, followed by the name of the article you wish to comment on. Comments are accepted in full or not at all. If you submit a comment which is not accepted within a few days you can also email me and I will check if it has been received.