In Monday's Crikey subscriber email (paywalled), Ben Oquist of The Australia Institute took issue with some comments I made about TAI's recent polling in a piece entitled Polling And Penalty Rates. While I could simply have added my reply as an update to the original article, some of Oquist's comments are too cheeky by half - in a way that typifies the general rottenness of commissioned-poll-spruiking in Australia - and I think dealing with them deserves a fresh article. Peter Brent has also replied, and my response is quite similar to his.
Oquist's comments concern objections I raised to the use of forced-answer methods - not allowing a don't-know option - in an issue poll conducted by robopolling. That said, of the two statements he says I "confidently state", one ("a 'don't know' option would certainly have changed the numbers considerably") was in fact stated by Brent!
It is true that my initial response (on Twitter) - that most voters who went for the "stay the same" option would actually have had no opinion - was overconfident and probably incorrect. But I had already said as much in the article Oquist links to, so here he is flogging a horse that has already bolted, which must be convenient for him - except that anyone with enough attention span to read the article he links to will see that this is so! The problem remains that some substantial number of respondents would have had no actual view, were forced to give an answer (or hang up), and were then claimed (by TAI) as supporters of the existing system. I would add that when questions like this offer a "meh!" option ("stay the same"), some voters are likely to take it even when they really have no view at all, and even when an undecided option is included.
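To make the concern concrete, here is a minimal sketch with purely invented numbers (the split between genuine views and no-real-view respondents is an assumption of mine, not anything from TAI's data) of how a forced-answer design can inflate the share reported as backing the status quo:

```python
# Hypothetical illustration only: these numbers are invented, not TAI's results.
# Suppose the electorate actually splits like this on cutting penalty rates:
true_views = {"keep": 0.45, "cut": 0.35, "no_real_view": 0.20}

# With a don't-know option available, the published figures roughly mirror that split.
# Without one, the no-real-view respondents must pick something (or hang up).
# If most of them drift to the soft "stay the same" answer, they can then be
# reported as committed opponents of change.
drift_to_status_quo = 0.75  # assumed share of no-real-view respondents choosing "stay the same"

forced_keep = true_views["keep"] + true_views["no_real_view"] * drift_to_status_quo
forced_cut = true_views["cut"] + true_views["no_real_view"] * (1 - drift_to_status_quo)

print(f"Forced-answer 'stay the same': {forced_keep:.0%}")  # 60%, versus a 'true' 45%
print(f"Forced-answer 'cut':           {forced_cut:.0%}")   # 40%
```

The exact shares are of course not knowable from what has been published; the point is only that the reported figure and the true level of committed support can diverge whenever the undecided are forced to choose.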
This is what Oquist says about the don't-know rate:
While artists imagine, scientists measure, which is what the Australia Institute did in June 2015 when we first asked voters a similar, but not identical, question about penalty rates. What we found was that only 8% people chose "don’t know" when asked about cutting penalty rates. That is a fair bit less than the 50% imagined by Brent.
I am a professional scientist as well as a psephological commentator, and I think I have some idea what scientists do! Firstly, if good scientists believe a question they have asked is "similar, but not identical", then they publish the question so readers can judge for themselves whether it was relevantly similar. They don't expect readers to take their word for it.
Secondly, good scientists, when publishing their research, anticipate any major objections to their methods and publish defences of them at the same time. If they don't, reviewers will frequently demand the issue be addressed before the paper is published. Oquist, on the other hand, thinks that anyone who finds his poll fishy should be contacting him to ask whether he can justify it - after it is published! He writes:
"While artists, and bloggers, often like to beaver away in their basement, scientists, and indeed journalists, prefer to check with the source. I'd have happily shared if they asked."
No, we scientists expect other scientists (and people claiming to be scientists!) to put on the public record, at the time a claim is made, the information needed to defend an apparently dubious and unusual research method. (And when it comes to polls of this kind, too many journalists don't check or question much either, whether because they don't think it's worth their time or because they want to pat the activist's back for handing them an easy story.)
But since Ben Oquist, scientist, says he would happily share if asked, let me save the time of sending potentially dozens of requests and publicly ask him to send me all polling TAI ever conducts - or at least all polling (with full details of all questions asked) that supposedly provides supporting evidence for any method decisions TAI makes in its polling. And while he's at it, don't just share it with me; share it with everybody - publish the lot! (Let's see how happy he is about that sharing request!)
I choose to do things in the open, criticising polling claims on the basis of the material actually published, partly because I want to see a lot more openness about polling details, and because I want to get away from the damaging culture in which people who comment on polling become too cosy with those who pay for it.
There is also this question: if TAI were so confident that the "don't know" option wasn't going to score very highly, why did they omit it? Aside from the claim that it didn't matter much, no defence of the irregular exclusion of an undecided option has been offered. For now we are entitled to suspect the worst: that, consciously or unconsciously, it was realised that this would beef up the percentage who could be claimed to hold a view against changes.
This situation arises so often with commissioned polls: I point out a suspect method, I point out that the choice of that method potentially suits the commissioning group's agenda, and then defenders of the poll say "oh, but it wouldn't have made much difference; we could have done it the normal way and the result would have been much the same". In that case, why on earth didn't you? Why leave a poll open to objections when it is so easy not to, especially if you really think it will make no real difference anyway?
The Causes of Howard's Defeat
There is more to play for here than just Oquist's pretence that sitting on a poll result for six months, then releasing extremely vague details of it in order to defend irregular method decisions in another poll, has anything to do with science. At stake is a view of why the Howard government got the boot in 2007, and what that might say about the fate of the Turnbull government.
The argument as put by Oquist goes like this: "Moving on from polling to political strategy, we can observe similar misplaced confidence when Brent states "the fact of majority voter opposition to a proposal doesn’t necessarily mean much politically". While I don't have another opinion poll to kill that conclusion off, I would suggest that John Howard once bet his party's future on exactly that opinion. He lost."
In fact, this isn't a refutation at all. Brent is saying that A doesn't always mean B; Oquist is saying that it sometimes does. The two statements are completely compatible with each other. And while WorkChoices was a part of the picture of the Howard defeat in 2007, the Coalition under Howard had always had a radical IR agenda. It only bit them because winning control of both Houses in 2004 removed the constraint on their power that the Senate had represented, and so stopped the Senate from saving the government from itself. Scare campaigns about supposed intentions (effectively what the Left is gearing up for in 2016) are not the same as campaigns about realities lived as a result of changes delivered by governments with full force. If they were, the Howard government would have been crushed in 1998.
But there were many other possible big-picture causes of Howard's loss. These included long incumbency, the lack of a viable succession plan for Howard (who was 68, with hypothetical polling suggesting that switching to Peter Costello would make no difference), positive reaction to the installation of Kevin Rudd as a relief from Labor's usual hackery in Opposition, and voter awareness that a crunch was coming and that Labor would be the more caring alternative. (The government tried to tell voters the election was about economic growth!) If the campaigning Left in Australia really thinks that beating Turnbull in 2016 will be as simple as rolling out a repeat of Your Rights At Work, then it is hopelessly deluded.
TAI would have us believe that their polling in general shows Australians to be innately left-wing across a wide range of issues. In this case their poll shows simply that an outright cut to penalty rates, with no offsets or compensation, would be at least controversial. However, the chances of such a cut getting through the Senate, even if it were moved, must be very low indeed. On other issues, I think the TAI polls often say more about how voters react when confronted with an issue that may be novel to them and on which they are not familiar with both sides of the debate.
Salience
I also want to comment on this:
"What we found was that only 8% people chose "don’t know" when asked about
cutting penalty rates. That is a fair bit less than the 50% imagined by
Brent. [..]
The fact that the proportion of voters who chose "don’t know" in response
to a question about cutting penalty rates is so much lower than Brent's best
guess suggests that the issue is of "high salience" to voters. Salience is a
fancy pollster word for an issue people care about."
Firstly, this is a massive strawman of Brent's position - he was suggesting that 50% of those who said "stay the same" might be don't-knows, not 50% of the whole sample!
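To spell out the arithmetic with purely illustrative figures (the poll's actual "stay the same" share is not restated here, so the number below is a placeholder):

```python
# Illustrative numbers only; not the poll's actual results.
stay_the_same_share = 0.40   # hypothetical share of the whole sample choosing "stay the same"
dont_know_fraction = 0.50    # Brent's suggestion, applied to *those* respondents only

implied_dont_knows = stay_the_same_share * dont_know_fraction
print(f"Implied don't-knows across the whole sample: {implied_dont_knows:.0%}")  # 20%, not 50%
```

So even on Brent's reading, the implied don't-know rate for the whole sample would be nowhere near the 50% Oquist attributes to him.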
The idea that you can judge the "salience" of an issue by the don't-know rate is simply spurious. Looking at the ReachTEL issue polling on their blog (all polls listed there for 2015), one poll with a very low don't-know rate concerns support or otherwise for a Bell Bay pulp mill in the electorates of Lyons and Bass. Yet this issue, which once caused massive swings in individual booths, has more or less disappeared off the electoral radar - the remaining committed views on it are legacies of a long-running but largely expired debate. On the other hand, it's hard to imagine anything more salient than the replacement of Tony Abbott with Malcolm Turnbull, which has caused one of the largest polling surges ever seen. Yet undecided rates for approval of the change have been in the mid-teens - not atypically low.
I suspect the don't-know rate has much more to do with the range of answers offered.