I call these things poll-shaped objects (by analogy with piano-shaped object, an instrument that looks like a piano but plays horribly, whether because of poor quality or subsequent abuse and/or decay). At times these PSOs appear faster than I can take them to the tip. After briefly becoming scarce or rebadging themselves as "surveys" following the 2019 federal election polling failure, the new year sees PSOs back in force, with several doing the rounds already on issues including Australia Day, climate change, coal mines in Tasmania and apparently (though I haven't seen it yet) forestry.
There's a well-known saying about how you can catch someone a fish and feed them for a day, or teach someone to fish and feed them for a lifetime. To hopefully reduce my effort load in reeling in all these fishy polls, I hereby present a list of questions you can use to capture and fillet your own whoppers. This guide will hopefully help to screen out issues polls that shouldn't be taken seriously (which in fact is nearly all of them), leaving those that just might be OK. The media should be doing such due diligence on issue polls before reporting them, but for various reasons they usually don't (there are exceptions), so I hope this guide will help make up for that.
The Curse Of Skew Polling
Issues poll design is difficult and full of traps even when pollsters are genuinely trying to do it properly, but worse than that, most of the groups commissioning issue polls want anything but a poll that is done properly. A poll that is done properly might not find a strong enough result for their side of the issue, and might either embarrass their own position or at least be too boring for the media to be interested in reporting it.
I call polls designed (intentionally or otherwise) to get a bogus result through unsound question design "skew polls". Other terms used overseas include advocacy polling and hired-gun polling. Someone who commissions a skew poll either isn't trying to conduct valid research or, in rare cases, doesn't understand why their research is invalid. The aims of deliberate skew-polling are:
* to create a media hook in order to get coverage for a claim made as part of the poll question or about the poll results.
* to attempt to influence stakeholders, such as governments, by convincing them that a certain position is popular or unpopular. (This has a low success rate, and where one side in an election actually buys the claim, it is often to that side's detriment and in turn to the detriment of the cause in question.)
* to attempt to create a bandwagon effect by convincing people that a view they might have been in two minds about expressing is in fact popular in the community. (I'm not aware of any evidence that this works either.)
Skew-polling gets coverage because of an unhealthy synergy between lobby groups and journalists. Journalists work under tremendous time pressure and many unfortunately develop a tunnel vision that seeks out easy-to-write stories given to them on a platter for free without having to chase down sources. The tacit deal is frequently that the journalist, in return for the easy poll story, agrees to publish it uncritically, allowing themselves to be used by the lobby group in return for the lobby group having made their job easier. The credibility and transparency of the reporting on the poll, and the public understanding of the true state of public opinion, are the losers.
Skew polls are not the same thing as push polls. Push polling is a technique in which the "pollster" is not collecting data at all (not even unsound data) but is simply distributing false claims in the guise of a poll. Push polling is sometimes challenging to distinguish from message testing polling, in which a pollster tests how respondents will react to a dubious claim that supports a party's position. See Tas Labor Push-Polling? Not As Such, But ... for my most detailed discussion of the difference on this site.
Here is a list (not exhaustive, nor in order of importance) of the sorts of questions one can ask in considering whether an issues poll is any good. Often a media report will not include enough information about a poll, but a quick Google search or a search of the commissioning organisation's website will turn up more detailed information that can be used to answer some of these questions.
1. Is the pollster stated?
If it is not stated who conducted the poll, then it is possible it has been conducted by a suspect or inexperienced outfit, or worse still it may be an unscientific and often unscaled opt-in survey (such as the simple "reader polls" often conducted by newspapers). If it is stated who conducted the poll, then I have information about the track record of major pollsters in my field guide to opinion pollsters.
2. Is the sample size stated?
Issue polls often have lopsided margins compared to voting intention polls. If a perfect sample of a few hundred respondents finds one party leading another 52-48 in a seat, that's not a statistically significant lead. But if it finds that 70% of respondents agree with a statement and 20% don't, that's very significant in terms of establishing which side is ahead on the question, assuming the poll has no other issues. It still doesn't tell you with any accuracy how far ahead that side is.
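As a rough illustration of that difference (using the simple textbook margin-of-error formula and an invented sample of 400, not any specific poll):

```python
# Rough illustration only: invented sample size, simple random sampling
# assumed, and no allowance for design effects or weighting.
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a single proportion."""
    return z * math.sqrt(p * (1 - p) / n)

n = 400  # a hypothetical "perfect" sample of a few hundred
for p in (0.52, 0.70):
    print(f"n={n}, {p:.0%} response: +/- {margin_of_error(p, n):.1%}")
# n=400, 52% response: +/- 4.9%  -> a 52-48 lead is inside the noise
# n=400, 70% response: +/- 4.5%  -> 70% vs 20% is far outside the noise,
#                                   though the size of the gap is still imprecise
```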
In practice, the effective error margins for many polls (especially seat polls) are much larger than the in-theory values (see "Margin of Error" Polling Myths) and issues polls with sample sizes below, say, 500, should be viewed with great suspicion even when the margins on questions are very large. Polls with a sample below 500 should generally not be the substantive focus of a media article, though it can be acceptable to report them when the issue is very local and there are no other data on opinion on the matter. For issues polls with samples above, say, 1000, there tend to be far more important design questions than sample size. However, often detailed claims will be published about the attitudes of, say, One Nation voters or voters over 75 years old within the sample, and these sorts of claims are very unreliable because the sample of those kinds of voters may be very small.
3. Is the period of polling stated?
The major issue here is that lobby groups will sometimes hold onto polls for a month or more before releasing them. A poll may reflect an outdated result on an issue on which there has been movement since. It may also have been taken during a period of unusually strong feeling about an issue because of an event at the time, or because of respondents taking a guilt-by-association approach to an issue (eg disliking a proposal because it is a government proposal at a time when that government is badly on the nose.)
4. Who commissioned the poll?
If a poll was commissioned by a lobby group, then media reporting should state this clearly and prominently. Not all issues polls are commissioned by outside forces: some are commissioned by media outlets themselves, but this is relatively uncommon compared to polls commissioned by lobby groups and other external sources.
You'll almost never see a lobby group release a poll that doesn't suit its purposes. This means that what you're seeing on a given question is more likely to be the favourable end of polling for that sort of lobby group, if it is the sort of issue multiple polls might be done on. If they commissioned a poll that didn't support their claim, they might just bury it. That's one reason for treating commissioned polls generally with a bit more caution than regular questions in regular polling series. Another is that pollsters may feel pressure (consciously or otherwise) to design the poll so that it helps the client get the result they would be happy with. Pollsters are confronted with many methodological choices for which there is no clear right or wrong answer, and there is a lot of scope for this sort of thing to happen.
Claims about internal party polling on issues should be treated with extra caution. Sometimes these are simply completely fabricated, and at other times they may be based on extremely small sample sizes. Parties are the most selective of all commissioning sources about what they release and any details that do come out about party polling on issues are usually far too limited to usefully comment on the claimed polling.
5. What was the polling method?
Even if the normal polling method of a pollster is well known, it may use a different method for a commissioned issue poll, so the method used should always be stated. All methods have their weaknesses, but some to be especially aware of include the following:
* Both face-to-face polls and live phone polling in Australia seem prone to "social desirability bias", where the respondent is more likely to give a feelgood answer that they don't think will offend the interviewer and hence may disguise their true intentions. This seems to create skew to the Greens and against minor right-wing parties in voting intention questions, and is also likely to create skew on environmental, minority rights and refugee issues where a certain answer might appear to be mean, heartless or bigoted.
* Robopolling struggles to capture accurate samples, especially of young voters, because of low response rates. The degree of scaling required to fix this means that the poll may be affected by the scaling up of very small and unrepresentative samples in certain age groups (a rough example of how this works is sketched after this list). Also, robopolls may be over-capturing politically engaged voters simply because almost everyone else hangs up on them. Robopolls can be especially unreliable in inner-city seats (see Why Is Seat Polling So Inaccurate?).
* Online polling typically polls people who are happy filling out surveys about themselves for points towards very small voucher rewards. No matter how much quality control is applied, this is likely to exclude people who are very busy and may also produce skews on internet/technology related questions.
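To illustrate the scaling issue mentioned above, here is a deliberately simplified sketch with invented numbers, not any pollster's actual weighting procedure, of how a small raw subsample of young respondents gets scaled up to its population share:

```python
# Simplified post-stratification example with invented figures; real pollsters
# weight on several variables at once, but the basic effect is the same.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}  # assumed shares
raw_sample       = {"18-34": 40,   "35-64": 500,  "65+": 460}   # hypothetical robopoll, n=1000

total = sum(raw_sample.values())
for group, n in raw_sample.items():
    weight = population_share[group] / (n / total)
    print(f"{group}: {n} raw respondents, weight {weight:.1f}")
# 18-34: 40 raw respondents, weight 7.5
# Each of those 40 young respondents is counted roughly seven and a half times
# over, so any quirks in who happened to answer dominate the published 18-34 figures.
```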
6. Have the voting intentions been published?
Even if the poll is not about voting intentions at all, voting intentions are a critical quality control. If the voting intentions in a sample are unusually strong for some party compared to what would be expected, then that will flow on to issues questions as well. A poll with an implausibly high vote for the Greens, for example, is more likely to find strong opposition to forestry and mining or support for new national parks. There has been an increased tendency to drop voting intention results from polls following the 2019 federal election polling failure, as if the voting intention results alone might have been the problem while the rest of the polls were just fine. In fact issues polls would have been affected by the same skews as voting intention questions, and issues polls were a major part of the 2019 polling failure to the extent that they (mis)informed Labor's failed campaign strategy.
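By way of example, here is the sort of sanity check I have in mind, again with invented numbers (the benchmark might be a recent election result or an average of reputable national polls):

```python
# Illustrative only: both sets of figures are invented, and the simple
# margin-of-error test ignores design effects.
import math

benchmark = {"Coalition": 0.41, "Labor": 0.33, "Greens": 0.10, "Other": 0.16}
poll      = {"Coalition": 0.39, "Labor": 0.32, "Greens": 0.15, "Other": 0.14}
n = 1000  # hypothetical sample size

for party, expected in benchmark.items():
    observed = poll[party]
    moe = 1.96 * math.sqrt(expected * (1 - expected) / n)
    flag = "SUSPECT" if abs(observed - expected) > moe else "ok"
    print(f"{party}: poll {observed:.0%} vs benchmark {expected:.0%} ({flag})")
# A Greens figure well above any plausible benchmark suggests the whole sample
# leans green, and that lean will flow through to the issues questions too.
```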
7. Has a full list of all questions asked, and the order they were asked in, been released?
Question order is critical in issues polling. If a respondent is asked a question that makes them more likely to think about an issue in a certain way, then their response to subsequent questions may be affected. A person is more likely to agree that the date of Australia Day should remain January 26 if they have been primed by being first asked whether they are proud to be an Australian (yes, the IPA actually does this) than if they are primed by being first asked whether they think Australians should be more sensitive to the concerns of Indigenous Australians. Priming was spoofed in a classic Yes Minister skit, but the only thing that has changed since is that some commissioning sources now don't even bother throwing away the lead-in questions. If a full and clear list of all the questions asked has not been published, then it may be the case that there were discarded lead-in questions that could have polluted the response to the main question.
8. What is the wording of the key question(s) and does it (/do they) go in to bat for one side?
The commonest tactic in commissioned polling is to get a skewed response by using a skewed question. A skewed question will do one or more of the following:
* make debatable claims about an issue that the commissioning group would agree with but their opponents might not
* make claims about an issue that are clearly correct, but that are more likely to be seen as supporting evidence for one side of the debate than the other, while ignoring claims that could be made the other way
* describe issues or actions in a way that might imply approval or disapproval
An almost infallible rule is long question, wrong question. Polls that provide one or more sentences of preamble prior to asking a question that could have simply been asked in isolation are very likely to produce a skewed result that does not reflect public opinion on the issue.
Why is this, even if the preamble is completely factual? Because actual public opinion on an issue at any time consists of people who are not aware of any arguments on an issue, people who are aware of one side of the issue but not the other, and people who are aware of both sides of the issue. Those aware of one side are more likely to support that side than those who are aware of neither. Those aware of both sides are more likely to support a given side than those who are only aware of the other side. Arguing a case before asking the question means that suddenly nobody in your sample is either aware of no side or only aware of the other side, and this makes the sample unrepresentative. And even if the person was already aware of both sides, they may also be influenced by being reminded of one side of the debate rather than the other.
All this applies especially where the information supplied in a lead-in statement is obscure, which it frequently is. A well-designed question about support for something in the electorate (such as, for instance, a specific proposed development) will simply ask straight out "do you support proposed development <X>" without any lead-in information. Examples of very bad polls with unsound or skewing lead-in information can be seen, for instance, on my rolling review of polling about a proposed cable car near Hobart.
Bias in poll design can sometimes be subtle. Even unnecessarily stating that a concept is proposed by the government may cause respondents who like the government to support it and respondents who dislike the government to oppose it.
9. What range of answers is allowed?
Sometimes respondents are asked which of a range of options they favour or which view is closest to their own position. But these questions may disadvantage one side or another if a respondent wanting to express one view has to sign up to something else they might not agree with. Double-barrelled response options are especially hazardous here. For instance a question about how we should respond to an issue might offer an option of avoiding a proposed response because the issue isn't genuine. But a respondent might think the issue is genuine, yet not support the proposed response for some other reason (they might think it will be ineffective or harmful or that there is a better solution).
Answer options can also taint outcomes if they smuggle in claims of fact that might then cause the respondent to agree with them. Beware also of cases where a vague option is pitted against a detailed one. While some aspects of the detailed response may turn off a respondent, it may also be that the detailed response seems more convincing and therefore more attractive.
10. Does the poll ask about likelihood of changing vote over an issue?
Polls often ask respondents how likely they are to change their vote if a given party adopts a given policy or chooses a given leader. However, these polls are in general useless. When a voter decides their vote, they are not thinking about one issue in isolation, especially not if it is an obscure one. They will have been bombarded by campaigning across a range of issues by a range of parties and may also have been exposed to more debate about an issue than occurs in a poll setting. As a result, what respondents predict about how an issue might affect their vote is frequently inaccurate. Furthermore, even if the responses given to these questions are accurate, they don't say how much more likely a voter would be to vote in a given way. Finally, respondents often give non-credible responses to this sort of question. They may, for instance, say something would make them less likely to vote for a party they dislike even when they would never vote for that party anyway, or that it would make them more likely to vote for their preferred party even though the issue is clearly a negative one for that party. These sorts of questions often try to scare a party or candidate away from a certain position, but they have no predictive value whatsoever.
11. Are breakdowns provided, and if so what do they show?
Breakdowns at least by age, gender and party of support should be provided for issues questions. The main thing to keep an eye on here is whether any of the breakdowns are weird, as this may be evidence of an unreliable sample or sampling method or even a spreadsheet error, though it could also be just random sample noise. As a general rule younger voters are considerably more left-wing than older voters (especially voters over 65) and women are slightly more left-wing than men (although this has only become the case within the last 20 years). If a poll series consistently finds otherwise (as with the oddly conservative 18-34 age group breakdowns in many uComms polls) it is probably doing something wrong. A common trope in media reporting is weirdly conservative breakdowns in younger age groups - this almost always results from the sample size being tiny, from the sampling method being unsuited to young voters (eg landline robopolls) or both.
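As a quick illustration of how small those breakdown samples can get (invented numbers again, using the simple margin-of-error formula):

```python
# Invented figures; real effective margins are larger again once weighting
# and design effects are taken into account.
import math

def moe(p, n):
    return 1.96 * math.sqrt(p * (1 - p) / n)

total_n = 600   # a hypothetical issues poll
young_n = 75    # the 18-34 respondents actually reached within it
print(f"Whole sample (n={total_n}): +/- {moe(0.5, total_n):.1%}")
print(f"18-34 breakdown (n={young_n}): +/- {moe(0.5, young_n):.1%}")
# Whole sample: about +/- 4%.  18-34 breakdown: about +/- 11%, so a weirdly
# conservative young cohort can easily be nothing more than sampling noise.
```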
12. Does the poll use an agree/disagree format?
Commissioned polls often ask respondents if they agree with a statement the sponsor would like them to agree with, and then report that a high percentage of respondents do in fact agree. However agree/disagree polls are problematic because they are prone to acquiescence bias, a tendency to agree with whatever is offered. Tests find that voters will sometimes agree both with a statement and its opposite. There is nearly always a way to reword an agree/disagree question as a question about how someone should respond to a situation (with a list of options offered) or as a yes/no/don't know question.
13. Does the poll provide information voters may not know?
Even if information in the preamble to a polling question is accurate, balanced and correctly described, this information can still distort the outcome of a poll. The reason is that there will naturally be voters who know nothing about the issue in question, voters who have limited knowledge of the issue, and voters who hold beliefs about the issue that are false. Giving respondents information before asking the question makes the sample unrepresentative, because now they all know that information (assuming they believe it) and may hold different views as a result from those currently held by the general population. At best the survey then tests what people would believe if everyone in the general population had access to the same information, but in fact they don't, so the poll only tests an irrelevant hypothetical rather than what people out there actually believe.
A simple test is to think "would the average voter have actually even heard of the issue being canvassed in this poll?" (a good example where the answer is obviously no being a proposed moratorium on salmon farming in Tasmania, in a poll for a national audience). If most voters would not have been familiar with the issue then any claim based on the poll that "most Australians think" a certain thing (or similar) is obviously false. This point applies even when the poll does not provide any information at all but just asks voters to express a view on an obscure issue.
More questions may be added later. The lack of transparency surrounding many commissioned polls is part of a broader problem with transparency in Australian polling - see How Can Australian Polling Disclosure And Reporting Be Improved?