Saturday, June 1, 2019

How Can Australian Polling Disclosure And Reporting Be Improved?

Australian national opinion polling has just suffered its worst failure in result terms since 1980 and its worst failure in margin terms since 1984.  This was not just an "average polling error", at least not by the standards of the last 30+ years.  The questions remain: what caused it, and what can be done (if anything) to stop it happening again?

A major problem with answering these questions is that Australian pollsters have not been telling us nearly enough about what they do.  As Murray Goot has noted, this has been a very long-standing problem.

In general, Australian pollsters have taken an approach that is secretive, poorly documented, and contrary to scientific method.  One notable example was Galaxy (correctly, it appears) changing the preference allocation for One Nation in late 2017 and not revealing this for five months, during which time The Australian kept wrongly telling its readers that Newspoll preferences were based on the 2016 election.  But more generally, even very basic details about how pollsters do their work are elusive unless you are on very good terms with the right people.  Some polls also have statistically unlikely properties (such as not bouncing around as much as their sample size suggests they should, in either poll-to-poll swing terms or seat-polling swing terms) that they have never explained.



At the 2019 election this all came to a head with a combination of a major error (by recent Australian standards) on the result and a bizarre display of clustering, in which 17 polls by four different pollsters produced two-party preferred results within a 1% band in the last three weeks of the campaign.  Although this is overwhelming evidence that either someone was being influenced by someone else (herding) or someone was artificially suppressing the natural variability of their own poll (or both), none of the pollsters have explained how this could have happened.
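
To put a rough number on how improbable that clustering is, here is a minimal simulation sketch.  The sample size (1000), the true two-party preferred (51.5), the assumption of independent simple random samples, and the rounding convention are all illustrative assumptions on my part; the actual designs of the campaign polls are, of course, undisclosed, which is rather the point.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_2PP, N_POLLS, SAMPLE, TRIALS = 0.515, 17, 1000, 100_000

# Each row is one simulated campaign: 17 independent polls, each the
# rounded 2PP percentage from a simple random sample of 1000 voters.
# All parameters above are invented for illustration.
raw = rng.binomial(SAMPLE, TRUE_2PP, size=(TRIALS, N_POLLS))
polls = np.round(100 * raw / SAMPLE)

# Spread between the highest and lowest published 2PP in each campaign.
spread = polls.max(axis=1) - polls.min(axis=1)
print(f"median spread of {N_POLLS} honest polls: {np.median(spread):.0f} points")
print(f"P(all within a 1-point band): {(spread <= 1).mean():.6f}")
```

Under these assumptions the spread of 17 honest polls is typically around five or six points, and the chance of all of them landing inside a one-point band is so small (well under one in 100,000) that most runs of the simulation find no examples at all.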

More transparency won't necessarily prevent more problems.  Polling in the UK is much more transparent than here, but that doesn't stop frequent poll misses.  However, a clearer understanding of what different polls are doing would make it easier to detect and comment on possible causes of error in polling in advance.  It would also make it easier for pollsters making good decisions to market themselves for a competitive advantage over pollsters that might be using cheap methods and then herding at election time.  In the event that all pollsters are making bad decisions, it would not only make it easier to spot this but would also encourage new entrants to try new ideas.

There are two major and distinct opacity problems with Australian polling that this article addresses:

1. The lack of adequate methods information concerning regular national polling sequences.

2. The (often uncritical) reporting by media of selected results from commissioned and internal polls for which data are almost never published in full.  Often these polls are severely defective.

I call the second problem "the unhealthy synergy".  Journalism is an insanely time-stressed job, and even many of the better journos are suckers for being handed a free story on a plate and will reward the source with uncritical coverage of its findings.  Yet the design of commissioned issue polls, and even commissioned voting intention polls, is often very sloppy and far too often unethically deceptive and misleading.  Journalists allow themselves to be used because it makes their job easier.  This isn't a universal trend, and I am aware of some commendable displays of restraint.  But it seems that no matter how bad a poll is, some outlet will probably publish it.

I suggest that the best fixes for this are as follows:



Pollsters


1. Any poll release in a public polling series should be accompanied, within 48 hours (or immediately if very close to an election), by a detailed technical report posted on the pollster's website.

2. All Australian pollsters should agree to self-regulation along the lines of that practised by the British Polling Council members, such that where any results from a poll commissioned from them enter the public domain, the full poll details will typically be published within 48 hours on the pollster's website or on the website of the group commissioning the poll.


Media


1. Initial media reports of any poll should always include the full headline primary votes, the two-party preferred figure, the name of the pollster, the sample size and the commissioning source, even if the focus of the report is something different.

2. Media reporting of issues questions should always include the full wording of the question asked and the wording of (or at least reference to the existence of) any prior questions on the same matter, and should always seek comment from the other side of the debate and at least one independent analyst.

3. Media should refuse to publish reports of issues polls where there is evidence that questions asked prior to the questions sent to them have been omitted.  (For instance, if they have just been sent the results of questions 3 and 7.)

4. Media should never publish any party's (or "party source's") claims about internal polling without requesting comment from all other parties that are competitive in the contest and at least one independent analyst.

5. Media should never publish internal polling claims or rumours without knowing and stating the sample size and seeing a report that at least gives some credence to the idea that the poll even exists, and should never make any poll with a sample size of less than 500 a substantial focus of an article.

6. Media should never employ the word "leaked" to refer to internal or commissioned polling without having firm evidence that the polling was provided to them against the sponsor's will and publishing a statement to this effect.

7. All reporting of individual seat polling should contain a warning that seat polling has often been unreliable at recent elections.  (And yes, this does include 2019.)

--------------------------------------------------------------------------------------------------------

I would like to see the Australian Press Council guidelines expanded to take into account points like the above, to reduce the ease with which media can be wilfully used by interest groups with an agenda in promoting dodgy polling.

The following are examples of things that I would like to see Australian pollsters report on in a detailed technical report for each poll that enters the public domain in any way.  I have doubtless omitted many things that others would like to see - feel free to add suggestions in comments.  But I hope it's a useful start.  My aim is not necessarily to get all of these things included (it's a long list!) but to see enough of them included for us to know a lot more about how Australian polling is done.  It's a long way to the next election and I don't expect transparency reform to happen overnight or be rushed, but I would hope that by the end of 2019 all surviving and new Australian pollsters will have greatly improved the industry's disclosure practices.

UK pollsters publish extremely detailed technical reports, so if you click on some of the PDF icons in a list of UK polls you may get ideas for things you would like to see here.  Strong disclaimer: the omission of any item from the list below does not mean that I think including it would be undesirable.

Public Polling Series

Methods - General

* The sample size.

* The sampling method(s) in broad terms, including whether landline phones or mobile phones are called (if telephones are called at all).

* If multiple sampling methods were used, the number of respondents for each, and any weighting applied to the samples for the different methods in comparison to each other.

* If a mix of landline and mobile phones was used, the proportion of the phone sample that consisted of mobile phones.

* In the event that the poll chooses respondents from a panel, the number of members of the panel and a statement regarding precautions (if any) against excessive repeat sampling of the same respondents.

* In the event that the poll uses market-research lists of mobile phone numbers, an estimate of the number of Australian mobile phone numbers included.

* The dates of sampling.

* Any large skew in the dates on which respondents were sampled, eg if a poll samples 1500 with 200 on Thursday, 600 on Friday and 700 on Saturday, that should be disclosed.

* The theoretical maximum margin of error on voting intention results, qualified with a term such as "theoretical", "in theory" or "notional" to at least hint that normal concepts of margin of error do not strictly apply to polling.  (A sketch of the standard formula appears after this list.)

* If the poll discarded incomplete responses (where the respondent exits the poll after usefully answering at least one question), a statement of the rate at which such responses were discarded, including whether the poll used any forced-choice questions that did not allow a don't-know option.

* The breakdown of the respondents by age groups, preferably including two divisions for under-40 (eg 18-24, 25-39).  A common problem is scaling up of very small samples of younger voters.
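
For reference, the "theoretical maximum" margin of error mentioned above is just the familiar worst-case binomial figure for a simple random sample, evaluated at 50% support.  A minimal sketch (the 1.96 multiplier corresponds to the conventional 95% confidence level):

```python
import math

def max_margin_of_error(n: int, z: float = 1.96) -> float:
    """Worst-case (p = 0.5) margin of error in percentage points for a
    simple random sample of n respondents, at ~95% confidence (z = 1.96)."""
    return z * math.sqrt(0.5 * 0.5 / n) * 100

for n in (500, 1000, 1500, 2000):
    print(f"n = {n:4d}: +/-{max_margin_of_error(n):.1f} points (theoretical)")
```

The reason for insisting on qualifiers like "theoretical" is that real polls are not simple random samples: quotas, weighting and non-response all mean the effective error is usually larger than this figure suggests.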

Voting Intention

* The wording of the voting intention question, including the full list of party or candidate options from which the respondent was asked to choose.

* If any parties were included in this "readout" for some seats but not others, which parties and what seats.

* In the case of seat polling, whether the candidates were explicitly named or not, and if a mix, which candidates were explicitly named.

* If voting intention was not the first significant question asked, this needs to be clearly stated and explained.

* The results after scaling/weighting (see below) for all parties/candidates/categories named in the readout, with the presentation to include a headline figure in which "undecided" voters who have a leaning to a party are redistributed to that party.  This should be done in such a way as to reduce the risk of media reporting primary votes in which "undecided" voters are a category, as this leads to misleading swings when compared to election results.

* Whether undecided voters who will not name a preferred party no matter how much they are prodded are retained in the voting intention results, and if so whether they are reallocated proportionally or on some other basis (if so, what basis).

* The pollster's two-party or two-candidate preferred estimate after scaling/weighting (see below).

* The method by which the two-party or two-candidate preferred estimate was derived.  If a formula is used, whether based on last-election preferences or not, the exact formula should be published.  (A sketch of the common last-election-preferences approach appears after this list.)

* Pollsters using respondent preferences should explain themselves if they ask voters to distribute Nationals preferences between Liberal and Labor.  In general this is a bad idea, with exceptions in three-cornered seats or elections with many such contests.

* If respondents were asked for their vote at the last election, the results for that in both primary and two-party/candidate terms by the same method of weighting/scaling as the main result.

* Breakdowns by age and gender, and at least periodically (in aggregate across multiple runs of a poll) by state, urban/rural and other useful factors (eg education level and income, if asked).

* Ideally I would like to see both primary and 2PP results published in a technical report to one decimal place.
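
To illustrate the sort of preferencing disclosure I mean, here is a sketch of a last-election-preferences 2PP calculation.  All of the primary votes and flow percentages below are invented for the example; they are not any pollster's actual published figures or flows.

```python
# Illustrative primary votes (percentages summing to 100) - invented.
primaries = {"ALP": 35.0, "L-NP": 38.0, "GRN": 10.0, "ONP": 6.0, "OTH": 11.0}

# Assumed share of each minor party's preferences flowing to the ALP, as
# would be estimated from the previous election's preference distributions.
alp_flow = {"GRN": 0.82, "ONP": 0.35, "OTH": 0.50}

def two_party_preferred(primaries, alp_flow):
    """Allocate minor-party primaries between the majors using fixed flows."""
    alp = primaries["ALP"] + sum(primaries[p] * f for p, f in alp_flow.items())
    return alp, 100.0 - alp

alp_2pp, lnp_2pp = two_party_preferred(primaries, alp_flow)
print(f"2PP estimate: ALP {alp_2pp:.1f} v L-NP {lnp_2pp:.1f}")
```

Publishing the flow percentages used (here the 0.82, 0.35 and 0.50) is exactly the kind of "exact formula" disclosure asked for above; it lets observers check whether, say, a changed One Nation flow assumption is driving a published 2PP shift.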

Quotas/Scaling/Weighting

* If respondent quotas are used (eg age, gender), a list of the quotas used.

* If responses are scaled/weighted to make some responses carry more value in the results than others (eg to compensate for under-represented voter types), a list of all factors used in scaling/weighting.  Examples could include age, gender, education level, reported income, location, past vote (controversial) and so on.  (I don't expect pollsters to divulge the exact formula as that might cost them a competitive advantage.)  A minimal illustration of how such weighting works appears after this list.

* If responses are scaled to avoid volatility or outliers in any way (eg by using previous polls, or other polls, or results of previous elections), a complete statement of the mechanism involved including full detail of all formulas used.  Any such scaling is effectively manipulating the data away from being a pure poll and as such should require complete disclosure.

* Ideally, if weighting/scaling is used then the British Polling Council practice of publishing raw data as well as the final outcome should be followed.  This will give an idea of how much force has had to be applied to the raw numbers to produce the final outcomes.
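
As an illustration of the kind of weighting the items above refer to, here is a minimal post-stratification sketch using a single factor (age group).  The sample shares, population targets and vote shares are all invented for the example; real pollsters typically weight by several factors at once, often via iterative "raking" rather than simple cell weights.

```python
# Population targets vs achieved sample, by age group (invented numbers).
population_share = {"18-34": 0.29, "35-54": 0.34, "55+": 0.37}
sample_share     = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}

# Cell weight = population share / sample share, so each group counts in
# proportion to its real-world size.  Note the near-2x upweighting of the
# under-sampled 18-34 group - the "scaling up of very small samples of
# younger voters" problem flagged in the list above.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Raw (unweighted) ALP primary vote within each age group, also invented.
alp_by_group = {"18-34": 45.0, "35-54": 36.0, "55+": 30.0}

unweighted = sum(alp_by_group[g] * sample_share[g] for g in sample_share)
weighted = sum(alp_by_group[g] * sample_share[g] * weights[g] for g in sample_share)

print("weights:", {g: round(w, 2) for g, w in weights.items()})
print(f"ALP primary: {unweighted:.1f} unweighted -> {weighted:.1f} weighted")
```

Comparing the unweighted and weighted figures (about two points apart on these invented numbers) is precisely what publishing raw data alongside the final outcome, as British Polling Council members do, makes possible.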

Leadership Questions

* The wording of all leadership questions and the range of answers available (including whether there was a don't know option).

* Results on the above basis, with breakdowns by party supported.

* Whether any undecided voters who were removed from the voting intention results are retained in the leadership questions.

Commissioned Issues Polls From Which Results Are Published

* All of the above (this means that if a pollster polls voting intention as part of a poll on issues commissioned by a client and results from that poll are later published, they should release the voting intention figures and make it clear to the client that this is necessary for transparency and quality assurance), plus:

* The exact wording of all questions asked (including available responses) in the order that they are asked, up to the last question that has been referred to in public reporting.

* Breakdowns by age, gender, party of support and any other relevant polled factors for all issues questions asked.

* The identity of the source(s) that commissioned the poll.

Further suggestions are very welcome in comments.

6 comments:

  1. Thanks for the framework, this is great.

  2. wonderful stuff Kevin.

    We all owe you a great debt of gratitude

  3. I envisage a scenario in which a pollster claims their data 'manipulations' are commercial in confidence etc. In this case I still could not see a reason why the raw data and basic mathematical/methodological approach to data transformation/normalisation was not published.

  4. And now the true confessions start: https://www.brisbanetimes.com.au/politics/federal/embarrassed-pollster-ripped-up-poll-that-showed-labor-losing-election-20190604-p51u9v.html I notice you've already commented on it, KB. The report also mentions a Newspoll in the final week that showed Labor behind - funny, I don't remember it being mentioned at the time.

  5. That reference to Labor being behind in the final week in Newspoll was Queensland-specific. (And they were only 48-52 behind which would have been a swing to them in that state.)

    Replies
    1. Ahhhh yes, so it was. Praps I did see a report of that.

