Wednesday, May 5, 2021

Almost Everything In West Media's Polling Video Is Wrong

Got my right hand high, with the thumb down ...


I'm writing a long wash-up piece on the Tasmanian election and its outcomes, but my attention has been diverted by clueless Twitter praise of a YouTube video by The West Report, called "Who really is the preferred PM?"  Strangely, the video doesn't actually talk much about preferred PM scores or say who the preferred PM actually is, but it says a lot of other things that are just not accurate.  It's possibly the worst thing I've seen about polls on a vaguely prominent platform since Bob Ellis.

The video claims at 0:31 that the questions in the polls aren't public, while highlighting a statement from Resolve Political Monitor that says "All questions are designed to be fair, balanced and accurate, e.g. voting questions emulate the actual presentation and ranked preferences of ballot papers as closely as possible" (the words from the second "questions" onward are highlighted).  But in Resolve's case, what appears to be at least some of the questions is published.  It's also perhaps a bit early to judge Resolve on its transparency approach, since it has so far released just one national poll.  The standard Newspoll questions, however, are not only regularly published verbatim in The Australian but have existed in stable form for just over 35 years.  I can vouch that the questions as published by Newspoll are exactly their questions, having been polled by them a few times going back to the early 1990s.  Granted, it would be beneficial for pollsters to openly publish the exact details of the ballot forms they offer voters (including being clear about Resolve's exact voting intention question), but in Newspoll's case, and for some of Resolve's questions, we do see what is being asked.  Ditto for Essential, by the way, although Essential unfortunately doesn't usually release the order (if there was one) in which options for issue questions were presented.


The video at 0:36 highlights a sample size of 2,000 and claims that such sample sizes are tiny.  In fact, if 2,000 voters is an accurate random sample and the questions are well designed, it is a very adequate sample for most purposes: in at least 95% of cases, results will be within 2.2 points of the true value.  When polls are off by more than that, the cause is usually not random sampling error but unrepresentative sampling or incorrect weighting - which is why so much of what is said about margin of error in polling misses the point.  Polls can also be wrong for other reasons, such as "herding" or late voting shifts, though the latter are a lot less common than some people think.
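For those who want the arithmetic behind that 2.2 figure: it is the standard worst-case margin of error at 95% confidence for a simple random sample of $n = 2000$, taking $p = 0.5$ (the proportion that maximises the error):

$$\mathrm{MoE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{2000}} \approx 0.022 = 2.2\%$$

This is an idealisation - real polls are not simple random samples, which is exactly why the sampling and weighting issues just mentioned matter more in practice than the quoted margin of error.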

West then goes on to say "we don't even know how they find their target audiences" and gives as evidence for this "I've never been polled for instance".  This used to be like saying you don't know how a lottery works because you've never won a lottery prize.  These days it is more like saying you don't know how a lottery works because you haven't even bought a lottery ticket.  Both pollsters have made it entirely clear that they use online panels, and in Newspoll's case anyone who wants to be in the broader panel can just go and sign up to the YouGov website.  That doesn't, however, mean they will be picked for any given poll, since YouGov uses targeted respondent selection to attempt to get a balanced sample, so any given person who signs up might be picked rarely or maybe not at all.
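To make the panel point concrete, here is a minimal sketch of how quota-style targeted selection from a large opt-in panel can work.  The cells, quotas and panel below are invented for illustration only - this is a generic approach, not YouGov's actual categorisation or selection method:

```python
import random

# Toy sketch of quota-style selection from an opt-in panel.
# Cells and quotas are invented for illustration; they are not
# YouGov's actual categories or method.
CELLS = ["18-34 F", "18-34 M", "35-54 F", "35-54 M", "55+ F", "55+ M"]

# A large opt-in panel, each member tagged with a demographic cell.
panel = [{"id": i, "cell": random.choice(CELLS)} for i in range(100_000)]

# Target counts per cell for a 2,000-respondent poll (equal quotas
# here for simplicity; a real pollster would use census benchmarks).
quotas = {cell: 2000 // len(CELLS) for cell in CELLS}

random.shuffle(panel)
sample = []
for person in panel:
    if quotas[person["cell"]] > 0:
        sample.append(person)
        quotas[person["cell"]] -= 1

# With a panel this large, any individual member has only around a
# 2% chance of selection for any given poll - so "I've never been
# polled" tells us nothing about how the sampling works.
print(f"sampled {len(sample)} of {len(panel)} panellists")
```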

The next cab off the rank is that polls are effectively clickbait (oh but like this video if you've never been polled and throw us a few bucks).  We then get a massive jump into "Newspoll, in the Murdoch press, is controlled by News Corp, so we know where their voting intentions are."  Aside from the fact that Newspoll is a brand administered on contract by YouGov (so the poll operations are not controlled by News Corp, which only indirectly owns the brand name and related intellectual property), the track record of Newspoll at federal elections is of a slight average skew towards Labor.  After an election where Newspoll, in common with other pollsters, underestimated the Coalition by 3% 2PP, why are we still getting these claims?  

On the Resolve side we get an almost Campbell's Web level dot-to-dot, in which Resolve are supposedly biased towards the Liberals because Jim Reed used to work for C|T, who have worked with the Liberal Party - oh, and so did Yaron Finkelstein, who now works for the PM (never mind that the video itself shows he left C|T in 2018).  Somehow this misses a point I would have thought such a piece would be all over: the size of Resolve's recent COVID government contract and any possible conflict of interest that might generate.

West then goes on (1:48) to talk about Resolve's reference to "ad hoc" questions and suggests this means they are "off the cuff" or "unscripted".  But he's just hypothesising wildly because he doesn't have a clue.  "Ad hoc" here simply refers to the fact that major pollsters will sometimes ask one-off questions of all respondents (or respondents in certain states) about current issues, alongside their regular suite of voting intention and similar questions.  Anyone who has observed major pollsters for any length of time will have noticed that they do this from time to time.

"Who was best PM?  We don't even know if that question has been directly put to the survey respondents"  Of course it has - that's why there is a graphic for a "Preferred PM" question that has breakdowns of how respondents answered by state and gender.  The problem with Better PM or Preferred PM questions isn't that respondents haven't been asked - it's that the question itself is skewed no matter how you ask it.  The question is skewed because it asks respondents to compare a person who has done a job with a person who hasn't.  It effectively asks respondents if they would sack the PM and replace them with an untried opponent, and respondents are cautious about saying that they would.  Perhaps especially now during a pandemic when saying that the Opposition Leader should replace the PM might seem kinda unpatriotic.

After a hilarious moment at 2:15 where West says the questions are secret while showing a screenshot of some of the Resolve questions (e.g. "How would you rate Anthony Albanese's performance as Opposition Leader in recent weeks?"), we move on to rabbit-hole territory with some stuff about the "bandwagon effect", in which, supposedly, polls condition public opinion in favour of whoever's winning.

No-one with their eyes open to evidence could possibly be citing "bandwagon effect" in Australian federal elections after sitting through the 2016-2019 term, in which the Coalition trailed in virtually every published poll, mostly badly, yet not only closed down Labor's lead as the election approached but won.  It is among the most emphatic demolitions of "bandwagon effect" in Australian House of Reps elections ever seen, though it's hardly the first.  Indeed, the history in Australia is that the party leading in the final federal polls tends to underperform its lead, especially on the Labor side.

So where is this bandwagon thing coming from?  It's coming from the article on West's site, which is written by Michael Tanner, who is "completing a Doctor of Medicine/Doctor of Philosophy. His writing explores the intersection of economics, the media and public health."  Recently I saw a COVID-19 paper spoof using a current meme format that said "We are two economists and a pollster, and here is why all your virologists are wrong".  Tanner's article reads like that in semi-reverse.

The bandwagon reference in Tanner's article comes from an article in The Conversation, which will generally publish any old junk provided an academic wrote it.  The Conversation article draws its evidence about bandwagons from - get this - a study of voting in the UK between 1979 and 1987!  Not only is the UK's electoral system completely different to ours (under first past the post there is a tactical reason to vote for parties that are polling strongly, since otherwise your vote may be wasted), but plenty of UK elections since have shown anything but a bandwagon - 1992 and 2017 especially.

The West video also quotes snippets from a very recent online study of voting behaviour but that study is from the US (another first past the post country) and involved organisations, not political parties.  And, as the study states, its results are unlike those of other experiments in the same field.

That's just about all for the video, so on to the Tanner article.  It starts off badly with "Pollsters got it wrong in the 2016 elections in Britain" (the UK's general elections were in 2015 and 2017; 2016 was the Brexit referendum).  It then goes on to talk about phone polling, but apart from phone polling being a minor component of the first Resolve sample, major national-level polls in Australia have stopped using telephone calls and are overwhelmingly online.

Having claimed that polls cause bandwagon effects even though Australian experience shows they usually don't (Tasmania, with its majority/minority dynamic, sometimes excepted), Tanner then has to explain why the 2019 election showed exactly the reverse.  The excuse here is a handwave about "media coverage": supposedly the Coalition won, although "bandwagon effect" says it should not have, just because biased media gave the Coalition friendly coverage.  He then refers to other writing that gives examples of such friendly coverage.  Sure, but there are other media articles that give the left friendly coverage and the right unfriendly coverage.  Heck, there are also popular YouTube channels and "alternative media" websites devoted to doing so!  And what he doesn't explain is why media coverage would have caused a 3% jump to the Coalition from the final polls to election day when this effect was not present to the same degree, or at all, in previous elections - not even in 2013, when the Coalition had both the "bandwagon effect" and right-wing media on its side and by the same argument should surely have won 56-44 at minimum.  Instead, the national polls in 2013 were excellent.

This sort of "alternative media", with its "sssh, I'll let you in on a big secret" manner, is rotting the brains of its subscribers.  Yes, polls are still much too opaque - almost the only thing West has got right here - but they are not as opaque as the video claims.  More research and a lot less repetition of cliches used by "Twitter brokens" would have gone a long way here.

I may have some more comments later.  A bad YouTube video gets 2,300 likes before the truth can get its pants on, so I wanted to get this one out quickly.

3 comments:

  1. Thank you for watching this video, so I don't have to. As well as your thoughtful deconstruction and demolition of it. As a rule, when I get linked a video like this, the first thing I do is check what else the channel has uploaded and sure enough, this channel basically peddles one-eyed ideological/partisan videos. Of course, that doesn't necessarily mean they can't do good poll analysis but it is a big red flag.

    Unfortunately, there is a big cottage industry of snake oil salesmen that have entered the realm of political analysis. If you just watch their video or listen to their podcast, they will explain how, despite all evidence to the contrary, they can confirm all of your political priors. Don't forget to show your gratitude by signing up for their monthly patreon, of course.

  2. Kevin, whether you praise or damn a poll I always know you are making a totally independent and informed analysis. So thank you. You make good points about the transparency of the polls and it is something we have been thinking about at The Sydney Morning Herald and The Age. I included in my news reports and analysis the fact that questions were rotated to avoid donkey vote responses, etc. As you point out, we did show questions.
    David Crowe
    The Sydney Morning Herald and The Age.

  3. Thanks David, I have found the bit in your article that talks about random rotation on issues questions so I've assumed this is also applied to voting intention and removed that comment from mine. Regarding the voting intention question it is given in the Resolve report as "Which party would you number ‘1’ on the ballot paper?" but in the methodology statement there is the following: "All questions are designed to be fair, balanced and accurate, e.g. voting questions emulate the actual presentation and ranked preferences of ballot papers as closely as possible." On this basis it is not clear to me whether voters are just being asked to give a first preference or are being asked to fill out a full ballot paper as if voting, so any clarification on this would be welcome. That is why I hedged a bit in this article regarding whether all of the Resolve questions as published are necessarily verbatim although it's obvious to me that at least some of them are.

