Thursday, February 13, 2020

Queensland 2020: Currumbin By-Election and YouGov Poll

Queensland is heading for at least one unexpectedly interesting by-election early in another state election year.  Also, a new YouGov poll has come out that has been the subject of incorrect reporting concerning the Premier's unpopularity.  I thought it would be useful to have a post up covering these two issues in detail.

Currumbin (LNP, 3.3%)
By-election March 28

Currumbin is in Queensland's far south-eastern corner and includes the border town of Coolangatta (now a Gold Coast suburb) and surrounding southern Gold Coast suburbs and rural hinterland to the west of them.  It has been held by the retiring member, Jann Stuckey, since 2004, but before that was held by Labor's Merri Rose for 12 years.  From 1992 (when Rose first ran) until 2001 the seat was more Labor-friendly than the state average, but this ended with Rose's fall from grace and Cabinet in 2004, and since then it has reverted to being slightly LNP-leaning compared to the state average.  It is possible, as the departing incumbent suggests, that Currumbin is an electorate where perceptions of the candidate matter more than elsewhere.

An Opposition vacancy in a by-election against a Government showing plenty of wear and tear would not normally be an interesting event and in many cases Governments would be tempted to give this sort of contest a miss.  But many things about the Currumbin contest are unusual.

Firstly, Labor already had a preselected candidate, former Parents and Citizens Association President Kaylee Campradt, announced in October, ostensibly for the main election.  It appears Labor had actually been tipped off that Jann Stuckey might not go full-term in the seat.  Either way, with Campradt already running for the main election, Labor had no real choice but to run her in the by-election.

Secondly, there has been acrimony between Stuckey and her party after Stuckey was one of three LNP MPs to vote in favour of decriminalising abortion in a 2018 conscience vote.  Stuckey has cited within-party responses to this decision as among the stress factors causing her to bring forward her intended retirement, though she has also attacked Labor.  The attacks from within on LNP members who voted for abortion reform were indeed extraordinary and smack of an attitude that the institution of the conscience vote is the preserve of moral conservatives and nobody else is authorised to use it.

Stuckey has also criticised the preselection of her replacement, and has alleged that prosecuting solicitor Laura Gerber has been chosen as a token female.  Although Gerber is clearly well qualified, Stuckey has also strongly criticised her degree of connection to the electorate and has suggested that Currumbin voters won't support a "blow-in".  Gerber has denied being a blow-in, saying she has been living in the electorate for a year and citing a childhood background in the general area.

Thirdly, the by-election is the first electoral test for Opposition Leader Deb Frecklington.  Frecklington's polling so far has been lacklustre and she was widely criticised for attacks on the Premier's image in which she, among other things, implied that the Premier was not "grounded" because the Premier wore expensive clothes and didn't have children.   Social media photos of Frecklington wearing designer labels soon appeared. There is sometimes speculation about former minister David Crisafulli as an alternative leader, and in the event of a disaster on March 28, the LNP would have enough time to install a new leader before the election if it wanted to.

Fourthly, the by-election occurs against a backdrop of federal turmoil on the Nationals side of the LNP.  At this stage I do not see this as likely to be a significant factor in Currumbin, but I could be wrong, particularly if it affects the apparently already weakened popularity of the federal government as a whole.

The Greens (Sally Spain) and One Nation (Nicholas Bettany) are contesting.  The Greens have polled reasonably strongly in Currumbin recently, including 11.7% at the 2017 election, but are unlikely to be contenders in a by-election where both majors are running hard.  One Nation did not contest Currumbin in 2017.  They did contest the seat in 1998-2004 and 2015 but have never made the top two in the seat and polled below their state average there in 1998.  They also polled below their state average in the seat in last year's Senate election - figures compiled by Alex Jago showed they had a Senate 4PP of 11.8% in the seat compared to a statewide average of 16.1% (and compared to the Greens' 16.8% in Currumbin).  Support levels for both the Greens and One Nation might be inflated at a by-election by dissatisfaction with the major parties; nonetheless, it is difficult to see either making the top two.  Perhaps surprisingly, only four candidates have nominated.

Under normal circumstances, Labor would be thinking twice about even contesting Currumbin, since by-elections normally produce swings against incumbent governments, and a large swing could be embarrassing.  But preselection disunity, the loss of a 16-year incumbent, Labor's preparedness for the by-election and Labor's recent targeting of the Gold Coast area have all led to speculation (mostly from unnamed LNP sources) that the LNP might actually lose this by-election.

Government seat gains at by-elections

The last time a government gained a seat at a state by-election I wrote a review about the history of such events.  Government gains occur in perhaps 10% of state by-elections for non-government vacancies.  Government gains are more likely to occur when the government is in its first term (often this represents the exodus of heavy hitters from the previous government) and when the federal government is not of the same party.  In this case the state government is in its second term, but it is of the opposing side to the federal government, so federal drag factors are in its favour.  (Federal drag factors against the Abbott government seem to have been a major factor in the Fisher result in SA in 2014, especially since the Weatherill government was not even able to retain office at the end of its term.)

However, if we look at the 14 (including Fisher) cases where either the state government was past its first term or the state government was the same party as the federal government (or both), only half of them are government wins from oppositions rather than from crossbenchers.  The government wins from oppositions occurred under exceptional circumstances - original result voided, an ex-Premier resigning, massive federal popularity pull or push factors (including in two cases wartime), and the 1971 apartheid protest crackdowns.   Currumbin matches none of these, so a loss would be an extraordinary result, and LNP speculations to that effect may be just a combination of panic and expectation management in anticipation of a no-swing win.

I like to have a go at setting pre-by-election guideposts as insurance against post-result spin from parties, so here's my attempt at this (which may be modified before the by-election):

ALP or other non-LNP win (any margin): Very bad result for LNP, likely to result in serious leadership speculation if not change.
LNP win 0-3%: Bad result for LNP and good result for Labor, to some degree explainable by special factors, suggests Labor leadership not a problem in this area.
LNP win 3-8%: Not a great deal to see here; the result could be explained by some combination of normal by-election swings and special factors that may have reduced them.
LNP win 8-10%: Good result for LNP, suggesting state Labor may be a little on the nose, and that federal factors are not causing significant damage.
LNP win >10%: Outstanding result for LNP, terrible result for Labor, suggesting greater risk that Labor will lose the state election.

For the Currumbin by-election see also the Tally Room guide, Poll Bludger guide and ABC guide.

The same day will see the massive Brisbane City Council elections.  Brisbane Council is dominated by the LNP but any sizeable swing in any direction will be seen as a portent for the state election.  The different schedules of BCC and state elections in the past have made it more difficult to measure this rigorously, but looking at recent results there is some relationship, though perhaps not a super-strong one; the swings to the LNP from 2008 to 2012, for example, were relatively modest.

YouGov Poll

The recently released YouGov poll had the parties square on 50-50 2PP, a result which with anything like a uniform swing would result in a hung parliament.  The primary votes were LNP 35 Labor 34 One Nation 15 Greens 10 KAP 3 others 3.  Except for others, none of these are significantly different to the 2017 results, and others are probably being under-polled.  However, this is the second poll in a row with the government slightly below its 2017 result.  Given the size of the polling error at the 2019 federal election, not too much should be read into that difference.
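For readers unfamiliar with the mechanics, a uniform swing projection simply subtracts the swing from every government seat margin and counts what falls.  Here is a quick sketch; the margins and swing below are invented for illustration, not the real Queensland pendulum:

```python
# Uniform swing projection sketch.  All numbers are hypothetical.
alp_margins = [0.4, 0.7, 1.1, 1.6, 2.3, 3.0]   # invented ALP seat margins (%)
swing_to_lnp = 1.2                              # invented uniform swing (%)

lost = sum(1 for m in alp_margins if m < swing_to_lnp)
print(f"seats projected to fall: {lost}")
# A government sitting just above the majority line can be projected into
# minority by a swing of only a point or so, hence the hung parliament reading.
```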

The poll mostly attracted attention for the leader ratings, including a poor net rating for Premier Palaszczuk of -15 (29-44).  For instance, Sky declared that "The popularity of Queensland Premier Annastacia Palaszczuk has plummeted below those of former Premiers Campbell Newman and Anna Bligh" and the Courier-Mail had "ANNASTACIA Palaszczuk has plummeted to become the most unpopular Queensland premier in recent history".  But these comments result from a misleading comparison of results from different pollsters.  Leadership approval polling for premiers prior to Palaszczuk was conducted by the old Newspoll using live phone polling.  The current poll is conducted by YouGov using exclusively online polling.  The YouGov polling has far higher "undecided" rates than the old Newspoll, and as a result the approval and disapproval rates for leaders tend to be lower.  The undecided rate seems to have even increased compared to the previous YouGov-Galaxy polling, which is odd because when it comes to federal polling, the undecided rate for the Prime Minister has dropped sharply.

Thus while Palaszczuk's 29% positive satisfaction rating does compare poorly with Newman's worst of 33%, the latter figure was recorded alongside a dissatisfaction rating of 57%, giving Newman a net -24%.  You just can't compare an online poll with a 27% undecided rate with phone polls with undecided rates in the range 6-11%.  So the "recent history" in which Palaszczuk is the most unpopular Queensland premier is actually just her own five years in the role - she is the least popular Premier since the one before her!  When it comes to Bligh, Sky's comment isn't even accurate if positive approval is the yardstick, since Bligh at one stage sank to 24% on that measure (Oct-Dec 2010) before partly recovering following her handling of the 2010-11 floods.
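To make the like-for-like problem concrete, here is the simple arithmetic involved (the net rating convention of satisfied minus dissatisfied is standard; the code is just illustrative):

```python
# Net rating = satisfied minus dissatisfied; the remainder are undecided.
def net_rating(satisfied, dissatisfied):
    return satisfied - dissatisfied, 100 - satisfied - dissatisfied

for name, sat, dis in [("Palaszczuk (YouGov, online)", 29, 44),
                       ("Newman's worst (old Newspoll, phone)", 33, 57)]:
    net, undecided = net_rating(sat, dis)
    print(f"{name}: net {net:+d}, undecided {undecided}%")

# Palaszczuk: net -15 with 27% undecided; Newman: net -24 with only 10% undecided.
# The online poll's large undecided pool suppresses both ratings, so raw
# satisfaction figures from the two methods are not comparable.
```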

To put Palaszczuk's performance in a fairer historic context, here are the worst net ratings polled by each Queensland Premier since the old Newspoll started:

Bligh -43
Bjelke-Petersen -40
Newman -24
Cooper -19
Ahern -19
Palaszczuk (so far) -15
Borbidge -13
Beattie -7
Goss +16

However, while Palaszczuk's -15 isn't historically bad, it's still bad.  It's bad because once state premiers become significantly unpopular they nearly always fail to win the next election.  See Unpopular State Premiers Have Dire Historic Fates.  Since that article was written another five Premiers who polled ratings worse than a net -10 have all gone (Baird resigned, Weatherill, Giddings, Newman and Barnett all defeated), while three of four State Premiers who never polled such ratings (Andrews, Berejiklian and Hodgman) were re-elected (Napthine lost).  Perhaps this pattern does not hold for YouGov online polling as it did for Newspoll, but I wouldn't bet too much on that being the case.  If YouGov net ratings are comparable to other polls then Palaszczuk's Premiership is probably in trouble if the LNP can get its act together.

(A footnote: Wayne Goss is one of only four Premiers in the Newspoll era to lose office at the ballot box (albeit at a by-election) without ever polling a negative net rating.  The others are John Fahey (NSW), Carmen Lawrence (WA) and Rob Kerin (SA) - there was very little polling for the last two named.)

Deb Frecklington also polled poor ratings, but it isn't totally clear how poor.  The Courier-Mail's article said she had 23% satisfied and 44% dissatisfied (33% uncommitted) for a net rating of -21.  However, the graphic had the dissatisfied and uncommitted results the other way around.  I have been waiting for a detailed YouGov release (as was the case with their previous poll) to clarify this, but so far none has appeared.  If it is indeed -21, then only Rob Borbidge (-45 just before the 2001 Queensland election) and Tim Nicholls (-27 before the 2017 election) have polled worse.

Palaszczuk has also increased her Better Premier lead to 34-22, but there's not much to see there really. Palaszczuk led Tim Nicholls 45-31 at the previous election, and the loss of numbers to undecided on both sides is likely to be largely a result of the new polling methods.

At this stage if the polls are accurate the 2020 Queensland election is shaping up as a repeat of the 2017 election in which there was virtually no 2PP swing and relatively little seat transfer.  The difference at this stage is the lack of the exotic factors that made 2017 so uncertain (One Nation boom and partial bust, redistribution, switch to compulsory preferences.)  However polling in Queensland has been extremely sparse, the new YouGov methods make sense but have been relatively little tested, and there is still time for anything to happen.  Queensland is also a state where federal polling has been persistently badly wrong and some day state polling might be so too (for reasons other than the extreme preferencing shift seen in 2015).  At the moment both government and opposition appear to be in lacklustre shape, but nobody else is consistently taking advantage.

Friday, January 24, 2020

Do-It-Yourself Issues Polling Due Diligence

Note Jan 28: Hobart recount today is being covered here.

You've probably seen the kind of thing before.  A poll commissioned by a group with an obvious stake or bias on an issue has found, surprise surprise, that their position is widely shared by the population in general.  This "result" is reported by a newspaper which reports the group's claims uncritically as if the group has "found" some fact about public opinion, without any cautionary note or comment from anyone experienced in analysing polls.

I call these things poll-shaped objects (by analogy with piano-shaped object, an instrument that looks like a piano but plays horribly, whether because of poor quality or subsequent abuse and/or decay). At times these PSOs appear faster than I can take them to the tip.  After briefly becoming scarce or rebadging themselves as "surveys" following the 2019 federal election polling failure, the new year sees PSOs back in force, with several doing the rounds already on issues including Australia Day, climate change, coal mines in Tasmania and apparently (though I haven't seen it yet) forestry.  

There's a well-known saying about how you can give someone a fish and feed them for a day, or teach someone to fish and feed them for a lifetime.  To hopefully reduce my effort load in reeling in all these fishy polls, I hereby present a list of questions you can use to capture and fillet your own whoppers.  This guide will hopefully help to screen out issues polls that shouldn't be taken seriously (which in fact is nearly all of them), leaving those that just might be OK.  The media should be doing such due diligence on issue polls before reporting them, but for various reasons they usually don't (there are exceptions), so I hope this guide will help make up for that.  

The Curse Of Skew Polling

Issues poll design is difficult and full of traps when pollsters are even trying to do it properly, but worse than that, most of the groups commissioning issue polls want anything but a poll that is done properly.  A poll that is done properly might not find a strong enough result for their side of the issue, and might either embarrass their own position or at least be boring enough that the media wouldn't be interested in reporting it.

I call polls designed (intentionally or otherwise) to get a bogus result through unsound question design "skew polls".  Other terms used overseas include advocacy polling and hired-gun polling.  Someone who commissions a skew poll either isn't trying to conduct valid research or, in rare cases, doesn't understand why their research is invalid.  The aims of deliberate skew-polling are:

* to create a media hook in order to get coverage for a claim made as part of the poll question or about the poll results.

* to attempt to influence stakeholders, such as governments, by convincing them that a certain position is popular or unpopular.  (This has a low success rate, and where one side in an election actually buys the claim, it is often to that side's detriment and in turn to the detriment of the cause in question.)

* to attempt to create a bandwagon effect by convincing people that a view they might have been in two minds about expressing is in fact popular in the community.  (I'm not aware of any evidence that this works either.)  

Skew-polling gets coverage because of an unhealthy synergy between lobby groups and journalists.  Journalists work under tremendous time pressure and many unfortunately develop a tunnel vision that seeks out easy-to-write stories given to them on a platter for free without having to chase down sources.  The tacit deal is frequently that the journalist, in return for the easy poll story, agrees to publish it uncritically, allowing themselves to be used by the lobby group in return for the lobby group having made their job easier.   The credibility and transparency of the reporting on the poll, and the public understanding of the true state of public opinion, are the losers.

Skew polls are not the same thing as push polls.  Push polling is a technique in which the "pollster" is not collecting data at all (not even unsound data) but is simply distributing false claims in the guise of a poll.  Push polling is sometimes challenging to distinguish from message testing polling, in which a pollster tests how respondents will react to a dubious claim that supports a party's position.  See Tas Labor Push-Polling? Not As Such, But ... for my most detailed discussion of the difference on this site.

Here is a list (not exhaustive nor in order of importance) of the sort of questions that one can ask in considering whether an issues poll is any good.  Often a media report will not include enough information about a poll but a quick Google search or a search of the commissioning organisation's website will find more detailed information about it that can be used to answer some of these questions.

1. Is the pollster stated?

If it is not stated who conducted the poll, then it is possible it has been conducted by a suspect or inexperienced outfit, or worse still it may be an unscientific and often unscaled opt-in survey (such as the simple "reader polls" often conducted by newspapers).  If it is stated who conducted the poll, then I have information about the track record of major pollsters in my field guide to opinion pollsters.  

2. Is the sample size stated?

Issue polls often have lopsided margins compared to voting intention polls.  If a perfect sample of a few hundred respondents finds one party leading another 52-48 in a seat, that's not a statistically significant lead.  But if it finds that 70% of respondents agree with a statement and 20% don't, that's very significant in terms of establishing which side is leading on the question, assuming the poll has no other issues.  It still isn't accurate concerning how much that side is leading by.

In practice, the effective error margins for many polls (especially seat polls) are much larger than the in-theory values (see "Margin of Error" Polling Myths) and issues polls with sample sizes below, say, 500, should be viewed with great suspicion even when the margins on questions are very large.  Polls with a sample below 500 should generally not be the substantive focus of a media article, though it can be acceptable to report them when the issue is very local and there are no other data on opinion on the matter.  For issues polls with samples above, say, 1000, there tend to be far more important design questions than sample size.  However, often detailed claims will be published about the attitudes of, say, One Nation voters or voters over 75 years old within the sample, and these sorts of claims are very unreliable because the sample of those kinds of voters may be very small.
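As a rough illustration of the sample size point, here is the standard in-theory 95% margin of error calculation (a sketch only; as noted above, real-world effective margins are often much larger):

```python
import math

def moe_95(p, n):
    """In-theory 95% margin of error for a proportion p from a random sample of n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

print(f"{moe_95(0.52, 400):.1%}")  # 52-48 from n=400: ~4.9% each way, lead not significant
print(f"{moe_95(0.70, 400):.1%}")  # 70-20 from n=400: ~4.5%, the gap is far outside error
print(f"{moe_95(0.50, 100):.1%}")  # a ~100-voter subgroup: ~9.8%, so breakdowns are noisy
```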

3. Is the period of polling stated?

The major issue here is that lobby groups will sometimes hold onto polls for a month or more before releasing them.  A poll may reflect an outdated result on an issue on which there has been movement since.  It may also have been taken during a period of unusually strong feeling about an issue because of an event at the time, or because of respondents taking a guilt-by-association approach to an issue (eg disliking a proposal because it is a government proposal at a time when that government is badly on the nose.)

4. Who commissioned the poll?

If a poll was commissioned by a lobby group, then media reporting should state this clearly and prominently.  Not all issues polls are commissioned by outside forces; some are commissioned by media sources themselves, but this is relatively uncommon compared to polls commissioned by interest groups.

You'll almost never see a lobby group release a poll that doesn't suit its purposes.  This means that what you're seeing on a given question is more likely to be the favourable end of polling for that sort of lobby group.  If they commissioned a poll that didn't support their claim, they might just bury it.  That's one reason for treating commissioned polls generally with a bit more caution than regular questions in regular polling series.  Another is that pollsters may feel pressure (consciously or otherwise) to design the poll so that it helps the client get the result they would be happy with.   Pollsters tend to be confronted with many methods choices for which there is no clear right or wrong answer and there is a lot of scope for this sort of thing to happen.

Claims about internal party polling on issues should be treated with extra caution.  Sometimes these are simply completely fabricated, and at other times they may be based on extremely small sample sizes.  Parties are the most selective of all commissioning sources about what they release and any details that do come out about party polling on issues are usually far too limited to usefully comment on the claimed polling.

5. What was the polling method?

Even if the normal polling method of a pollster is well known, it may use a different method for a commissioned issue poll, so the method used should always be stated.  All methods have their weaknesses, but some to be especially aware of include the following:

* Both face-to-face polls and live phone polling in Australia seem prone to "social desirability bias", where the respondent is more likely to give a feelgood answer that they don't think will offend the interviewer and hence may disguise their true intentions.  This seems to create skew to the Greens and against minor right-wing parties in voting intention questions, and is also likely to create skew on environmental, minority rights and refugee issues where a certain answer might appear to be mean, heartless or bigoted.

* Robopolling struggles to capture accurate samples, especially of young voters, because of low response rates.  The degree of scaling required to fix this means that the poll may be affected by scaling up of very small and unrepresentative samples in certain age groups.  Also, robopolls may be overcapturing politically engaged voters simply because almost everyone else hangs up on them.  Robopolls can be especially unreliable in inner-city seats (see Why Is Seat Polling So Inaccurate?)

* Online polling typically polls people who are happy filling out surveys about themselves for points towards very small voucher rewards.  No matter how much quality control is applied, this is likely to exclude people who are very busy and may also produce skews on internet/technology related questions.

6. Have the voting intentions been published?

Even if the poll is not about voting intentions at all, voting intentions are a critical quality control.  If the voting intentions in a sample are unusually strong for some party compared to what would be expected, then that will flow on to issues questions as well.  A poll with an implausibly high vote for the Greens, for example, is more likely to find strong opposition to forestry and mining or support for new national parks.  There has been an increased tendency to drop voting intention results from polls following the 2019 federal election polling failure, as if the voting intention results alone might have been the problem while the rest of the polls were just fine. In fact issues polls would have been affected by the same skews as voting intention questions, and issues polls were a major part of the 2019 polling failure to the extent that they (mis)informed Labor's failed campaign strategy.
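One way to see how a skewed sample flows through to issue results is to reweight the issue question by voting intention.  The following is a hypothetical sketch with made-up numbers, not any pollster's published method:

```python
# Hypothetical sample where Greens voters are over-represented relative to a
# plausible target (eg the last election result).  All numbers are invented.
sample_share = {"LNP": 0.30, "ALP": 0.33, "GRN": 0.18, "ON": 0.12, "OTH": 0.07}
target_share = {"LNP": 0.35, "ALP": 0.34, "GRN": 0.10, "ON": 0.15, "OTH": 0.06}
support      = {"LNP": 0.25, "ALP": 0.50, "GRN": 0.90, "ON": 0.20, "OTH": 0.40}

raw        = sum(sample_share[p] * support[p] for p in support)
reweighted = sum(target_share[p] * support[p] for p in support)
print(f"headline support {raw:.1%}, reweighted {reweighted:.1%}")
# ~45% vs ~40%: the inflated Greens share alone adds several points of apparent
# support, which is why suppressed voting intention figures matter.
```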

7. Has a full list of all questions asked, and the order they were asked in, been released?

Question order is critical in issues polling.  If a respondent is asked a question that makes them more likely to think about an issue in a certain way, then their response to subsequent questions may be affected.  A person is more likely to agree that the date of Australia Day should remain January 26 if they have been primed by being first asked whether they are proud to be an Australian (yes the IPA actually does this) than if they are primed by being first asked whether they think Australians should be more sensitive to the concerns of Indigenous Australians.  Priming was spoofed in a classic Yes Minister skit but the only thing that has changed since is that some commissioning sources now don't even bother throwing away the lead-in questions.  If there has not been a full and clear list of all questions asked published, then it may be the case that there were discarded lead-in questions that could have polluted the response to the main question.

8. What is the wording of the key question(s) and does it (/do they) go in to bat for one side?

The commonest tactic in commissioned polling is to get a skewed response by using a skewed question.  A skewed question will do one or more of the following:

* make debatable claims about an issue that the commissioning group would agree with but their opponents might not
* make claims about an issue that are clearly correct, but that are more likely to be seen as supporting evidence for one side of the debate than the other, while ignoring claims that could be made the other way
* describe issues or actions in a way that might imply approval or disapproval

An almost infallible rule is long question, wrong question.  Polls that provide one or more sentences of preamble prior to asking a question that could have simply been asked in isolation are very likely to produce a skewed result that does not reflect public opinion on the issue.

Why is this, even if the preamble is completely factual?  Because actual public opinion on an issue at any time consists of people who are not aware of any arguments on an issue, people who are aware of one side of the issue but not the other, and people who are aware of both sides of the issue.  Those aware of one side are more likely to support that side than those who are aware of neither.  Those aware of both sides are more likely to support a given side than those who are only aware of the other side.  Arguing a case before asking the question means that suddenly nobody in your sample is either aware of no side or only aware of the other side, and this makes the sample unrepresentative.  And even if the person was already aware of both sides, they may also be influenced by being reminded of one side of the debate rather than the other.  
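The effect can be put in numbers with a toy model (every figure below is invented purely for illustration):

```python
# Toy model of the awareness argument above; all numbers are made up.
# (share of electorate, support for the cause) for each awareness group:
groups = {
    "aware of neither side":   (0.40, 0.30),
    "aware of pro side only":  (0.25, 0.60),
    "aware of anti side only": (0.15, 0.15),
    "aware of both sides":     (0.20, 0.45),
}
actual = sum(share * sup for share, sup in groups.values())

# A pro-side preamble effectively shifts "neither" respondents to "pro only"
# and "anti only" respondents to "both" before they answer:
primed = (0.40 + 0.25) * 0.60 + (0.15 + 0.20) * 0.45
print(f"actual opinion {actual:.1%}, primed poll reads {primed:.1%}")
# ~38% vs ~55%: the preamble manufactures an apparent majority.
```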

All this applies especially where the information supplied in a lead-in statement is obscure, which it frequently is.  A well-designed question about support for something in the electorate (such as, for instance, a specific proposed development) will simply ask straight out "do you support proposed development <X>" without any lead-in information.  Examples of very bad polls with unsound or skewing lead-in information can be seen, for instance, on my rolling review of polling about a proposed cable car near Hobart.

Bias in poll design can sometimes be subtle.  Even unnecessarily stating that a concept is proposed by the government may cause respondents who like the government to support it and respondents who dislike the government to oppose it.

9. What range of answers is allowed?

Sometimes respondents are asked which of a range of options they favour or which view is closest to their own position.  But these questions may disadvantage one side or another if a respondent wanting to express one view has to sign up to something else they might not agree with.  Double-barrelled response options are especially hazardous here.  For instance a question about how we should respond to an issue might offer an option of avoiding a proposed response because the issue isn't genuine.  But a respondent might think the issue is genuine, yet not support the proposed response for some other reason (they might think it will be ineffective or harmful or that there is a better solution).  

10. Does the poll ask about likelihood of changing vote over an issue?

Polls often ask respondents how likely they are to change their vote if a given party adopts a given policy or chooses a given leader.  However, these polls are in general useless.  When a voter decides their vote, they are not thinking about one issue in isolation, especially not if it is an obscure one.  They will have been bombarded by campaigning across a range of issues by a range of parties and may also have been exposed to more debate about an issue than occurs in a poll setting.  As a result, what respondents predict about how an issue might affect their vote is frequently inaccurate.  Furthermore, even if the responses given to these questions are accurate, they don't say how much more likely a voter would be to vote in a given way.  Finally, respondents often give non-credible responses to this sort of question.  They may, for instance, say something would make them less likely to vote for a party they dislike even when they would never vote for that party anyway, or that it would make them more likely to vote for their preferred party even though the issue is clearly a negative one for that party.  These sorts of questions often try to scare a party or candidate away from a certain position, but they have no predictive value whatsoever.

11. Are breakdowns provided, and if so what do they show?

Breakdowns at least by age, gender and party of support should be provided for issues questions.  The main thing to keep an eye on here is whether any of the breakdowns are weird, as this may be evidence of an unreliable sample or sampling method or even a spreadsheet error, though it could also be just random sample noise.  As a general rule younger voters are considerably more left-wing than older voters (especially voters over 65) and women are slightly more left-wing than men (although this has only become the case within the last 20 years).  If a poll series consistently finds otherwise (as with the oddly conservative 18-34 age group breakdowns in many uComms polls) it is probably doing something wrong.  

More questions may be added later.  The lack of transparency surrounding many commissioned polls is part of a broader problem with transparency in Australian polling - see How Can Australian Polling Disclosure And Reporting Be Improved?

Wednesday, January 15, 2020

Will Hodgman Resignation And Recount

Retiring MP: Will Hodgman (Liberal, Franklin)
Recount from 2018 election for remainder of 2018-22 term
Nic Street expected to win recount if he contests, otherwise Simon Duffy
Replacement will be a Liberal
Peter Gutwein/Jeremy Rockliff to be elected unopposed as leader/deputy after Michael Ferguson/Elise Archer withdrew

----------------------------------------------------------
Monday Jan 20 updates

Today's the day, but there has been remarkably little news about the expected ballot and a lot of speculation.  No Liberal MP has publicly endorsed either ticket.  The belief among a few journalists I've spoken to over the weekend is that Gutwein appears to either have the upper hand or at least have enough to tie, but these things can change or can be unreliable.  Some outlets have reported Mark Shelton and (perhaps surprisingly) Joan Rylah as undecided votes.  Gutwein has been firming on the Sportsbet market (1.36 vs 2.90, having at one stage been only just ahead) but this is the same firm that had the Liberals at $15 to win an outright majority six weeks out from the election.

11:45 Ferguson/Archer withdraw: The Ferguson/Archer team has withdrawn, presumably because they did not have the numbers.  Gutwein/Rockliff will be elected unopposed.

Monday, January 13, 2020

Newspoll Roasts Morrison / 2019 Polling Year In Review

Newspoll has come out of hiding early this year, and that warrants a quick post about the unusual nature and results of this week's early Newspoll, to which I am also attaching a belated annual roundup for 2019.

The history of Newspoll has tended to show that national security related incidents have big impacts on polling, but natural disaster incidents generally don't.  (An exception was at state level, where Anna Bligh's doomed Queensland Premiership received a large but temporary bounce from her perceived good handling of the 2010-11 floods disaster.) However, this natural disaster is somewhat different, both because of the scale of its many impacts and the extent to which lines of criticism of the federal government have immediately opened up.  Prime Minister Morrison has been criticised for taking a holiday during the crisis, for insisting on shaking the hands of bushfire victims who didn't want their hands shaken, over the level of federal preparation for the crisis, and over the government's climate policies and degree of recognition of the realities of climate change.

Some of these criticisms, especially the last, are coming mainly from people who did not support this government anyway, and so it was hard to say what the impact on the government's standing might be until we had some numbers on it.  Even then, we should treat these numbers with some caution, not only because of the relative failure of polling in last year's election, but also because it is unusual to see polling at this time of year.  In fact, the polling dates (8-11 Jan) gave this the earliest out-of-field date in Newspoll history by two days.  Furthermore, it has been unusual in recent years to get Newspolls in January in non-election years at all.  So it does look like the interest value of the bushfire situation could have resulted in Newspoll going back into the field earlier than normal.

Wednesday, January 8, 2020

Hobart City Council Tanya Denison Recount

Jan 28: Recount today, once I have seen the results and the scrutiny sheet I will update this article.

Result: COATS WINS.  Coats defeats Bloomfield by 1.77 votes

Analysis:

In something of an upset result (unless you are Simon Behrakis, who was the only one who suggested to me that Coats might win!) Will Coats, the youngest of the several Liberal candidates running, has been elected.  He has defeated Louise Bloomfield by the precarious margin of 1.77 votes, the closest margin in a Hobart election to my knowledge (which goes back to the mid-1980s).

The recount started with Coats in 4th place on 12.0% behind Mallett (14.7%), Bloomfield (13.7%) and Alexander (12.8%).  I have never seen a candidate win a recount from 4th place.  Merridew was on 5.6%, suggesting that without the bug he would have started fairly close to the leaders.  Christie was on 2.8% and definitely wouldn't have won anyway, and Andy Taylor (5.5%, also disadvantaged by the bug but not as much as the others) also wouldn't have won.

As the recount progressed Coats gained on the leaders on the exclusion of minor candidates (these are basically random votes: eg 1 for some minor non-Liberal then 2 Denison, or the other way around).  He passed Alexander for third on the preferences of Brian Corr and passed Mallett for second on the preferences of Andy Taylor.  Taylor was excluded ninth with Fiona Irwin eighth.

Merridew was excluded in seventh, at which point he was over 100 votes behind Alexander.  This gap suggests to me that without the impact of the recount bug Merridew would probably have finished fifth just behind Alexander.  However I cannot be sure about this; what is clear is that the bug has turned what looks like it would have been a slim chance into no chance.

Female candidates Bec Taylor (Greens) and Cat Schofield (Ind) had polled reasonably well in the recount off gender voting and were excluded sixth and fifth, and as they were cut out Bloomfield's lead grew to 108.48 votes (also gender voting) with only Bloomfield, Coats, Mallett and Alexander left.  However, now Bloomfield was the only female candidate remaining.  Coats gained 21.7 votes on Bloomfield off Alexander's exclusion, leaving Bloomfield 86.78 votes ahead with 415.6 Mallett votes to throw.

44.14 Mallett votes exhausted, so Coats needed 61.7% of the non-exhausting Mallett votes to win (bear in mind these could be Mallett votes that went to Denison in the original count or Denison votes that could have gone to Mallett).  However Coats actually got 61.9% and won by 1.77 votes.
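For those who want to check the arithmetic of that final throw, a quick calculation from the figures above:

```python
# Final-exclusion arithmetic from the figures above.
lead      = 86.78                      # Bloomfield's lead before the Mallett throw
mallett   = 415.60                     # Mallett votes to distribute
exhausted = 44.14
live      = mallett - exhausted        # 371.46 live votes

# If Coats receives a share f of the live votes, the final margin is
# live*(2f - 1) - lead, so he needed f > (lead/live + 1)/2:
needed = (lead / live + 1) / 2
print(f"needed share: {needed:.1%}")   # ~61.7%

# Working back from the actual 1.77-vote win gives the share he received:
implied = ((1.77 + lead) / live + 1) / 2
print(f"implied share: {implied:.1%}") # ~61.9%
```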

Effectively, the gender advantages to each of Bloomfield and Coats at various stages of the preference flow cancelled out and Bloomfield's biggest problem was not quite having a large enough share of Denison's vote at the start.   That said I would not have expected Coats to be the one to catch up!

As a result, if someone voted, say, 1 Denison 2 Mallett 3 Coats 4 Bloomfield, then that individual voter's decision to put Coats ahead of Bloomfield made the difference - but this could also apply to many other voters deciding who to put way down the list.

Of course, positions being decided by a single voter's choice is something of a mockery when 2,021 ballot papers were ruled informal in the original count, most of them because of clerical errors by the voter that should not have prevented their vote being counted.  This very close result further underlines the critical need for informal voting rules to be reformed before the next election.

Close Result

It's important to bear in mind that this recount is not a fresh count of the ballot papers; it is just a computer calculation using ballots that were all entered in 2018.  The original data entry process involves two operators independently keying in what they see on each ballot paper.  If the two operators get exactly the same result, then that is accepted as the correct vote.  If they differ then a supervisor is called to check the vote; the same happens if the data entry indicates that the vote is informal.

It is possible (but rare) for a vote to be entered wrongly twice by two different operators.  In a 2014 report that I did for the TEC I noted that a trial of the system had found seven incorrectly double-entered ballots out of 12,000.  My report notes that actions were taken to make the errors that had happened less likely, but not what they were.
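In code form, the double-entry rule just described looks roughly like this (a minimal sketch; the function names are hypothetical, not the TEC's actual system):

```python
# Minimal sketch of the double-entry verification rule (names hypothetical).
def accept_ballot(entry_a, entry_b, is_informal, refer_to_supervisor):
    """entry_a / entry_b: preference sequences keyed by two independent operators."""
    if entry_a == entry_b and not is_informal(entry_a):
        return entry_a                            # identical valid entries accepted as-is
    return refer_to_supervisor(entry_a, entry_b)  # mismatches and informals get checked

# An error survives this check only if both operators mis-key the same ballot
# identically, which the 2014 trial found for 7 of 12,000 ballots (~0.06%).
```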

If errors occurred at such a rate in this count they would have mostly affected ballot papers that had no impact on the margin, or impacted them at a point that didn't matter, but it's always possible that there could be a wrong ballot that would have made all the difference.  In the case of a very close election, further data entry of at least some ballot papers might be considered to ensure the result was correct, but this didn't occur (for example) with the very close 2014 Tanya Denison result.  This recount is also an unusual case in that the original count was not super-close but the recount years later was.

The result has now been formally declared and the only recourse against it would be a court challenge to attempt to obtain a recount.  Courts are reluctant to overturn initial results or order recounts without evidence of errors in the original count.

-------------------------------------------------------------------------------------------------------------

A Hare-Clark recount (that's the official name, though "countback" would be better) is coming up on Hobart City Council for the seat being vacated by Tanya Denison.  Denison, a past federal Liberal candidate for the unwinnable seat then also called Denison (now called Clark), was in her second term on the Council.  She was first elected in 2014 after surviving exclusion at one point by 3.6 votes, and then re-elected comfortably in 2018, the seventh winner out of 12 elected.

This post explains the recount and considers the prospects of the possible candidates.  The recount consists solely of the votes that Tanya Denison had when she was elected.  The fact that Ron Christie missed out on being re-elected to Council by 20 votes does not make him a big chance for the recount (in fact it harms his chances, for a reason to be explained below.)  All these votes go initially to the highest-placed candidate on that vote who is contesting the recount (who may have been numbered above or below Denison on that ballot paper) at the value they had after Denison was elected and her total brought down to quota.  In this recount, no-one will have anything like 50% of the total, so candidates are then excluded bottom-up, as in a single-seat election, until someone wins.  All the ballot papers are already digitally stored, so on the day of the recount this will all be calculated by the computer very quickly.  The main delay before the recount is held will be allowing time for candidate consents to contest the recount to be received.
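For the algorithmically minded, the countback logic just described can be sketched as follows (a simplification of my own; it glosses over the finer points of Tasmania's actual transfer value rules):

```python
# Simplified countback sketch: each ballot is (preference_list, value), where
# value is what the ballot carried after Denison was elected and brought to quota.
def countback(ballots, contesting):
    contesting = set(contesting)
    while True:
        tallies = {c: 0.0 for c in contesting}
        for prefs, value in ballots:
            # Each ballot flows to its highest-placed continuing candidate;
            # ballots with no continuing candidate exhaust.
            nxt = next((c for c in prefs if c in contesting), None)
            if nxt is not None:
                tallies[nxt] += value
        leader = max(tallies, key=tallies.get)
        if len(contesting) == 1 or tallies[leader] > sum(tallies.values()) / 2:
            return leader                                  # majority of live votes wins
        contesting.remove(min(tallies, key=tallies.get))   # exclude lowest, repeat
```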