Saturday, June 29, 2019

What Might 2PP Voting Intention Have Really Looked Like In The Last Federal Term?

(An update was added to this article on 2 July 2020 - scroll down.  The original text has not been revised.)

The 2016-2019 parliament saw Australia's worst failure of national opinion polling since the early 1980s, a failure that was not just a combination of normal errors and a reasonably close election.  Aggregated polling had the Coalition behind for the entire term, at no stage better than 49% two-party preferred, and yet the Coalition won with 51.53% of the two-party preferred vote.

The view that the polls were in fact right all along but voters changed their minds at the last moment (either on election day, or on whatever day each elector voted) fails every test of evidence it can be put to.  The difference in voting intention between voters who voted before election day and those who voted on the day is similar to that at past elections, and if anything slightly stronger for the Coalition.  There was no evidence in polling of any change in voting intention through the final weeks, which would have been expected (as voters who had already voted reported their actual behaviour) if the polls were at all times accurately capturing the intentions of the people being polled.  Also, if those who had already voted had shifted towards the Coalition as they made their final decisions while those yet to vote had not, there would have been polling gaps of several points between the two groups; this was not the case in the released evidence either.

Friday, June 28, 2019

Most Tasmanian Senate Votes Were Unique

Over the last week or so I've been looking at some statistics relating to the uniqueness (or not) of Senate votes in Tasmania, and some other aspects of Tasmanian Senate voting.  At the moment I'm only doing this for Tasmania, but it can be extended to other states if anyone else wants to do so.  This article has been rated 4/5 on the Wonk Factor scale - it is obviously out and out wonkcore but the maths is not as tricky as in some of the stuff on this site.

All Senate votes are scanned and read by optical character recognition, and the scans are verified by human data operators.  The AEC publishes files of all formal Senate preference votes that outside observers can use to verify that the AEC is getting the right results and computing the count correctly.  This year's formatting of these files is a lot more user-friendly than in 2016.  On downloading the files one can find all the numbers recorded as entered in the system for any vote recorded as formal.  Sometimes this includes both above the line and below the line preferences (if both are formal, below the line takes precedence, an issue I will come to later on).
One minor change is that ticks and crosses are no longer indicated by special characters, an aspect that was the source of some confusion among the easily confused at the last election.
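
For anyone who wants to repeat or extend this exercise, the core of it is simply reading each ballot's marks as a sequence and tallying how many sequences occur exactly once.  The sketch below is an illustration only, not a description of the scripts actually used here: the filename, the number of leading metadata columns and the column layout are assumptions that need to be checked against the file as actually downloaded from the AEC.

```python
# A minimal sketch (not the method actually used for this article) of counting
# unique preference sequences in an AEC formal preferences file.  Assumes a CSV
# with some leading metadata columns followed by one column per ballot square
# (above-the-line groups first, then below-the-line candidates); the filename
# and column layout are assumptions to verify against the real download.

import csv
from collections import Counter

N_META_COLS = 6  # assumed number of metadata columns before the squares

def load_preference_tuples(path):
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row of column names
        for row in reader:
            # Everything after the metadata columns is a preference mark
            # (possibly blank) for one square on the ballot paper.
            yield tuple(row[N_META_COLS:])

counts = Counter(load_preference_tuples("aec-senate-formalpreferences-TAS.csv"))
unique = sum(1 for c in counts.values() if c == 1)
print(f"{len(counts)} distinct preference sequences from {sum(counts.values())} ballots")
print(f"{unique} sequences appeared on exactly one ballot")
```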

Thursday, June 20, 2019

Senate Reform Performance Review 2019

The results of this year's half-Senate election are all in, so it is time to observe how our new Senate system performed at its first half-Senate test.  Australian Senate voting was reformed in the leadup to the 2016 election to abolish Group Ticket voting and the preference-harvesting exploits it had become prone to, and to give voters more flexibility in directing their own preferences either above or below the line.  In the leadup to that election, many false predictions about Senate reform were made and were then discredited by the results.  I reviewed how the new system went back then: Part 1, Part 2.  Some of the predictions made by opponents of Senate reform concerned the results of half-Senate elections specifically, so now we've had one, it's a good time to check in on those, as well as on how this election compared to 2016.  One unexpected issue with the new system has surfaced, concerning above the line boxes for non-party groups, but it is one that should be easily fixed.

Sunday, June 16, 2019

Seat Betting As Bad As Anything Else At Predicting The 2019 Federal Election

Advance Summary

1. Seat betting markets, sometimes believed to be highly predictive, did not escape the general failure of poll and betting based predictions at the 2019 federal election.

2. Indeed, seat betting markets were significantly worse predictors of the result than the national polls through the election leadup, and only converged with poll-based models at the end, reaching a prediction that was as inaccurate as the national polls.

3. Seat betting predicted fourteen seats incorrectly, but all of its errors in Labor vs Coalition contests, in common with most other predictive methods, were in the same direction.

4. Seat betting markets did vary from a national poll-based outlook in several seats, but their forecasts in such cases were about as often misses as hits.

5. This is the third federal election in a row at which seat betting has failed to show that it is a useful predictor of classic (Labor vs Coalition) seat-by-seat results in comparison with simpler methods.  

-----------------------------------------------------------------------------------------------------

With all House of Representatives seats now declared, it's time for a regular post-election feature on this site, a review of how seat-by-seat betting fared as a predictive method.  I have been interested in this subject over the years mainly to see whether seat betting contained any superior insight that might be useful in predicting elections.  In 2013 the answer was a resounding no; in 2016 it was a resounding meh; and surely, if seat betting could show that it knew something other sources of information didn't, 2019 would be the year!  Even if seat betting wasn't a very good predictor, merely being less bad than polling or headline betting this year would have been something in its favour.

2019 saw the first failure in the headline betting markets since 1993, but it was a much bigger failure than that.  In 1993 Labor were at least given some sort of realistic chance by the bookies, and ended up somewhere in the $2-$3 range (I don't have the exact numbers).  This year the Coalition were $7.00 to Labor's $1.10 half an hour before polls closed - just an implied 14% chance -  and Sportsbet had already besmirched itself in more ways than one by paying out early (which I think should be banned when it comes to election betting, but that's another story).  The view that "the money never lies" has been remarkably immune to evidence over the years, but surely this will be the end of it for a while.
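
For anyone unfamiliar with converting prices to chances: the implied probability is simply the reciprocal of the decimal odds, usually rescaled so that the two sides sum to 100% to strip out the bookmaker's margin.  A quick illustration using the closing prices quoted above:

```python
# Converting head-to-head decimal odds to implied win probabilities,
# using the closing prices quoted above ($7.00 Coalition, $1.10 Labor).

coalition_odds, labor_odds = 7.00, 1.10

raw_coalition = 1 / coalition_odds          # ~0.143
raw_labor = 1 / labor_odds                  # ~0.909
overround = raw_coalition + raw_labor       # exceeds 1 because of the bookmaker's margin

print(f"Raw implied chance (Coalition): {raw_coalition:.1%}")
print(f"Margin-adjusted chance (Coalition): {raw_coalition / overround:.1%}")
```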

Wednesday, June 12, 2019

Senate 2019: Button Press Thread

Intro 

Just starting a thread that will cover the button presses in the remaining Senate races including any interesting information from the distributions of preferences as they come to hand.  I haven't been putting myself in the loop concerning when exactly the button presses will occur, save that Tasmania's will be tomorrow at 10:30 am (open to scrutineers, of which I'm not one this year) with the declaration of the poll on Friday at the same time.  The ACT count is also ready to go (to be declared on Friday afternoon) and the remaining counts are getting close to completion with relatively few unapportioned or uncounted votes still showing.  The NT button has already been pressed, which did nothing because both major party #1 candidates had a quota.  William Bowe has some comments on NT preferences.
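
For readers who haven't met the quota arithmetic before: a Senate candidate needs a Droop quota, which is the total formal vote divided by one more than the number of vacancies, with any fraction dropped and one added.  In a territory contest with two vacancies that is just over a third of the vote.  The numbers in the sketch below are made up for illustration, not the actual NT totals:

```python
# Droop quota for a Senate count: floor(formal_votes / (vacancies + 1)) + 1.
# The vote totals here are illustrative only, not actual figures.

def droop_quota(formal_votes: int, vacancies: int) -> int:
    return formal_votes // (vacancies + 1) + 1

print(droop_quota(100_000, 2))   # territory election, 2 vacancies -> 33334
print(droop_quota(350_000, 6))   # half-Senate state election, 6 vacancies -> 50001
```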

Jim Molan's Senate Result In Historic Context

There is a lot of discussion surrounding Senator Jim Molan's below the line vote in the NSW Senate race.  Misleading arguments about it are being weaponised by some of those who would like to see Molan appointed to the Sinodinos casual vacancy, but there is also a risk that, amid all this, appreciation of the scale of Molan's result could be lost.

To start with, Molan absolutely is not going to win and has never even looked remotely like being in contention during counting.   But his result is still very significant - in the state in which getting a high below-the-line vote is most difficult (because of historically low below the line rates and also the sheer scale required for an individual campaign), Molan has so far polled just over 130,000 votes (2.8%).  His share should rise slightly based on remaining unapportioned votes but won't be significantly above 3%, if it even reaches that.  

Saturday, June 1, 2019

How Can Australian Polling Disclosure And Reporting Be Improved?

Australian national opinion polling has just suffered its worst failure in result terms since 1980 and its worst failure in margin terms since 1984.  This was not just an "average polling error", at least not by the standards of the last 30+ years.  The questions remain: what caused it, and what (if anything) can be done to stop it happening again?

A major problem with answering these questions is that Australian pollsters have not been telling us nearly enough about what they do.  As Murray Goot has noted, this has been a very long-standing problem.

In general, Australian pollsters have taken an approach that is secretive, poorly documented, and contrary to scientific method.   One notable example of this was Galaxy (it looks like correctly) changing the preference allocation for One Nation in late 2017, and not revealing they had done this for five months (in which time The Australian kept wrongly telling its readers Newspoll preferences were based on the 2016 election.)  But more generally, even very basic details about how pollsters do their work are elusive unless you are on very good terms with the right people.  Some polls also have statistically unlikely properties (such as not bouncing around as much as their sample size suggests they should, either in poll to poll swing terms or in seat-polling swing terms) that they have never explained.