Sunday, June 16, 2019

Seat Betting As Bad As Anything Else At Predicting The 2019 Federal Election

Advance Summary

1. Seat betting markets, sometimes believed to be highly predictive, did not escape the general failure of poll and betting based predictions at the 2019 federal election.

2. Indeed, seat betting markets were significantly worse predictors of the result than the national polls through the election leadup, and only converged with polling-based models to reach a prediction that was as inaccurate as the national polls at the end.

3. Seat betting predicted fourteen seats incorrectly, but all of its errors in Labor vs Coalition contests, in common with most other predictive methods, were in the same direction.

4. Seat betting markets did vary from a national poll-based outlook in several seats, but their forecasts in such cases were about as often misses as hits.

5. This is the third federal election in a row at which seat betting has failed to show that it is a useful predictor of classic (Labor vs Coalition) seat-by-seat results in comparison with simpler methods.  


With all House of Representatives seats now declared, it's time for a regular post-election feature on this site, a review of how seat-by-seat betting fared as a predictive method.  I have been interested in this subject over the years mainly to see whether seat betting contained any superior insight that might be useful in predicting elections.  In 2013 the answer was a resounding no, in 2016 it was a resounding meh, and surely if seat betting could show that it knew something that other sources of information didn't, 2019 would be the year! Even if seat betting wasn't a very good predictor, if it was not as bad as polling or headline betting this year, that would be something in its favour.

2019 saw the first failure in the headline betting markets since 1993, and it was a much bigger failure than the 1993 one.  In 1993 Labor were at least given some sort of realistic chance by the bookies, and ended up somewhere in the $2-$3 range (I don't have the exact numbers).  This year the Coalition were $7.00 to Labor's $1.10 half an hour before polls closed - just an implied 14% chance - and Sportsbet had already besmirched itself in more ways than one by paying out early on Labor (which I think should be banned when it comes to election betting, but that's another story).  The view that "the money never lies" has been remarkably immune to evidence over the years, but surely this will be the end of it for a while.
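For readers unfamiliar with decimal odds, the implied chance quoted above can be recovered from the prices.  A minimal Python sketch, using the figures quoted in the paragraph above (the normalisation step strips out the bookmaker's margin):

```python
# Convert decimal betting odds to implied win probabilities.
# Prices from the text: Coalition $7.00, Labor $1.10 before polls closed.

def implied_probabilities(odds):
    """Raw implied probability is 1/price.  The raw figures sum to more
    than 1 because of the bookmaker's margin (the overround), so they
    are normalised here to sum to exactly 1."""
    raw = {outcome: 1.0 / price for outcome, price in odds.items()}
    total = sum(raw.values())
    return {outcome: p / total for outcome, p in raw.items()}

probs = implied_probabilities({"Coalition": 7.00, "Labor": 1.10})
# The raw 1/7.00 is about 14.3%; after removing the overround the
# Coalition's implied chance comes out at about 13.6%.
print(probs)
```

Either way, the markets rated the Coalition around a one-in-seven chance, which is the sense in which "an implied 14% chance" is used above.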

Seat Betting Outcomes

As usual I tracked seat betting predictions at various times over the final few months, with a final check at 1 am on election day.

In five seats the favourite differed between companies.  Cross-market averages had the Coalition as favourites in Reid, Farrer and Stirling, and Labor in Braddon and Capricornia.  Sportsbet (the biggest market) disagreed with the latter two, and had a tie in Farrer.  In total, the markets had Labor as favourites in 80 seats, the Coalition in 65 and others in 6.

The markets, at least on average, expected the following Labor wins in seats won by the Coalition:

Braddon (split - Sportsbet correct)
Capricornia (split - Sportsbet correct)
La Trobe

The markets wrongly expected the Coalition to win Indi (won by Helen Haines) and lose Cowper. (And Sportsbet fence-sat on Farrer, which the rest on average were right about.)

Overall therefore, the markets were wrong in 14 seats (Sportsbet was wrong in 12.5).  This included being wrong in 12 classic-2PP seats (Sportsbet were wrong in 10).  That's not that bad by itself; it's not clear that it's possible to reliably do any better than that (especially not when the national polls are wrong).  The problem, as with most other predictive methods at this election, was that the errors lay all in the same direction.

None of Labor's seat failures were massive surprises in and of themselves.  Only Longman was outside $3 on all markets.


My final colour-coded graph of seat betting tracking over time looked like this:

Key to colours:

Red - Labor favourite in all markets
Orange - Labor favourite in some markets, tied in others
Dark blue - Coalition favourite in all markets
Light blue - Coalition favourite in some markets, tied in others
Grey - all markets tied or different favourites in different markets
Purple - IND favourite in all markets
Pink - IND favourite in some markets, tied in others

In some ways this is the reverse of 2013.  In 2013 the markets started with a good prediction and then got worse over time, especially at the end.  In 2019 the markets started out with a worse prediction than would have been obtained from the polling at the time, and over time allowed the polling to move them, so that in the end they were only about as bad as the national polls.  When Labor had large leads, seat betting markets were collectively pricing in a major blowout in Labor's favour, even though the historic pattern of polling performance suggested those leads would probably narrow by the final weeks.  They were slow in moving to a position that was only as bad as the national polls - when betting should (if it is any use at all) be better at competing with polls further out from an election than close to it.

Eleven seats were expected to be Labor gains at all times, and Labor gained three of them (two of those notionally theirs to begin with).  Of Labor's five losses, the markets picked two, were lineball about a third, had at one stage predicted a fourth, and never paid all that much attention to Longman.

Betting vs polls

The test I like to use of whether betting is really worth looking at for predictiveness in classic seats is whether it can beat a very simple pendulum-and-polling-based model - the pendulum modified by the average of the final polls from each company, ignoring bells and whistles like personal vote effects, state factors and so on.

In this case the final poll average was 51.4 to Labor (the actual result to one decimal will almost certainly be 51.5 to the Coalition).  The simple pendulum model would have made eleven 2PP errors (again all in the same direction): wrongly predicting Labor gains in Capricornia, Forde, Flynn, Robertson, Banks and Petrie, and missing Labor's five losses in Herbert, Longman, Lindsay, Braddon and Bass.  So it would have been one seat better than the overall market average, and one seat worse than Sportsbet.  In comparison with the simple pendulum model, the markets differed in nine classic seats.  They got Flynn, Banks, Herbert and Lindsay right (and Sportsbet also got Capricornia and Braddon right) but bought into the view that the Coalition was in trouble in Victoria, Brisbane and WA, and were hence led astray in Chisholm, La Trobe, Dickson, Swan and Hasluck.  In the cases of La Trobe, Dickson and Hasluck, the national polling failure wasn't even the culprit: it turns out that those seats wouldn't have fallen anyway.
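The simple pendulum-and-polls model can be sketched in a few lines of Python.  The seat margins below are placeholder values for illustration, not the actual 2019 pendulum, and the swing calculation assumes a previous-election Coalition 2PP of roughly 50.4:

```python
# Sketch of the simple pendulum model: apply the uniform national swing
# implied by the final poll average to each seat's previous two-party-
# preferred (2PP) margin, with no personal-vote or state adjustments.

def pendulum_predict(seats, swing_to_labor):
    """seats maps seat name -> Coalition 2PP margin in points
    (negative means Labor-held).  Returns the predicted winner of each
    classic seat after a uniform swing to Labor."""
    return {
        name: "ALP" if margin - swing_to_labor < 0 else "L-NP"
        for name, margin in seats.items()
    }

# A 51.4 poll average to Labor against a previous-election Coalition 2PP
# of about 50.4 implies roughly a 1.8-point uniform swing to Labor.
swing = 51.4 - (100 - 50.4)

# Placeholder margins, not real seats:
example = {"SafeCoalition": 8.0, "MarginalCoalition": 1.0, "MarginalLabor": -1.5}
print(pendulum_predict(example, swing))
```

Counting the disagreements between such a model's output and the actual seat winners is what produces the eleven-error tally described above.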

Non-classic seats

At the 2019 election there were six crossbench seats being defended (Kennedy, Clark, Melbourne, Indi, Wentworth, Mayo), of which only Indi and Wentworth were ever in serious doubt.  There were clearly significant crossbench challenges in Warringah, Farrer, Cowper and Macnamara.  Some people took crossbench challenges seriously in some other seats such as Kooyong, Higgins, Flinders, Brisbane, Mallee, Curtin and others.

Non-classic seats should give seat betting an opportunity to do well because they cannot be modelled easily by pendulum-based polling methods, and they often don't see that much seat polling.  Where they do see seat polling, it is often internal polling that is even worse than neutral seat polls.

In this election leadup betting always had Wentworth, Macnamara and all the more dubious inclusions in the at-risk list right.  They always had Cowper wrong.  In Warringah they took a lot of convincing to move off the prior that Tony Abbott would retain (presumably based on his past margin, which was irrelevant as he had not faced a similar challenge before).  They were very uncertain about Indi and Farrer and tended to become less predictive about those seats as the campaign went on, before (on average) just getting Farrer right at the end.

Farrer was an interesting one because there was a strong feeling that the seat could be lost, based on the precedent of the Shooters, Fishers and Farmers' successes at the NSW state election.  However, voters may have felt they'd let off enough steam, or had their message heard, or might have had reservations about even a prominent indie that they didn't have about the Shooters.  The feeling that Farrer was in big trouble (it was actually resoundingly retained) also tended to be reinforced by word-on-the-ground reporting in the mainstream press, but this kind of reporting provided no superior insights either.  (Perhaps it's too easy for this kind of reporting to get a wrong feel by spending too much time in major population centres.)

Is there any hope for seat betting being useful at predicting classic seats?

As a general rule, if two sources of predictive information (like poll-based prediction and seat-betting-based prediction) are similar in accuracy but produce different forecasts, then aggregating them somehow should increase predictiveness compared to either method alone.  This applies even if one of the methods is slightly worse than the other.  However, when one predictive method is much worse than another, aggregating them can make a worse prediction than the better method alone.
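The aggregation idea can be illustrated with a simple linear opinion pool.  The weights and probabilities below are arbitrary illustrations, not calibrated values:

```python
# Linear opinion pool: a weighted average of two probability forecasts
# for the same seat.  Shrinking one source's weight toward zero recovers
# the other method alone, which is the safe choice when one source has
# proven much less reliable.

def pool(p_polls, p_betting, w_polls=0.5):
    """Combine a poll-based and a betting-based win probability."""
    return w_polls * p_polls + (1 - w_polls) * p_betting

# Equal weights split the difference between the two forecasts;
# a high poll weight keeps the combined forecast close to the polls.
combined_equal = pool(0.60, 0.40)
combined_poll_heavy = pool(0.60, 0.40, w_polls=0.9)
print(combined_equal, combined_poll_heavy)
```

The choice of weight is exactly the judgment discussed above: when the two sources are about equally accurate, weighting them similarly should help, but when one is much worse, it deserves little or no weight.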

In the case of seat betting, at the last three elections we have had one case of it being much worse than polling-based methods, one case of it producing more or less identical forecasts to polling-based methods, and now one case of it producing forecasts that were not much worse and differed in several seats.  This year might be a case in which a combined polling/betting method would have done slightly better than a poll-based method alone, but the very marginal gain that might be possible isn't worth it given the risks shown in 2013.

In general, seat betting has shown that it is strongly influenced by the national polls.  When it does deviate from them, there's been no evidence lately of it correctly second-guessing what national polls are doing wrong.  Yet I suspect people will continue to follow seat betting odds as if they are predictive for no other reason than that the data exist and are easy to look up, talk about and construct elaborate models of.

It's going to be difficult to use models based on polling generally to predict federal elections for a while.  I'm actually intending to stop doing it, because it doesn't seem to be a core part of what I do based on visitor levels to particular articles on this site, so I'd rather focus my predictive efforts in areas where they're less at risk of being wrong.  (I will however continue to offer translations of poll readings into seat tallies on a provisional "if these polls aren't nonsense" type basis.)

The 2019 pollster failure creates two problems.  Firstly, it's very hard to model pollster house effects reliably when polls have suddenly displayed unusually large predictive errors at a given election - was this a freak event down to the character of the election, or a more systematic thing at federal level?  Secondly, if the pollsters adopt unique responses to the polling failure, those responses might cause them to be wrong in the opposite direction (as happened in the UK in 2017).

Ultimately, any form of betting-based prediction has the same problem, because the established behaviour so far is that betting markets are strongly influenced by polls.  Perhaps they will now become less so, but if they do, their departures from the polls are at least as likely to make them more wrong as less.
