Thursday, July 21, 2016
Seat Betting Improves, But Still No Miracle, At The 2016 Federal Election
1. Seat betting markets, sometimes considered to be highly predictive, returned another modest result at the 2016 federal election, predicting thirteen seats incorrectly.
2. Seat betting markets did however improve as a predictor of total seat wins compared to the 2013 election, at which they greatly overestimated the Coalition's winning seat margin.
3. Seat betting markets also performed impressively in their predictions of "non-classic" seats, getting only one (Cowper) wrong.
4. Overall the predictions of seat betting markets were extremely similar to those of poll-based projections.
5. Large swings to Labor in outer suburban and low-income seats, generally missed in public seat polling, gave seat betting markets an opportunity to show superior insight.
6. That chance was, however, generally missed, suggesting that seat betting offers no such insight.
7. In terms of seat totals, individual seat markets again showed no notable insights, and were again outperformed by betting markets dealing specifically with the number of seats won.
With only one seat (Herbert) still in play and likely to remain so for over two weeks, it's a reasonable time for a review of the accuracy or otherwise of betting on individual seats as a predictor of the federal election results. Seat betting odds often get media attention as a way to try to predict election outcomes, though I didn't see all that much of that stuff this time, its wings having been more or less ripped off last time around. Lest anyone think 2013 was the exception that proves the rule, I find that in 2016 seat betting markets were slightly better, but not miraculously so.
I also looked at this theme after the 2013 federal election. In that case, there was a massive run on seat markets late in the game that resulted in them greatly overestimating the number of Labor losses. Seat markets predicted ten Labor losses to the Coalition that didn't occur while missing only one Coalition gain. They were also wrong in three non-classic contests, for a total of fourteen incorrect predictions (plus one seat in which final markets were split).
There were two obvious reasons for seat betting to get the 2013 contest wrong. One was the difference between national polls (which pointed to a result around that which actually happened) and seat polls (which systematically implied a far more serious thumping and predicted many losses that didn't actually occur). The second was that the polls were still moving the Coalition's way in the final days, and some people thought they could end up with 55% two-party preferred or more, especially given that final polls had tended to overestimate Labor on average in 2007 and 2010.
Those problems really didn't exist this time. There was hardly any movement in aggregated national polling through the whole campaign. Neutral seat polls (those commissioned independently by media or done by pollsters themselves, rather than those commissioned by parties or lobby groups) expected an average swing against the Coalition in the marginal seats that was around or slightly less than the expected national swing (full figures on this will be tallied when all the results are in). The national polls and seat polls seemed to be in harmony - as it turned out, a little too much harmony! (I will review national and seat polling in detail when all of the votes are in.)
So how did seat betting go without the distractions of obviously skewed seat polls or rapid movements?
Seat Betting Outcomes
I tracked seat betting at various points through the campaign with a final check at 2 am on election day.
At this point the seat markets had the Coalition favourite in 79 seats, Labor in 64.5, and others in 6.5 (Batman was a split market between Labor and the Greens - who was favourite varied between different sites).
The following seats were expected to be won by Labor but have been won by the Coalition:
The following seats were expected to be won by the Coalition but have been gained by Labor:
Also the markets expected Cowper to be taken from the Nationals by Rob Oakeshott, but it wasn't.
So all up, the markets were correct in 136 or 137 seats and wrong in 12 or 13 (one of these a non-classic seat), with one seat not predicted. This is a very slight improvement on 2013 (135 right, 14 wrong, 1 not predicted). In 2013 three of the errors were in non-classic seats, so in terms of 2PP seats, seat markets performed more or less exactly the same as in 2013 (but without such a strong skew in favour of one party).
The markets improved greatly, however, in their predictions of non-classic seats. In 2013 there were really only about half a dozen competitive non-classic contests (excluding Liberal-vs-National seats) and the markets got three of them wrong. In 2016 there were at least a dozen such contests and the markets got just one wrong, with a fence-sit on another (which looks like a pretty reasonable fence-sit, since the final margin in Batman looks like about 51:49). Amusingly, in 2013 all the errors were against the winning crossbenchers, but in 2016 what errors there were were in their favour. Many people, me included, expected the increased third-party vote and the emergence of NXT to translate into more crossbench seats. Instead, one new crossbencher won easily, while prospective crossbenchers missed out by a point in Batman, two points in Grey and probably less than a point in Melbourne Ports.
The following was my tracking graph of seat betting favourites at various stages, together with a "close seats" adjusted figure which split a seat .7 to .3 if both parties were inside $3:
Key to colours:
Dark blue - Coalition favoured in all markets
Light blue - Coalition favoured in some markets, tied in others
Grey - seat tied, or different favourites in different markets
Pink - Labor favoured in some markets, tied in others
Red - Labor favoured in all markets
Dark green - Green favoured in all markets
Orange - NXT favoured in all markets
Purple - Ind favoured in all markets
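For readers who want the mechanics spelled out, the .7/.3 "close seats" adjustment can be sketched as follows. This is a rough reconstruction of the rule described above, not my actual spreadsheet, and the odds in the examples are invented:

```python
def seat_shares(odds_a, odds_b, threshold=3.0):
    """Split one seat's expected win between parties A and B.

    Decimal odds: the shorter price is the favourite. If both prices
    are inside the threshold ($3), the seat splits .7 to the favourite
    and .3 to the other; a dead tie splits .5/.5; otherwise the
    favourite gets the whole seat.
    """
    if odds_a == odds_b:
        return (0.5, 0.5)
    fav_is_a = odds_a < odds_b
    if max(odds_a, odds_b) <= threshold:      # both inside $3: a "close seat"
        return (0.7, 0.3) if fav_is_a else (0.3, 0.7)
    return (1.0, 0.0) if fav_is_a else (0.0, 1.0)

# Invented examples:
print(seat_shares(1.60, 2.30))   # close seat: splits 0.7 / 0.3
print(seat_shares(1.20, 4.50))   # safe seat: counts 1.0 / 0.0
```

Summing these shares across all 150 seats gives the adjusted total plotted alongside the raw favourites count.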
In terms of individual favourites, the markets always had the wrong favourite in eight of the classic-2PP seats (nine if Herbert falls), and had far too many Coalition wins in March and April while the Coalition was still polling strongly. In early May the seat total became more accurate as polls crashed to the 50-50 line, but the individual seat forecasts didn't immediately do so. From mid-May on individual seats bounced around but the list of favourites typically had about 11 classic seats wrong (again, potentially add one for Herbert). The list of favourites didn't become any more accurate as election day approached, but at least this time it didn't get worse.
The total number of seats the Coalition was expected to win was at least 2-3 higher than its eventual score of 76 every time I checked. But at most times the total as adjusted for "close seats" was very close to what eventually happened. This was because, through most of the campaign, there were more seats on the Coalition side that the markets expected to be won but were not very sure about. As election day approached, however, the markets became more confident about a bunch of seats that had earlier been considered at severe risk. In fact the Coalition lost at least three and perhaps four seats where Labor were longer than $3 (Lindsay, Bass, Longman and Herbert), while winning no seat where they themselves were at such odds.
It's also notable that a few days out, the markets had all the non-classic seats correct, only to incorrectly flip on Cowper and partly flip on Batman in the final days.
A missed opportunity
The most striking feature of the seat betting lists of favourites was how similar they were to objective-data-based models. (This was also the case in the 2015 UK election).
To give the simplest example of all, suppose you just took the average of the headline 2PPs in the final polls by the five pollsters active in the last week (50.6) and applied the swing suggested by that to the national pendulum - ignoring seat polls, personal votes, state polls and probabilities. That simple method would have differed from the seat betting when it came to predicting winners in only five seats (Braddon, Hindmarsh, Eden-Monaro by a very small margin, Bass and Macarthur) and been right about two of those. It would have made the Coalition favourite in about four too many seats, which is a reflection of the success of Labor's marginal seats campaign. Normally such a model would underestimate a government's performance if that government had won a lot of seats the election before.
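The uniform-swing method above is mechanical enough to sketch in a few lines. The seat names and margins below are illustrative placeholders, not the real pendulum, and the 2013 2PP figure is approximate:

```python
PREV_2PP = 53.5      # Coalition two-party-preferred, 2013 (approximate)
POLL_2PP = 50.6      # average of final-week poll headline 2PPs, 2016
swing_to_labor = PREV_2PP - POLL_2PP   # roughly 2.9 points

# Hypothetical pendulum: seat -> Coalition 2PP margin (points) last election.
pendulum = {"Marginalville": 1.5, "Middleton": 3.2, "Safeharbour": 8.0}

# A seat flips if the uniform swing exceeds its existing margin.
for seat, margin in pendulum.items():
    winner = "Coalition" if margin > swing_to_labor else "Labor"
    print(f"{seat}: {winner} (post-swing margin {margin - swing_to_labor:+.1f})")
```

Everything that makes real forecasting hard - personal votes, non-classic contests, uneven swings - is deliberately left out here; the point is that even this crude baseline landed within a handful of seats of the betting markets.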
My own 2PP forecasting model (which was based on national polling, public seat polling and personal vote effects - completely ignoring any possible subjective inputs) ended up disagreeing with the seat betting in just three of the classic seats - Page (correctly) and Eden-Monaro and Macarthur (where the markets were right). A similar model greatly outperformed seat betting markets in 2013, but this time did one seat worse.
Of these, Page and Eden-Monaro had no public seat polls but both had commissioned seat polls showing the Coalition losing (incorrectly in the case of Page). Macarthur's first two public seat polls had the Liberals trailing slightly, then two more put it at 50:50, only for Labor to win 58:42. Score one for seat betting there. However, my model, unlike the seat markets, continued to show that the Coalition had a much shakier set of expected wins than Labor right until the end.
On the whole, whether the seat markets were at the end driven by the modellers employed by the bookies or by the money being thrown at them by punters, they were not telling a different story to the way psephologists were reading the national and local polls.
And yet, if there was an opportunity for them to do so, this was it. One theory about seat markets is that they should be able to capture the vibe of campaigns on the ground, and hence if the seat polls are wrong those "in the know" will place bets accordingly. Another is that seat markets are influenced by party insiders who place bets on the basis of internal polling - but if the internal polling in seats like Lindsay and Bass was saying something different to the public polling, then the odds did not reflect it.
What models and markets alike missed
Overall this House of Reps election had a surprisingly unsurprising outcome. The polls were right in terms of the national 2PP vote, the typical projection off those polls according to practically everyone modelling it was a close Coalition majority win, and in seat terms the Coalition pulled up just slightly short of its expected score according to models.
But there is a clear story of the seat-by-seat outcomes of this election that is told in the statistics compiled by William Bowe. The election was about the personal economy to a far greater extent than most observers or the Coalition campaign (at least) ever realised. (See also Kosmos Samaras on house price/swing relationships in Melbourne.) Swings to Labor were high in areas with low average education, young average age, high rates of mortgaging and low median income. The median income link would be even stronger if income-poor asset-rich rural seats were ignored. Combining these factors wouldn't have predicted the blowouts in some seats and the easy holds in others, but would at least have made it clear some were much more at risk than seat polling was saying.
Many commentators assumed voters were ignoring all that boring policy stuff, but the results suggest voters were paying attention to these issue differences, whether the issue was negative gearing or tax cuts or the cost of seeing the doctor. Either that or having Malcolm Turnbull as PM instead of Tony Abbott was enough to drive some of the differences in swing all by itself.
The suggestion is that much richer demographic data in our public polling might be a bigger help in predicting which seats will record unusual swings - at least in this sort of campaign - than public seat polling is. But with the seat polls clearly getting some of these seats wrong, the failure of seat betting to do any better shows that seat betting was not doing anything special or different to anyone else.
One possible reason for this is that nobody has yet discovered the magic formula that turns apparent 75% probabilities into 95% chances or that allows you to reliably predict, say, 145 seats nationwide. Another, however, is that even if seat markets reflected the pure views of punters (as mediated through formulae used by bookmakers to set the odds, and with no input from the bookmaker), the most informed bettors are actually excluded from the markets.
During the election a few election punters contacted me about their experiences in being blocked by online political bookmakers (or having their bet amounts restricted) for winning too often. This brings up a problem noted by @pollytics from time to time: it is silly to expect too much by way of collective wisdom from a bunch of people who are, between them, losing money.
I'm not sure whether I will pay so much attention to seat betting markets in future elections - it's fun, but the point that they won't tell us anything special seems to be well proven after the last two elections. If there is something that can greatly improve our ability to predict specific seat results, seat betting won't be it.
(31 July: Minor edits made to reflect that Labor has won Herbert).