Thursday, July 21, 2016
Seat Betting Improves, But Still No Miracle, At The 2016 Federal Election
Advance Summary
1. Seat betting markets, sometimes considered to be highly predictive, returned another modest result at the 2016 federal election, predicting thirteen seats incorrectly.
2. Seat betting markets did however improve as a predictor of total seat wins compared to the 2013 election, at which they greatly overestimated the Coalition's winning seat margin.
3. Seat betting markets also performed impressively in their predictions of "non-classic" seats, getting only one (Cowper) wrong.
4. Overall the predictions of seat betting markets were extremely similar to those of poll-based projections.
5. Large swings to Labor in outer suburban and low-income seats, generally missed in public seat polling, gave seat betting markets an opportunity to show superior insight.
6. That chance was, however, generally missed: seat betting showed no insight beyond what the public polls provided.
7. In terms of seat totals, individual seat markets again showed no notable insights, and were again outperformed by betting markets dealing specifically with the number of seats won.
-----------------------------------------------------------------------------------------------------------
With only one seat (Herbert) still in play and likely to remain so for over two weeks, it's a reasonable time for a review of the accuracy or otherwise of betting on individual seats as a predictor of the federal election results. Seat betting odds often get media attention as a way to try to predict election outcomes, though I didn't see all that much of that this time, its wings having been more or less ripped off in 2013. Lest anyone think 2013 was the exception that proves the rule, I find that in 2016 seat betting markets were slightly better, but not miraculously so.
I also looked at this theme after the 2013 federal election. In that case, there was a massive run on seat markets late in the game that resulted in them greatly overestimating the number of Labor losses. Seat markets predicted ten Labor losses to the Coalition that didn't occur while missing only one Coalition gain. They were also wrong in three non-classic contests, for a total of fourteen incorrect predictions (plus one seat in which final markets were split).
There were two obvious reasons for seat betting to get the 2013 contest wrong. One was the difference between national polls (which pointed to a result around that which actually happened) and seat polls (which systematically implied a far more serious thumping and predicted many losses that didn't actually occur). The second was that the polls were still moving the Coalition's way in the final days, and some people thought they could end up with 55% two-party preferred or more, especially given that final polls had tended to overestimate Labor on average in 2007 and 2010.
Those problems really didn't exist this time. There was hardly any movement in aggregated national polling through the whole campaign. Neutral seat polls (those commissioned independently by media or done by pollsters themselves, rather than those commissioned by parties or lobby groups) expected an average swing against the Coalition in the marginal seats that was around or slightly less than the expected national swing (full figures on this will be tallied when all the results are in). The national polls and seat polls seemed to be in harmony - as it turned out, a little too much harmony! (I will review national and seat polling in detail when all of the votes are in.)
So how did seat betting go without the distractions of obviously skewed seat polls or rapid movements?
Seat Betting Outcomes
I tracked seat betting at various points through the campaign with a final check at 2 am on election day.
At this point the seat markets had the Coalition favourite in 79 seats, Labor in 64.5, and others in 6.5 (Batman was a split market between Labor and the Greens - who was favourite varied between different sites).
The following seats were expected to be won by Labor but have been won by the Coalition:
Chisholm (gain)
Page
Capricornia
Petrie
The following seats were expected to be won by the Coalition but have been gained by Labor:
Braddon
Bass
Hindmarsh
Macquarie
Longman
Lindsay
Cowan
Herbert
Also, the markets expected Rob Oakeshott to take Cowper from the Nationals, but he didn't.
So all up, the markets were correct in 136 or 137 seats and wrong in 12 or 13 (one of these a non-classic seat), with one seat not predicted. This is a very slight improvement on 2013 (135 right, 14 wrong, 1 not predicted). In 2013 three of the errors were in non-classic seats, so in terms of 2PP seats, seat markets performed more or less exactly the same as in 2013 (but without such a strong skew in favour of one party).
The markets improved greatly, however, in their predictions of non-classic seats. In 2013 there were really only about half a dozen competitive non-classic contests (excluding Liberal-vs-National seats) and the markets got three of them wrong. In 2016 there were at least a dozen such contests and the markets got just one wrong, with a fence-sit on another (which looks like a pretty reasonable fence-sit since the final margin in Batman looks like about 51:49). Amusingly, in 2013 all the errors were against the winning crossbenchers, but in 2016 what errors there were were in their favour. Many people, me included, expected the increased third-party vote and the emergence of NXT to translate into more crossbench seats. Instead, one new crossbencher won easily, while prospective crossbenchers missed out by a point in Batman, two points in Grey and probably less than a point in Melbourne Ports.
Tracking
The following was my tracking graph of seat betting favourites at various stages, together with a "close seats" adjusted figure which split a seat .7 to .3 if both parties were inside $3 (a sketch of this adjustment follows the colour key below):
Key to colours:
Dark blue - Coalition favoured in all markets
Light blue - Coalition favoured in some markets, tied in others
Grey - seat tied, or different favourites in different markets
Pink - Labor favoured in some markets, tied in others
Red - Labor favoured in all markets
Dark green - Green favoured in all markets
Orange - NXT favoured in all markets
Purple - Ind favoured in all markets
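For anyone who wants to replicate the "close seats" adjustment, here is a minimal sketch; the seat names and prices are invented for illustration and are not the actual final market odds.

```python
# A minimal sketch of the "close seats" adjustment described above: the favourite
# gets a full seat unless both parties are inside $3, in which case the seat is
# split 0.7/0.3. Seat names and prices are illustrative only, and tied markets
# (equal prices) are not handled here.

def adjusted_tally(markets):
    """markets: dict of seat -> (coalition_price, labor_price) in dollar odds."""
    tally = {"Coalition": 0.0, "Labor": 0.0}
    for seat, (coal, alp) in markets.items():
        favourite, outsider = ("Coalition", "Labor") if coal < alp else ("Labor", "Coalition")
        if coal < 3.0 and alp < 3.0:      # both inside $3: treat as a close seat
            tally[favourite] += 0.7
            tally[outsider] += 0.3
        else:                             # clear favourite: full seat
            tally[favourite] += 1.0
    return tally

example = {
    "Seat A": (1.30, 3.40),   # clear Coalition favourite
    "Seat B": (1.80, 1.95),   # close seat, split 0.7/0.3
    "Seat C": (4.50, 1.18),   # clear Labor favourite
}
print(adjusted_tally(example))  # {'Coalition': 1.7, 'Labor': 1.3}
```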
In terms of individual favourites, the markets always had the wrong favourite in eight of the classic-2PP seats (nine if Herbert falls), and had far too many Coalition wins in March and April while the Coalition was still polling strongly. In early May the seat total became more accurate as polls crashed to the 50-50 line, but the individual seat forecasts didn't immediately do so. From mid-May on individual seats bounced around but the list of favourites typically had about 11 classic seats wrong (again, potentially add one for Herbert). The list of favourites didn't become any more accurate as election day approached, but at least this time it didn't get worse.
The total number of seats the Coalition was expected to win was at least 2-3 seats higher than the eventual score of 76 every time I checked. But at most times the total as adjusted for "close seats" was very close to what eventually happened. This was because, through most of the campaign, there were more seats on the Coalition side that the markets expected to be won but were not very sure about. As election day approached, however, the markets became more confident about a bunch of seats that had earlier been considered at severe risk. In fact the Coalition lost at least three and perhaps four seats where Labor were longer than $3 (Lindsay, Bass, Longman and Herbert), while winning no seat where they themselves were at such odds.
It's also notable that a few days out, the markets had all the non-classic seats correct, only to incorrectly flip on Cowper and partly flip on Batman in the final days.
A missed opportunity
The most striking feature of the seat betting lists of favourites was how similar they were to objective-data-based models. (This was also the case in the 2015 UK election).
To give the simplest example of all, suppose you just took the average of the headline 2PPs in the final polls by the five pollsters active in the last week (50.6) and applied the swing suggested by that to the national pendulum - ignoring seat polls, personal votes, state polls and probabilities. That simple method would have differed from the seat betting when it came to predicting winners in only five seats (Braddon, Hindmarsh, Eden-Monaro by a very small margin, Bass and Macarthur) and been right about two of those. It would have had the Coalition as favourite in about four too many seats, which is a reflection of the success of Labor's marginal seats campaign. Normally such a model would underestimate a government's performance if that government had won a lot of seats the election before, because of the personal-vote (sophomore) boost its new members gain when defending a seat for the first time.
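As a rough illustration of that simplest method, here is a minimal sketch; the pendulum margins are placeholders rather than the actual post-2013 pendulum, and the 2013 2PP figure is approximate.

```python
# A minimal sketch of the simple uniform-swing method described above: take the
# swing implied by the final polls and flip every Coalition-held seat whose
# margin is smaller than that swing. The margins below are placeholders.

pendulum = {
    # seat: Coalition 2PP margin (percentage points) from the previous election
    "Seat A": 0.8,
    "Seat B": 2.4,
    "Seat C": 3.1,
    "Seat D": 4.6,
}

coalition_2pp_2013 = 53.5      # approximate Coalition 2PP at the 2013 election
final_poll_average = 50.6      # average Coalition 2PP in the final 2016 polls (as above)
swing_to_labor = coalition_2pp_2013 - final_poll_average   # about 2.9 points

predicted_labor_gains = sorted(seat for seat, margin in pendulum.items()
                               if margin < swing_to_labor)
print(f"Uniform swing of {swing_to_labor:.1f} points flips: {predicted_labor_gains}")
# -> Uniform swing of 2.9 points flips: ['Seat A', 'Seat B']
```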
My own 2PP forecasting model (which was based on national polling, public seat polling and personal vote effects - completely ignoring any possible subjective inputs) ended up disagreeing with the seat betting in just three of the classic seats - Page (correctly) and Eden-Monaro and Macarthur (where the markets were right). A similar model greatly outperformed seat betting markets in 2013, but this time did one seat worse.
Of these, Page and Eden-Monaro had no public seat polls but both had commissioned seat polls showing the Coalition losing (incorrectly in the case of Page). Macarthur's first two public seat polls had the Liberals trailing slightly, then two more put it at 50:50, only for Labor to win 58:42. Score one for seat betting there. However, my model, unlike the seat markets, continued to show that the Coalition had a much shakier set of expected wins than Labor right until the end.
On the whole, whether the seat markets were at the end driven by the modellers employed by the bookies or by the money being thrown at them by punters, they were not telling a different story to the way psephologists were reading the national and local polls.
And yet, if there was an opportunity for them to do so, this was it. One theory about seat markets is that they should be able to capture the vibe of campaigns on the ground, and hence if the seat polls are wrong those "in the know" will place bets accordingly. Another is that seat markets are influenced by party insiders who place bets on the basis of internal polling - but if the internal polling in seats like Lindsay and Bass was saying something different to the public polling, then the odds did not reflect it.
What models and markets alike missed
Overall this House of Reps election had a surprisingly unsurprising outcome. The polls were right in terms of the national 2PP vote, the typical projection off those polls according to practically everyone modelling it was a close Coalition majority win, and the Coalition pulled up only very slightly short, in seat terms, of its expected score according to models.
But there is a clear story in the seat-by-seat outcomes of this election, told in the statistics compiled by William Bowe. The election was about the personal economy to a far greater extent than most observers or the Coalition campaign (at least) ever realised. (See also Kosmos Samaras on house price/swing relationships in Melbourne.) Swings to Labor were high in areas with low average education, young average age, high rates of mortgaging and low median income. The median income link would be even stronger if income-poor, asset-rich rural seats were ignored. Combining these factors wouldn't have predicted the blowouts in some seats and the easy holds in others, but it would at least have made it clear that some seats were much more at risk than seat polling was saying.
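As an illustration of the kind of seat-level check this implies, here is a minimal sketch; the file name and column names are hypothetical, and a real version would need census-derived seat profiles matched to the swing figures.

```python
# A minimal sketch of a seat-level check of the relationships described above:
# correlate the 2PP swing to Labor with demographic variables. The file and
# column names are hypothetical; real seat profiles would need to be compiled
# separately (e.g. from ABS census data).

import pandas as pd

seats = pd.read_csv("seat_demographics.csv")   # hypothetical: one row per seat

for variable in ["median_income", "median_age", "pct_mortgaged", "pct_degree"]:
    r = seats["swing_to_alp"].corr(seats[variable])
    print(f"correlation of swing with {variable}: {r:+.2f}")

# Re-checking the income link with income-poor, asset-rich rural seats excluded,
# as suggested above (again assuming a hypothetical 'is_rural' flag):
urban = seats[seats["is_rural"] == 0]
print(urban["swing_to_alp"].corr(urban["median_income"]))
```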
Many commentators assumed voters were ignoring all that boring policy stuff, but the results suggest voters were paying attention to these issue differences, whether the issue was negative gearing or tax cuts or the cost of seeing the doctor. Either that or having Malcolm Turnbull as PM instead of Tony Abbott was enough to drive some of the differences in swing all by itself.
The suggestion is that much richer demographic data in our public polling might be a bigger help in predicting which seats were going to record unusual swings - at least in this sort of campaign - than public seat polling is. But with the seat polls clearly getting some of these seats wrong, the failure of seat betting to do any better shows that seat betting was not doing anything special or different to anyone else.
One possible reason for this is that nobody has yet discovered the magic formula that turns apparent 75% probabilities into 95% chances, or that allows you to reliably predict, say, 145 seats nationwide. Another, however, is that even if seat markets reflected the pure views of punters (as mediated through the formulae bookmakers use to set odds, with no subjective input from the bookmaker), the most informed bettors are actually excluded from the markets.
During the election a few election punters contacted me about their experiences in being blocked by online political bookmakers (or having their bet amounts restricted) for winning too often. This brings up a problem noted by @pollytics from time to time: it is silly to expect too much by way of collective wisdom from a bunch of people who are, between them, losing money.
I'm not sure whether I will pay so much attention to seat betting markets in future elections - it's fun, but the point that they won't tell us anything special seems to be well proven after the last two elections. If there is something that can greatly improve our ability to predict specific seat results, seat betting won't be it.
(31 July: Minor edits made to reflect that Labor has won Herbert).
Comments
"Swings to the Coalition were high" etc etc. Or swings to Labor, Kevin?
Ta, fixed.
I never paid much attention to the markets until I read Simon Jackman's paper on their performance in 2010. Simon said that the betting markets changed within an hour or two of the release of a new poll and thus that the markets were responding to polls. Even if the markets were pari-mutuel (they aren't), it would still be the case that the punters were responding to the polls. I tracked the markets from mid-May until Election Eve - analysing only the markets on a House majority (Bet365, SportsBet, Hill, Centrebet, Luxbet and Unibet). I then took out the vigorish and converted the results to a % chance of a win by the two major parties. I compared this with the % chance of a win as given by the polls. This, I based on the mean and standard deviation of the relationship between the TPP% and the percentage of seats won for all elections 1949-2013. In mid-May both the polls and the markets gave the COAL:ALP chances as 75%:25%. However, starting in mid-June the markets began to move even though the polls didn't. By Election Eve, when the polls were showing 55%:45% for a Coalition win, the markets were showing 88%:12%.
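The vigorish-removal step described above amounts to rescaling the bookmaker-implied probabilities so they sum to one; a minimal sketch, with invented prices rather than the actual House-majority odds:

```python
# A minimal illustration of removing the vigorish (overround) from two-way odds:
# convert dollar prices to implied probabilities and rescale them to sum to 1.
# The prices below are invented for illustration.

def devig(coalition_price, labor_price):
    raw_coal = 1.0 / coalition_price      # implied chance including bookmaker margin
    raw_alp = 1.0 / labor_price
    overround = raw_coal + raw_alp        # exceeds 1 because of the margin
    return raw_coal / overround, raw_alp / overround

coal_chance, alp_chance = devig(1.12, 6.50)
print(f"Coalition {coal_chance:.0%}, Labor {alp_chance:.0%}")   # Coalition 85%, Labor 15%
```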
I made a number of attempts to project a historic polling-based chance of victory for the Coalition through the campaign, and whatever I did the estimate never fell below about 65%. I got similar estimates to the bookies only if I factored in the Coalition's historic tendency to outperform its polling, which wasn't actually apparent in this case.
DeleteThe main reason my models always gave the Coalition a clear edge was Labor's disadvantage in converting 2PP to seat share at this election as a result of personal vote effects (mostly those arising from sophomore effects from the 2013 result). I had Labor as needing 50.8 to 50.9 to win enough seats to govern. As it turned out Labor clearly outperformed the Coalition at winning marginal seats where personal votes were not a factor, and probably would have won 73 seats and presumably government with something like 50.6.
The other issue is that my reading of where the 2PP was actually at turned out to be a little strong for the government, because of herding and/or a small degree of preferencing shift. All the same, I think that even had I known that, my methods would still have had the Coalition's chances above 60% at all times.
Was Betfair running seat markets? And if they were, was there much liquidity? I think the point about dumb money is a strong one - I highly doubt Sportsbet and co. are taking much money on these markets, it's probably more of a gimmick to them. If there is any smart money it would overwhelmingly be bet on an exchange (eg Betfair).
Betfair was running seat markets but a lot of them had either no action or very little. Only a few seats seemed to have remotely serious amounts of money matched.
This is an interesting take on the novelty type markets (which you could argue most of the election markets are). I'm not sure if I totally believe his advertising figures, but you get the idea...
http://www.daily25.com/bookies-win-telling-lost-laurie-oakes-unwittingly-gave-sportsbet-millions-free-advertising/
Yes, there's a widespread perception that the bookies gain more benefit in advertising from running seat betting markets than they do by way of profit, though I think the bookies quite enjoy getting publicity for the idea that these are not serious markets.
I don't mind gimmicky election bet offerings but what I do take exception to is bookies paying out on elections in advance. I'd like to see that practice seriously banned.
It would be interesting to see a graph of betting probabilities vs final 2cp% (on an appropriate scale) as this would be more informative than the binary variables of favourites vs winners. I suspect that Western Sydney seats like Lindsay, Greenway and Macarthur would be some of the ones with the biggest discrepancies, although you can't really blame the markets in Macarthur as the polls were way out there too.
Greenway is an interesting one given it had the presumably poor Liberal candidate last time and there were quite a few comments on Tallyroom saying things like Greenway Liberal was the best value bet of the election! http://www.tallyroom.com.au/aus2016/greenway2016/comment-page-2#comments
Simon Jackman has graphs of this for Sportsbet final odds at https://jackman.shinyapps.io/postElection/