Marginals Mayhem 2: Labor Losing The Lot?
1. A recently released JWS Research poll of 54 marginal seats has been reported as projecting a loss of 18 Labor marginal seats.
2. The poll results should be treated with caution because the poll is commissioned by a source with links to the Liberal Party, and because it is unlikely that the Coalition would have such a large lead in all "marginal" seats while holding only a modest lead nationwide.
3. Even assuming the poll results are extremely accurate, it is not correct to interpret them as projecting a loss of that many marginal seats.
4. The reason it is not correct is that there is variation between states in the polled fate of marginal seats, which would enable Labor to withstand the swing better than if the swing were uniform.
5. On this basis, the poll does not show Labor as being likely to lose more marginal seats than would be expected based on national swings or other state swing data.
This article also includes brief comments about the Sportsbet seat betting odds released this week.
Oh, and pictures of wirrahs.
There are three problem areas so far as the JWS poll is concerned: (i) non-neutral commissioning (ii) apparent inconsistency with the national 2PP, unresolved by benchmarking (iii) misleading and confusing presentation of projections. But with all that taken into account, it still seems to say some useful things, so I have resisted the temptation to create a scale of wirrah awards and give it a line of the things, like this:
As I warned in Silly Lilleys: Is Wayne Swan Losing His Seat? among the first questions that should be asked when examining any opinion poll is: who paid for it? Polls conducted by pollsters on their own initiative, or at the behest of newspapers (especially as part of a regular series) tend to have a better record than polls commissioned by parties, interest groups, and other mysterious sources. At least with regular poll series such as Essential, Newspoll, Galaxy, Morgan and Nielsen, or even EMRS, we can become familiar over time with the pollsters' consistent foibles and account for them. Commissioned polls are always subject to doubt because even if the polling methods are impeccable, the organisation commissioning them may decide to only release the poll if it likes the results. Not surprisingly, a very high proportion of commissioned polls have findings compatible with the purposes of the agency releasing them.
In the case of this JWS Research poll, the commissioner is ECG Advisory Solutions, a corporate strategic advice unit boasting "unique access to the highest levels of government". ECG Advisory Solutions is a registered lobbyist in Victoria and, according to The Australian, both its principal operators are former advisers to Peter Costello. The director, David Gazard, ran for the Liberals for the famous "litmus seat" of Eden-Monaro last election, only to find that the litmus was still red. The poll being commissioned by a group with clear Liberal Party links does not prove that the group has any political motives at all in publishing this poll, but it provides reason for caution about the poll's findings - especially given that those findings appear friendly to the Liberal Party. Another reason for caution about the poll is this: this is a firm that presumably does well out of its monopoly on specialised political insights. So why would it pay for information of this kind and then give it away to the general public for free, instead of on-selling it to its clients? Publicity is one obvious motive, but even then a group gets better publicity out of a very striking result than a humdrum one.
The Pollster's Record
JWS Research is especially well known for its giant robopoll of over 20,000 respondents in 54 marginals conducted a week before the last federal election. The summary posted on the Poll Bludger link showed (compared with the final results) that the robopoll performed very well in Queensland (where seven of its eight predicted Coalition wins compared to notional standings occurred, as well as another two) but in other key marginals its results weren't great. In NSW only two of its six predicted changes occurred (as well as two of its six seats not predicted to change), in Victoria it predicted four changes (of which two happened), and outside the three most populous states it predicted three changes (two of which happened) while missing one.
All up 18/31 key seats as identified by Poll Bludger went with the poll's prediction. (I believe it projected all the other marginals correctly, but many of those were not in doubt.) There was a late swing to the Coalition, explaining nine of the misses, while four seats projected to fall were retained despite this.
JWS also polled Victoria in 2010 (state election) with very good results. Of 24 marginals canvassed the pollster recorded a tied 2PP result in one (which changed hands), while 18 of the other 23 followed the poll finding (7 of 9 predicted losses and 11 of 14 predicted holds). Furthermore, two of the misses involved the Greens, who tend to underperform their polling with most pollsters. The others all involved a late swing.
(These results do not, of course, mean the pollster should expect such a good strike rate from polls taken this far out from an election as opposed to one week before it.)
In May 2011 JWS conducted a smaller version of the present exercise, showing Labor trailing 42:58 in the nation's 20 most marginal seats (at a time when the national picture was about 46:54). Findings were said to be especially bad in SA, NT and WA (where all the relevant seats were Coalition held anyway). Leadership ratings in these 20 marginals were broadly compatible with those being recorded at the time by other pollsters nationwide. If anything, they were a bit favourable to Julia Gillard compared to Newspoll.
In November the outfit produced a Victorian state poll that showed the Baillieu government with a reasonable lead, quite at odds with results in the last few months from Newspoll and ReachTEL.
The record of the pollster in self-or-media-commissioned polls seems reasonable generally, but we don't have the benefit of a long run of series data that can be used to benchmark it against other pollsters.
The Current Poll
The current poll covers the 54 seats that were the most marginal at the last election, including everything on a margin of less than 6%, and not including seats (like Bass) that are known to be at risk but further up the tree.
The overall finding of the poll is that there is an average swing of 4.8% to the Coalition in these 54 marginal seats. But this swing is supposed to be very oddly distributed. In the ten seats held by Labor on margins of less than 3%, it is found that the swing to the Coalition is a massive 10%, meaning Labor would lose the lot if this was evenly distributed. But in the seats held by the Coalition by similar margins, the swing to the Coalition is just 1.9%. On a statewide basis, there is a swing to Labor in Queensland marginals of 2.8% but against it in NSW marginals of 12.2%.
Based on other recent polls, the national 2PP at the time the poll was taken (Jan 17) was about 52.5:47.5 to the Coalition, a swing of 2.7 points from the last election. So if the 54 most marginal seats in the country are swinging by 4.8% but the country as a whole is swinging by 2.7%, then that means that the remaining 96 seats, consisting mostly of the notionally safer seats on either side, are swinging to the Coalition by only about 1.5% on average. And given the evidence that the notionally "safe" seats in Tasmania (some of which are anything but) are swinging horrendously, that leaves little over 1 point on average for the other 91.
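The arithmetic here is just a weighted average; a quick sketch (assuming, for simplicity, that every seat has equal enrolment) backs out the implied swing in the remaining 96 seats:

```python
# If 54 marginals swing 4.8% to the Coalition while the national
# swing is 2.7%, the remaining 96 seats must average much less.
# Simplifying assumption: every seat has equal enrolment.
TOTAL_SEATS = 150
marginal_seats, marginal_swing = 54, 4.8
national_swing = 2.7

other_seats = TOTAL_SEATS - marginal_seats
other_swing = (TOTAL_SEATS * national_swing
               - marginal_seats * marginal_swing) / other_seats
print(f"{other_swing:.2f}")  # roughly 1.5 points
```

Seats are not in fact equally enrolled, so this is only indicative, but the gap between the marginals and the rest is too large for enrolment differences to explain away.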
So, we're supposed to have a pattern in which marginal seats generally swing strongly to the Opposition, while safe seats on either side on average barely move? That seems rather unlikely, especially with NSW having 15 Labor seats on margins above 6%, some of which are also supposed to be vulnerable to defeat. An easier pattern to credit would be one in which there were more substantial average swings to the Coalition (even if only a few points on average) in "safe" seats on both sides as well. But for that to be the case the Coalition's 2PP nationwide would be more like 54% at least.
So it's possible that this poll, despite its massive size, exaggerates the Coalition's lead in the marginals sampled slightly by random chance (after all, a 1.7% margin of error is still capable of making a substantial difference to a single poll), or else that it employs methods that would be expected to lead to a 54-ish 2PP for the Coalition if repeated nationwide. In the absence of a history of national 2PPs from the same pollster, it is difficult to say. It's notable that this pollster uses respondent-allocated preferences, though that doesn't seem to make a great difference to the results in this case.
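For context on that 1.7% figure: a 95% margin of error near 1.7 points corresponds to a pooled sample in the low thousands. A minimal sketch of the standard calculation (the sample size of 3,300 here is my back-of-envelope inference from the quoted margin, not a figure from the release):

```python
import math

def moe_95(n):
    """95% margin of error, in percentage points, for a proportion near 50%."""
    return 100 * 1.96 * math.sqrt(0.25 / n)

print(f"{moe_95(3300):.1f}")  # a sample of ~3,300 gives roughly a 1.7-point MoE
```

Note that the margin of error on any state or margin-band subsample is considerably larger than the pooled figure, which matters for the breakdowns discussed below.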
Choose Your Own Adventure
Another problem with this poll release doesn't relate to the data, but rather to the strange way seat gains and losses are projected, and the media commentary generated by it. The poll provides seat change findings in five different forms in Table 3: assuming uniformity (18 seats lost by Labor), assuming uniform swing by party holding seat (-25), assuming breakdown by party and margin (-17), assuming breakdown by country/city status (-17) and lastly, assuming breakdown by state (-7).
But the data available in some of these breakdowns falsifies the assumptions of the others, raising the question of why they were included at all (at least without heavy disclaimers). So, for instance, we have the Business Spectator writing "the federal Labor Party is at risk of losing at least 18 seats in the election to be held by late 2013, as Labor faces especially declining popularity in the most volatile seats". But that is not actually what the poll shows at all. Its state by state breakdown says Labor's popularity is improving in Queensland marginals, while in New South Wales marginals, instead of losing by very little, Labor is losing massively (which for a given 2PP means potentially more votes to spread around in other marginals elsewhere).
If we take the -7 breakdown by state, and throw in, say, Bass and Braddon as well, Labor ends up on 63. But that's about where we are anyway on current polling, if we just assume uniform national swing (which leaves Labor with 62) or use the most credible methods that don't specifically poll marginals (Bludgertrack projects 62 Labor off a 2PP of 47.5). If Labor cannot lift its polling above the current level as suggested across all pollsters, it is certainly going to lose and would be very lucky to manage a seat tally above the low sixties (indeed 62-3 seats might be a little optimistic if that was the election day 2PP). So on that basis, the poll isn't showing that Labor's troubles in marginal seats exceed its problems generally.
If someone wanted to argue that the breakdown by party and margin was what mattered, they would have to argue that there was something that made a Labor +1% seat swing totally differently from a Coalition +1% seat, although nothing of that sort has been detected before. In fact, the bad results in the Labor 0-3% seats and the small swings in the Coalition 0-3% seats seem largely driven by differences in state makeup. The former include five NSW seats, three Victorian ones and two Queensland ones, while the latter include six Queensland seats (where Labor is supposedly doing well) and only one in NSW. Once this is taken into account, Labor starts winning some of the Coalition 0-3% seats and no longer loses all of its own, and the -17 breakdown by party and margin goes straight out the window.
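The state-composition point can be illustrated with a toy calculation. The state swings below are the poll's reported figures for Queensland and NSW marginals; the seat names and margins are invented purely to show the mechanism:

```python
# Poll's reported state swings to Labor in marginals (percentage points).
state_swing_to_labor = {"QLD": 2.8, "NSW": -12.2}

# Hypothetical seats: (name, state, Labor margin; negative = Coalition-held).
seats = [
    ("Labor NSW seat", "NSW", 1.0),
    ("Labor Qld seat", "QLD", 1.0),
    ("Coalition Qld seat", "QLD", -1.0),
]

# Two seats on identical +1.0 Labor margins end up on opposite sides,
# and a Coalition-held Queensland marginal falls to Labor.
for name, state, margin in seats:
    post = margin + state_swing_to_labor[state]
    holder = "Labor" if post > 0 else "Coalition"
    print(f"{name}: {post:+.1f} -> {holder}")
```

In other words, pooling seats by party and margin throws away exactly the information (state) that the poll itself says is driving the differences.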
An Ancient Genre
The marginal-seat-scare poll (in which a government is supposed to be doing much worse than its overall polling because of an aggregated bleak picture in marginals) is a story as old as the hills. It usually turns out to be nonsense come election day, and in many cases (like this one) the poll doesn't even substantiate the sensational narrative. In the current cycle this theme is likely to keep emerging, often closely tied to the story of Labor's preoccupation with Western Sydney (for more on this see Mumble's Where Would We Be Without Western Sydney? and similar pieces.) There will always be exceptions, but historically, if a government can win the 2PP, the marginals will take care of themselves.
Full Seat Betting Opens
Sportsbet this week opened a market on all or nearly all federal seats. This represents a much earlier opening to this kind of betting than has normally been the case in the past.
The opening release showed the Coalition favourite in nine of the ten Labor seats under 3%, excepting Petrie (Qld, 2.5). The Coalition was also favoured to grab Lingiari (NT, 3.7), the dreaded Eden-Monaro (NSW, 4.2), Parramatta (NSW, 4.4), Dobell (NSW, 5.1, currently Ind-held), Bass (Tas, 6.7), Braddon (Tas, 7.5) and Lyne and New England, for a total of 90 (reported as 91, possibly because of a clerical error in one seat). Three seats are tied. As usual there is a mismatch between the number of seats the Coalition is favourite in and the total number it is expected to win, with 71-80 seats and 81-90 seats equal favourites in the latter category, and 91-100 seats a little behind. Historically, when this mismatch has existed close to the election, the totals betting has been closer to the mark (2010 was a good case of this.)
At this stage, these odds probably represent the company's best modelled projections of the chances in each seat, and not the supposed collective wisdom of the markets, at least not until decent volumes are traded on most individual seats. A few elections ago betting markets seemed to be rather good predictors, but an increased interest in election punting has probably reduced their effectiveness. They simply represent the combined views of people who bet on elections, and if mistaken poll analysis is widespread in the pundit community then it will likely filter through to odds betting as well. After all, Labor was even more heavily favoured at this stage in 2010 than the Coalition is now, and ended up winning by the proverbial nostril. I will not be posting detailed betting advice on this site, though now and then you may see me say that in my view something is too long or too short.
Update (8 Feb): Also see the Mumble article here for some more on problems with this JWS release; especially a very small sample of younger voters.