Many basic details of the results were covered in my day-after wrap, Decisive Win For Coalition. Apart from Lismore falling back over the line on declaration votes (because the preference flow to the Greens was weaker than the flow to Labor), no seats changed hands in the postcount, and in the end the swing did not increase significantly.
Vote Share, 2PP And Preference Change
The primary votes were 45.63% Coalition, 34.08% Labor, 10.29% Green and 10.00% Others (including 3.24% Christian Democrats and 2.02% No Land Tax with most of the rest for independents). The 2PP was 54.32% to the Coalition, a 9.9% swing.
Antony Green has posted a lot of goodies about the result including a very comprehensive new pendulum format that deals well with the three-cornered contests.
Antony says that had preference flows not changed, the Coalition would have won with 56.6%. I cannot replicate this; I get 56.1%.
[update: Antony has since corrected the error.]
Also, as Antony notes, it was not just the flow of preferences from each minor party that changed but also the mix of minor parties, with the Greens taking an increased share of the minor party vote. I grappled with this problem a few times in pre-election modelling, and my final estimate was that this alone (off primaries similar to those eventually recorded) was worth 0.2 points to Labor's 2PP. So my view is that had preferencing behaviour by voters been the same as in 2011 (ie had Greens voters preferenced in 2015 as they did in 2011, and Others voters collectively likewise) the 2PP would have been about 55.9.
Therefore I estimate the impact of changed preferencing behaviour in New South Wales at 1.6 points, compared to about 2.5 points (2.7 points ignoring the increase in the Greens vote as a share of minor party votes) in Queensland.
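The counterfactual 2PP calculation above can be sketched as follows. The primary votes are the actual 2015 figures; the preference-flow splits are illustrative placeholders I have invented for the sketch, not the real 2011 or 2015 flows, and under NSW's optional preferential voting the unallocated remainder of each minor party's vote is treated as exhausting.

```python
def two_party_preferred(coalition, labor, minors):
    """Return the Coalition 2PP (per cent of the two-party total).

    minors: list of (vote_share, frac_to_coalition, frac_to_labor).
    The remainder of each minor party's vote exhausts and is
    excluded from the two-party total (optional preferential voting).
    """
    c_final, l_final = coalition, labor
    for share, to_c, to_l in minors:
        c_final += share * to_c
        l_final += share * to_l
    return 100 * c_final / (c_final + l_final)

# Actual 2015 primaries (per cent) with HYPOTHETICAL flow splits:
# (share, fraction to Coalition, fraction to Labor); rest exhausts.
result = two_party_preferred(45.63, 34.08,
                             [(10.29, 0.06, 0.46),   # Greens (illustrative)
                              (10.00, 0.25, 0.20)])  # Others (illustrative)
print(round(result, 1))
```

Rerunning the same function with a different set of flow fractions (eg 2011-style flows) gives the counterfactual 2PP, and the difference between the two runs is the preferencing-behaviour impact estimated above.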
My polling-based projection here was 53-36-1-3 based off an expected 2PP of 54.0. In terms of Labor-vs-Coalition seats it was exactly correct, but the Greens won two more seats that would otherwise have gone to Labor (in each of which a seat poll had them losing) and the rural indies failed to take any seats from the Coalition.
I haven't been doing too well with predicting rural indie raids, having not factored any into my seat model federally and in Victoria (where they happened) and then having allowed for them in Queensland and New South Wales (where they didn't). Perhaps I'll stick to one approach so I can get them right half the time in future ...
Polling aggregation for this election involved a lot of judgement calls about the proper weighting of under-tried polls, the house effects for those pollsters that seemed to be leaning to the Coalition, and what to do with preferences. One dubious subjective judgement I made was to wimp out of giving the final Morgan-SMS a multiplier of 10 (my usual for polls taken in the last two days during the final week) because of the erratic behaviour of that poll and because the poll was an obvious rogue. Had I kept the high weighting, my primary estimates would have been worse but the 2PP would have been better at 54.2.
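The final-week weighting rule mentioned above can be sketched as follows. The poll entries, base weights and the exact cutoff are hypothetical examples; only the "polls taken in the last two days get a 10x multiplier" rule follows the text.

```python
def aggregate_2pp(polls):
    """Weighted average of final-week 2PP readings.

    polls: list of (coalition_2pp, base_weight, days_before_election).
    Polls from the last two days get their weight multiplied by 10,
    per the weighting rule described in the text.
    """
    total = wsum = 0.0
    for tpp, weight, days_out in polls:
        if days_out <= 2:
            weight *= 10  # last-two-days multiplier
        total += tpp * weight
        wsum += weight
    return total / wsum

# Hypothetical final-week readings: (2PP, base weight, days out).
print(round(aggregate_2pp([(54.0, 1.0, 5), (56.0, 1.0, 1)]), 2))
```

As the paragraph above describes, withholding that multiplier from a single erratic poll can move the final aggregate by a couple of tenths of a point, which is exactly the size of the judgement call involved.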
Other forecasts I saw of this election were generally also pretty accurate, with one exception. (Westland's earlier article on John Black is also highly recommended.) Provided you didn't use 2011 election preferences or some of the more excitable respondent-preference estimates, it was difficult to go too far wrong in seat terms because NSW does not have a lot of naturally close seats.
This was a heavily polled election. I am aware of 22 statewide polls released in the three months leading up to the election: six by Galaxy, six Morgans (five SMS, one phone), three ReachTELs, two each by Ipsos, Newspoll and Lonergan and one Essential.
As usual I give out the gong for best final poll using the Root Mean Square Error formula and weighting the error in 2PP as heavily as the errors for all the parties combined. (I departed from this method in Queensland because there was no variation in 2PP methods and so anyone who got close to the 2PP was getting lucky with dud primaries.)
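The weighting scheme above can be sketched as follows. This is one plausible reading of "weighting the error in 2PP as heavily as the errors for all the parties combined": the squared 2PP miss is given the same total weight as all the squared primary misses together. The example figures are hypothetical.

```python
import math

def poll_rmse(primary_errors, tpp_error=None):
    """Root mean square error for a final poll.

    primary_errors: misses on each party's primary vote (points).
    tpp_error: miss on the 2PP, if scored; it is weighted as heavily
    as all the primary errors combined (one reading of the scheme
    described in the text).
    """
    sq = [e * e for e in primary_errors]
    if tpp_error is None:
        return math.sqrt(sum(sq) / len(sq))
    # Give the 2PP miss the same total weight as all primaries together.
    weighted = sum(sq) + len(sq) * tpp_error * tpp_error
    return math.sqrt(weighted / (2 * len(sq)))

# Hypothetical poll: primary misses of 0.3, 0.2, 0.1 and 0.3 points
# on four parties, and a 2PP miss of 0.0 points.
print(round(poll_rmse([0.3, 0.2, 0.1, 0.3], 0.0), 3))
```

This is why a poll with mediocre primaries but an exact 2PP can still rank well: the 2PP term dominates half the score.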
Here's this election's league table for final polls. The closest result in each category is shown in bold blue and other close results in blue. There are two RMSE figures on the right: one for the primaries alone and one for the ranking including the 2PP figure. The lower the better.
The outcome is a clear win for ReachTEL with an outstandingly close final poll - it did not miss by more than 0.3 on anything and was one of the two pollsters to get the 2PP right to one decimal place. The other poll to nail the 2PP was Ipsos, but in their case much less accurate primaries were cancelled out by a respondent preference flow that was stronger than the one that actually occurred. Galaxy's final poll was also excellent, Newspoll's was pretty good and Essential's was not bad given the small sample size and the age of the data.
As for Morgan SMS, it did seem in the Queensland election that it might have improved after its poor form in Victoria, but nope, the final poll was abysmal. The second-last Morgan taken from March 20-23 would have placed fourth on primaries and fifth with the 2PP considered.
This election saw great variation in 2PP methods from different pollsters after what happened in Queensland. ReachTEL, Ipsos and Newspoll issued respondent-allocated figures but ReachTEL's were the most detailed. I used them in my own modelling for this reason, and because they intuitively seemed plausible. Based on the earlier of the two ReachTEL results the expected change in the 2PP result caused by changed preferencing behaviour was around 1.3 points and based on the second around 2.0 points. Since the actual impact was 1.6 points an average of these two polls was very accurate. The other pollsters that produced respondent-allocated preferences appeared to have much larger estimates of the impact of changed preference flow but because of rounding it was hard to tell whether an apparent 3-point gap was really a bit over 2 or a bit under 4 (for example).
Thus, the differences in preferencing were at the conservative end of available respondent-preference estimates.
Only three pollsters issued enough polls to look at their tracking through the lead-up to the election and two of them were very steady (ReachTEL 53-53-54 and Galaxy 54-53-54-54-54-55). Morgan SMS went 54-55.5-55.5-56-57.5, in general suggesting a much greater degree of pro-Coalition blowout than seems to have actually happened.
The other thing I like to look at in this section is seat polls, but as already discussed neither ReachTEL nor Galaxy (the only pollsters to attempt them) did well here with each scoring only one hit out of three. There has been a lot of discussion of this, especially in the case of Newtown. See here a few @Pollytics tweets (read from bottom up) from a discussion of why the Greens (notorious underperformers compared to public polling) beat their Newtown ReachTEL 2PP by the small matter of sixteen points:
There's a fair bit of talk about what sort of swing might be required for Labor to win in 2019. Noting that the crossbench is entirely left-leaning, I think the question of the swing required to cost the Coalition its majority is a good place to start.
As Antony notes, by the pendulum and assuming uniform swing, Labor would need a 6.6% swing (ie a 52.3% 2PP) to win eight seats from the Coalition next election. We will see a lot about Labor needing such a high 2PP to win over the next four years but it will all be complete nonsense. The reason why is not hard to see on the new pendulum:
Swings are never uniform between seats, and were spectacularly non-uniform at the election we've just seen. In this case, if Labor won eight seats with a swing of 6.6 points in each, then that would mean they had won Goulburn by a whisker, Penrith by 0.4 points and everything else by at least 3.4 points. The Coalition meanwhile would have held two seats by whiskers, another five by 2.1 points or less, and twelve seats by 3.1 points or less (compared to just two such seats for Labor).
In practice, because swings are not uniform, it is likely the Coalition would not get such a friendly split of the very close seats. An average swing of 6.6 points, even assuming a reduced variation in seat swings, would be expected to cost the Coalition ten seats rather than eight. (Note Sydney is not counted in this list.) A swing of 4.9 points with a standard deviation of 4 points per seat would give Labor a 50:50 chance of gaining eight seats from the Coalition. Thus in my view the target figure for Labor to have an even chance of winning in 2019 is nowhere near as high as 52.3 and probably more like 50.6.
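The non-uniform-swing argument above can be sketched with a simple Monte Carlo simulation. The seat margins below are hypothetical placeholders standing in for the pendulum (which I have not reproduced here); only the method - an independent normally distributed swing in each seat around the statewide mean - follows the text.

```python
import random

def p_gains_at_least(margins, n_target, mean_swing, sd,
                     trials=100_000, seed=1):
    """Probability of gaining at least n_target seats when each seat
    receives an independent swing drawn from N(mean_swing, sd)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        gains = sum(1 for m in margins
                    if rng.gauss(mean_swing, sd) > m)
        if gains >= n_target:
            hits += 1
    return hits / trials

# ILLUSTRATIVE Coalition margins (points) for twelve marginal seats.
margins = [2.0, 3.1, 3.5, 4.5, 4.9, 5.5, 6.2, 6.6, 7.0, 8.0, 9.0, 9.8]
print(p_gains_at_least(margins, 8, mean_swing=4.9, sd=4.0))
```

With real pendulum margins, this is the kind of calculation behind the claim that a 4.9-point average swing gives roughly even odds of eight gains: seat-level variation means some seats beyond the uniform-swing cutoff fall while some inside it survive.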
Apart from any ongoing comments on the Upper House, which will probably be posted to the postcount thread, that ends my scheduled coverage of the NSW election. My thanks to all readers and commenters for their interest.