Oh no, not again ...
On the day after the 2019 federal election I did the most media interviews I have ever done in one day, eleven. Eight of those were entirely about the same thing: the polls being wrong. That day and in the coming days journos from as far afield as Japan and from vague memory Switzerland wanted to know how Australia had gone into an election with Labor unanimously ahead about 51.5-48.5 and come out with the Coalition winning by the same amount. Was this part of a global pattern of polls being increasingly broken and underestimating the right? (Answers: no and no - it was just a shocker by Australia's high standards).
The day after the 2025 federal election it was obvious something had gone astray with polling again, and by something near the same amount, but the media reception was muted. I think I did only one interview where the polling was even part of the report's initial focus. The ABC did an article about the polling, but it was so quarter-arsed that it omitted four final polls, initially got the 2PPs of four others wrong, and even when "corrected" continues to this day to contain errors about what the final poll 2PPs were. There were a few other articles that were better.
When there's a general polling failure (polling picking the wrong winner) that's big news, especially if polls picked the left to win, because that plays into the outdated (and always overrated) "Shy Tory" myth about voters frequently lying to pollsters. When there's just a general polling error (polling picking the correct winner but with a large miss on the margin) hardly anyone cares, especially if polls picked the left to win (see New Zealand 2020, a worse miss than Australia 2025). Even I have been so bogged down in the extraordinary Reps postcount followed by yet another snap election in my home state that it's taken me this long to write anything detailed about the second-worst set of Australian final polls in the last 40 years. This wasn't quite as bad as 2019 and there were some bright spots - but it wasn't that much better either. (This said, by global standards 2025 wasn't terrible. On the major party primary vote gap - which is the international standard for polling errors - it's just an average error, but Australia is used to better.)
A brief note on 2PP conventions for this article. Whether the 2PP was really 55.22 to Labor (the AEC's official figure) or 55.26 to Labor (my estimate accounting for the Bradfield situation, re which more soon) has no real impact on the conclusions or rankings here, since everyone was below that 2PP. However, for the time being I use the AEC's figure. In the event of the AEC releasing figures for Bradfield calculated by the normal methods later, I may edit this article to include a revised figure.
Final polls
In a very richly nationally polled election ten pollsters ended up releasing a final poll. This included two pollsters that had not released any other leadup voting intention polling, one of which was previously unknown to me.
Pollsters are judged a lot by final polls, but final polls are only a single poll (so there's a degree of luck involved as to whether it's a good one), and final polls are taken at the time when there is the most data around for any pollster who might be tempted to herd their results in some way to base such herding on. However, the final poll stage is the only stage where, at least in theory, there's an objective reality to measure the poll against. A poll taken a few days out from election day should have a good handle on how people will vote, even more so these days when so many people have already voted by that point. Any attempt to determine the accuracy of polls taken well before election day requires much more debatable assumptions (but even so, a seat poll taken a month out shouldn't be wrong by double digits).
It's also important to note that often which poll comes out as the most successful depends on how you choose to measure it. A few comments going into the following table:
1. I consider 2PP estimates to be a very important part of the polling service in Australia and therefore weight them heavily in my assessment of pollster performance. This was another election at which, for all the trendy nonsense about the demise of 2PP, two-party swing overwhelmingly determined the shape of the outcome. Labor gained very slightly more seats than the 2PP pendulum predicted for the 2PP swing they got, and the other net changes (three Labor gains from the Greens and one net Independent gain from Coalition) were very minor. Two of the three Labor gains from the Greens were also predictable in advance if one knew the national primary vote swings.
2. YouGov released a final MRP and a final public data poll. The final MRP got far more attention but a poll taken from 1-29 April, where the pollster later released a poll taken 24 April-1 May, should not be considered a final poll. I include it in the table but not in the rankings, and cover it separately.
3. I use the same four indicators as in other recent accuracy articles (a quick sketch of how these are computed appears after this list):
AVE2 (headline): The average of (i) the raw difference between the poll and the result on 2PP and (ii) the average raw difference on the Labor, Coalition, Greens, One Nation and other primaries - that is, a 50-50 weighting of the 2PP difference and the primary differences.
AVE: The average raw difference on the five primary groups, excluding 2PP
RMSQ2: The same as AVE2 but using root mean square error, which more harshly punishes a large miss on one figure than a number of small ones on others
RMSQ: The same as AVE but using root mean square error.
4. I do not consider any poll's stated margin of error. Instead I consider any miss of 3% or more on a figure significant and highlight it. One reason for this is that judging polls by their stated margin of error discourages pollsters from releasing large final polls. This is an issue that comes up with the 2019 final polls. Apologists for the 2019 poll failure often say that the polls were within their margins of error, as if that failure was purely random, but it was absolutely not. Firstly, that claim ignores the fact that random misses within the margin of error might explain one or two polls being about 3% out in a given direction, but not 18 in a row. Secondly, and more relevantly here, it ignores the fact that several of the 2019 polls bumped up their sample size for their final poll or final few polls and had a lower claimed margin of error than normal, and hence were individually outside their claimed MOEs. Some of the 2019 final polls that were outside their published MOEs would have been inside them had their sample sizes been smaller and the numbers been the same, but that would not have made those polls any more useful. I want pollsters to release large final polls because they are more likely to be accurate, so I apply a flat 3% as a benchmark for a significant miss. (Another thing here is that pretty much everything said by pollsters about the margins of error of their polls is not really true anyway, though some are closer to the truth than others.)
5. Where a pollster published multiple 2PP estimates (such as respondent and last-election preferences) I have used whichever figure they used as the headline, or failing that whichever figure was reported first by their client.
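For those who want the computational detail, here is a minimal sketch of how these four indicators can be computed, assuming polls and results are expressed in percentage points. The result figures are approximately the 2025 numbers used in this article (rounded); the example poll is invented, and exactly how the 2PP term is folded into RMSQ2 follows my reading of the definitions above.

```python
import math

# Approximate 2025 results (Labor, Coalition, Greens, One Nation, others),
# rounded, for illustration only.
RESULT_PRIMARIES = [34.6, 31.8, 12.2, 6.4, 15.0]
RESULT_2PP = 55.22  # Labor two-party preferred

def accuracy_scores(poll_primaries, poll_2pp):
    """Return (AVE2, AVE, RMSQ2, RMSQ) for one final poll, in points."""
    prim_errs = [abs(p - r) for p, r in zip(poll_primaries, RESULT_PRIMARIES)]
    tpp_err = abs(poll_2pp - RESULT_2PP)
    ave = sum(prim_errs) / len(prim_errs)          # primaries only
    ave2 = (ave + tpp_err) / 2                     # 50-50 with the 2PP miss
    rmsq = math.sqrt(sum(e * e for e in prim_errs) / len(prim_errs))
    # RMSQ2: root mean square of the 2PP miss and the primary RMS,
    # weighted 50-50 (my reading of the AVE2 analogue above).
    rmsq2 = math.sqrt((rmsq ** 2 + tpp_err ** 2) / 2)
    return ave2, ave, rmsq2, rmsq

# An invented final poll: Labor 32, Coalition 34, Greens 13, ON 8, others 13.
print(accuracy_scores([32.0, 34.0, 13.0, 8.0, 13.0], 52.0))
```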
Here's the usual table. I've included a column, Last El, which is my conversion of each poll's published primaries to a 2PP using 2022 election preference flows, so we can see which polls were possibly most off with their preferencing assumptions. In most cases it is hard to be sure about this because most polls do not publish primaries to one decimal. Bold indicates the closest result, blue indicates within 1% and red indicates outside 3%. The breakdowns some polls provided for TOP, IND etc are shown but are not used for the accuracy scores. Notably, the polls that polled Independent as a breakout category all did pretty well at it.
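For readers wanting to replicate the Last El column, a minimal sketch of the conversion follows. The preference flows below are placeholders, not the actual 2022 flows; a real conversion would use the published 2022 flow of each group's preferences to Labor.

```python
# Placeholder preference flows to Labor (fraction of each group's vote
# ending with Labor after full distribution) - NOT the actual 2022 flows.
FLOWS_TO_ALP = {"ALP": 1.00, "LNP": 0.00, "GRN": 0.85, "ON": 0.35, "OTH": 0.50}

def last_election_2pp(primaries):
    """Estimate Labor's 2PP from published primaries under fixed flows.

    primaries: dict of group -> primary vote in percent (should sum to ~100).
    """
    return sum(share * FLOWS_TO_ALP[group] for group, share in primaries.items())

# Invented poll: ALP 33, LNP 33, GRN 13, ON 8, OTH 13
print(round(last_election_2pp(
    {"ALP": 33, "LNP": 33, "GRN": 13, "ON": 8, "OTH": 13}), 1))  # 53.4 here
```

Because most polls publish primaries only to whole numbers, rounding alone can shift such an estimate by a few tenths of a point, which is part of why the Last El figures are indicative only.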
The overall average error on the final poll published 2PPs was 2.89%, more or less identical to the 2.93% error in 2019. The components of the 2025 error I estimate as follows (though rounding can also contribute):
* Major party primary errors (Labor too low and Coalition too high) 2.51 points
* Non-major-party primary errors (eg One Nation too high) 0.27 points
* Estimated preference flow to Labor too low 0.11 points
So in the end, the lion's share of the error is the same as in 2019: the major party primary votes were wrong, while preference modelling had little impact.
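To make the arithmetic concrete, here is one way such a decomposition can be carved up, using the identity that (with primaries summing to 100 on both sides) the 2PP gap attributable to primaries equals the sum of each group's primary miss valued at its preference flow to Labor; whatever is left over is down to the poll's own flow assumptions and rounding. All figures and flows below are placeholders, not the actual 2025 numbers, and this is only one plausible way of doing the accounting.

```python
# Placeholder "actual" results and flows - for illustration only.
ACTUAL_PRIM = {"ALP": 34.6, "LNP": 31.8, "GRN": 12.2, "ON": 6.4, "OTH": 15.0}
ACTUAL_2PP = 55.22
FLOWS = {"ALP": 1.00, "LNP": 0.00, "GRN": 0.85, "ON": 0.35, "OTH": 0.50}

def decompose(poll_prim, poll_2pp):
    """Split a poll's 2PP error into major, minor and flow components."""
    contrib = {g: (poll_prim[g] - ACTUAL_PRIM[g]) * FLOWS[g] for g in FLOWS}
    majors = contrib["ALP"] + contrib["LNP"]
    minors = contrib["GRN"] + contrib["ON"] + contrib["OTH"]
    # Whatever remains is down to the poll's own preference-flow
    # assumptions (plus rounding).
    flow_resid = (poll_2pp - ACTUAL_2PP) - majors - minors
    return majors, minors, flow_resid

poll = {"ALP": 32.0, "LNP": 34.0, "GRN": 13.0, "ON": 8.0, "OTH": 13.0}
print(decompose(poll, 52.0))  # (majors, minors, flow/rounding residual)
```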
At times there were large differences between 2022 election preferences and the preference flow estimates many pollsters were using, but by election day these differences were on average minor, and there was a very small shift in preference flows to the Coalition at this election anyway. However, YouGov's final poll underestimated the flow to Labor, to a degree that surprised me even given the partly respondent-sampling-based adjusted preference flows they had been using. Had YouGov used last election preferences in this poll, they could (depending on rounding) have got a 2PP of around 54.3, which would have made them easily the most accurate final poll. As it was, the closest of a mediocre bunch was Redbridge, but with an AVE2 score that was beaten by all five final polls in 2022 - and the polls in 2022 were good on the whole but not great. Redbridge was the only final poll to have the major parties tied on primaries rather than wrongly having the Coalition ahead; Labor finished ahead on primaries by 2.74%. While which of my indicators works best is always debatable (eg AVE2 puts Resolve just ahead of Newspoll but Resolve had 3+ point misses on both major party primaries), Redbridge was first or equal first on all four and was clearly the best final poll.
A few of the leadup polls, however, were much closer than any of the final polls, especially the Morgan of 14-20 April (55.5 to ALP), which would have scored 1.04 on AVE2.
As for the shift that some pollsters expected in preference flows, the expected large shift in One Nation preference flows used by Newspoll among others actually did happen (indeed it was slightly stronger than in the Queensland state election). However it was mostly counter-balanced by a smaller shift to Labor in flows from Greens and independents - only a few percent, but Greens and independents combined got three times as many votes as One Nation did, so that added up.
Five pollsters missed by more than 3% on the 2PP and five also (not all the same five) missed by more than 3 points on the Labor primary. That's not good.
Tracking
When final polls are this far out at the end it's not much use to say which tracked the best, as the only way to do it is to subject them to a jury of their peers, but in this case their peers were wrong at least somewhere. If either Morgan or YouGov had kept getting the sort of readings for Labor they were getting earlier then they would have been very clear winners here, but that didn't occur. (Morgan does deserve a nod here for the way they were first to pick up that Labor was surging; I suspect that some of their high polls for Labor were over the top at the time taken but that several of them were actually right.) There was one form of tracking that did rather well and that was the Redbridge/Accent marginal seats tracking poll. This was an unusual and interesting experiment in public polling where Redbridge and Accent mimicked the marginal seats tracking polls used by major parties, with a sample of about 1000 voters across 20 classic-2PP marginal seats that in 2022 had averaged about 51-49 to Labor. Starting at 48-52 in early February, this poll tracked upwards to 54.5 in the second and third last waves before finishing at 53. Shame about the last one (caused by an overestimate of minor party votes) because the 2PP in those seats was in fact 54.8 to Labor, but even at 1.8 points short the final tracking poll was still closer to the 2PP pin than all of the final national 2PP polls. The tracking poll also provided value by showing that Labor was doing at least as well in the marginals covered as it was overall, and hence was unlikely to be harmed by a bad distribution of support of the sort that had been on the table at some points.
What was unacceptable with this poll was the misreading of it by the Herald-Sun's James Campbell, who shouldn't be allowed to interpret polls in future. After the second of the two 54.5s Campbell wrote "The poll points to a likely minority Labor government dependent on independents but with so many voters still undecided or there is still a chance the Coalition could claw back enough ground to force Mr Albanese to rely on the Greens to retain office."
As I wrote in response regarding what 54.5 to Labor in these marginals meant: "(I'll give James Campbell a very big hint; it's not a hung parliament, it's the red team in the bar at 7:45 pm and the rest of us spending the night going "Bonner? Is that actually a seat?")" We'd actually stopped talking about Bonner already by about 7:45; it was among the first to fall.
Campbell was in the tank again with the final Redbridge 53-47 national poll, which he claimed showed "Anthony Albanese easily returned to the Lodge in a minority and with a small chance he could govern in his own right." As I pointed out at the time, "If Labor wins 53-47 they will probably increase their majority (my model has +4 seats median)"; by uniform swing from the actual results Labor would actually have gained even more than that.
MRPs
Several MRP models by YouGov and Redbridge/Accent were released during the leadup but only the final YouGov one was fresh enough to warrant serious attention in this article (the last Redbridge one was very old rope by the time it emerged and was generally ignored). When YouGov came out with a median forecast of 84 seats to Labor, I understand the gallery response was "well that's the end of YouGov then" - thinking that 84 was too high and that a result with Labor on 74 seats or so would mean the pollster would never be taken seriously again; not so fast. The YouGov MRP, with a 2PP of 52.9, correctly projected the winners of 136/150 seats (the 2022 YouGov MRP done by staff who are now elsewhere scored 138/151, remarkably getting only four classic seats incorrect). The 2025 model missed only Labor wins over the Coalition in Aston, Bass, Dickson, Forde, Hughes, Leichhardt, Moore and Petrie, Labor's wins over the Greens in Griffith and Melbourne, and it incorrectly projected Wannon, Cowper, Monash and Goldstein as Independent wins (the Coalition won all these).
There have been suggestions that YouGov got lucky, in that Labor, in winning a majority much greater than their model projected, also had a vote share much greater than their model's, and could not have won 84 seats if YouGov's primaries had been correct. YouGov had Labor on a primary vote of 31.4 and a 2PP of 52.9, very much lower than the 34.56 and 55.22 Labor actually got.
There are various ways of looking at this. One is that if I take the actual results and uniformly adjust them for national 2PP and primary differences, Labor would not have won Bullwinkel, Bendigo, Menzies, Petrie, Solomon or Forde vs the Coalition, Wills and Richmond vs the Greens or Fremantle or Bean vs independents. That's ten seats which is exactly the difference between what YouGov had for Labor and the result. Another is to take the YouGov 2PPs and adjust them uniformly to match the actual result, and in this case Labor only gains another six seats - this suggests that YouGov's model could have even undershot how many seats Labor would win off 52.9% 2PP, not overshot. All up it seems very possible Labor could have won 84 seats off a primary vote of 31.4 with a 2PP of 52.9. My own pre-election seat probability model put Labor on 81.6 2PP wins for that 2PP, but it did not take account of Labor getting higher average 2PP swings in marginals than safe seats (which they did). It's actually possible that if the 2PP was 52.9, YouGov's model could have nailed the Labor seat total but got more individual seats wrong (say twenty) than it did.
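A minimal sketch of the uniform-swing arithmetic used in this paragraph: shift every seat's Labor 2PP by the gap between the model's national 2PP and the actual one, then recount the winners. The seat names and margins below are invented for illustration; real swings were of course not uniform (marginals swung harder than safe seats, as noted).

```python
ACTUAL_NATIONAL_2PP = 55.22
MODEL_NATIONAL_2PP = 52.9   # the YouGov MRP's national figure

# Invented seat-level Labor 2PPs for illustration.
seat_alp_2pp = {"Seat A": 52.1, "Seat B": 50.8, "Seat C": 57.3, "Seat D": 51.9}

shift = MODEL_NATIONAL_2PP - ACTUAL_NATIONAL_2PP  # -2.32 points
adjusted = {seat: tpp + shift for seat, tpp in seat_alp_2pp.items()}
print([seat for seat, tpp in adjusted.items() if tpp > 50])
# Seats Labor would still win if the national 2PP had in fact been 52.9.
```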
Of course the model had some very inaccurate outputs (such as huge 60+ 2CP wins to independents in Wannon and Cowper, both of which the Coalition retained) but overall the MRP was fairly successful despite landing ten seats short of Labor's landslide. It pointed to a potential for Labor to increase its majority if Labor could get even a small 2PP swing, something I was also saying but which was very hard to get through the wall to wall "inevitable Hung Parliament" media bulldust.
Seat polls - some absolute shockers!
Seat polling in Australia hasn't been good for a very long time, but this election saw seat polls that were a mix of so-so performances by some firms and many others that were among the very worst I've ever seen. It is difficult to evaluate some of this polling because of insufficient detail in media reporting.
* YouGov released a set of ten regional seat polls in late April that were criticised, including here, for very small sample sizes; some turned out to be accurate while others were a long way off; all but two still had the right winner. I was not able to find 2CP results for all of them but on average they underestimated the Coalition primary by 1.9 points, though in some seats (notably Dickson) they overestimated it. However, they underestimated the Labor primary in every seat where Labor was a top-two contender, by an average of 5.7%. They had the combined independents primary in Calare close enough to perfect but overestimated Alex Dyson in Wannon by 4.3% (a more serious miss on the Liberal primary there causing an incorrect projection of a Dyson win). Overall the average miss on a major contender's primary in these polls was 5.1%; had these been perfect random samples it would have been around 2.7% (see the sketch after these bullet points). These samples did correctly pick comfortable Labor wins in Braddon and Lyons, although the samples for those seats were larger than the others.
* There were a range of uComms polls commissioned by various sources of which reports surfaced, though rarely the full results. As it is likely sponsors chose strategically which ones to tell media about, it's hard to draw any conclusions from them. There are also several reported 2CPs for Climate 200 sponsored polls that seem likely to have been uComms but where the reporting media were too lazy to say who did the poll. Of the known knowns, either during the campaign or in the leadup close to it, uComms appear to have correctly predicted Labor's wins in Dickson (twice) and Lyons with underestimates of 4%/4.3% and 10.5% on the margins, to have nailed the winner and margin in Wentworth (only 0.4% off!), to have Deakin tied 50-50 (Labor won 52.8-47.2) and to have incorrectly had Zoe Daniel winning in Goldstein 54-46 (she lost by 0.1%). Finally there was a Brisbane uComms that had Labor winning 56-44 if over the Greens, but the numbers published (which had Labor 7.9 points below what they got and the others 2.2 and 1.7 above) didn't have Labor quite making the final two. Labor did make the final two and won 59-41, so at least the poll's projection that the LNP would lose no matter what was right. On average the 2CP error of polls attributed to uComms was just over 4% which for seat polls is pretty reasonable.
Among Climate 200 related polls that may or may not have been uComms there were ones that had them winning Bradfield 52-48 (won by 26 votes), losing Flinders 49-51 (47.7-52.3), winning Cowper 53-47 (lost 47.5-52.5) and winning Forrest 51-49 (quite possibly true on the 2CP, we'll never know!) The Forrest case was reported with primaries of Liberal 34 IND 20 but without any reference to the ALP primary so it's not clear if Labor were in second (and it's also not clear if these are raw primaries or with undecided reallocated). The actual primaries were Liberal 31.8 IND 18.3 so the poll was quite close. Overall the reported C200 results sound reasonably accurate on average, but not of the class of their very accurate Redbridge seat polls from 2022.
* Aggregated seat polls weren't any magic solution. DemosAU had the LNP 2PP in five Queensland seats at 53-47 on April 18-23; the LNP lost four and saved Longman by a whisker, with an average 2PP across the five of 47.2. A Freshwater batched teal seat poll from a few weeks before the election was called had the incumbent teals on average getting 51-49 2CP; they actually got 54.8 (which could well have just reflected voting intention change since the polls were taken).
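On the "perfect random samples" benchmark mentioned in the YouGov item above: for a simple random sample, the expected absolute miss on a vote share is roughly the standard error times sqrt(2/pi). The sketch below assumes a primary around 40% and a sample size around 200; the actual sample sizes are not restated here, so treat both inputs as assumptions.

```python
import math

def expected_abs_miss(p, n):
    """Expected |poll - result| on a primary, in points, for a simple
    random sample of size n of a true share p (normal approximation)."""
    sigma = math.sqrt(p * (1 - p) / n)           # standard error (fraction)
    return 100 * sigma * math.sqrt(2 / math.pi)  # mean absolute error

print(round(expected_abs_miss(0.40, 200), 1))  # ~2.8 points at these inputs
```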
And now among those that were definitely poor:
* KJC Research was reported as having Labor losing Tangney, Blair and Richmond and narrowly retaining Hunter (albeit with an extremely high 2CP-undecided vote in Richmond) all on April 24. These polls were way off in the classic seats (underestimating Labor by an average 7.7% 2PP with little variation) and in Richmond the Greens missed the final two.
* JWS in mid-April had Elizabeth Watson-Brown (Greens) losing Ryan with a primary vote of 13% and the LNP winning the 2PP 57-43. Watson-Brown polled 30.2% and won; for what it's worth Labor won the 2PP 52.4-47.6. JWS had Labor winning Brisbane only 51-49 after eliminating Stephen Bates; Labor won 59-41. They also had Labor winning the 2PP in Griffith by the same margin if the Greens dropped out (they didn't, but Labor's 2PP was 65.9). JWS polls prior to the election being called also ranged from off the mark to wildly so, with the sole exception that they (alongside Freshwater) correctly had Zoe Daniel losing Goldstein in March, albeit off primaries with Zoe Daniel's vote too low and Labor's too high. Some of the differences to the results (eg in Bullwinkel and Curtin) were too extreme for voting intention change to be a likely cause.
* Insightfully's seat polls in mid-March were taken before the campaign proper started, so strictly they could also be excluded from discussion, but these polls, said to portend the Greens being reduced to one seat (well that happened, just not the one expected!), had the Liberals over what they actually got in every Greens target seat for which figures emerged, by an average of 5.1%, and Labor below what they actually got by an average of 10.2%.
* During the Dickson leadup Freshwater at one stage had Peter Dutton up 57-43; he lost 44-56.
* And finally the worst of all, Compass Polling, on which I have imposed a FiveThirtyEight-style ban for this effort, meaning that if Compass or anyone who I find out worked for them in the 2025 leadup ever does a federal poll I will not aggregate it. The "poll" of McMahon had errors of 31.2%, 26.5% and 6.5% on the three leading vote getters and - as a result of a ridiculous presetting method - had a candidate in a winning position who finished third without even making double figures on primaries. The Australian was stupid enough to report this nonsense semi-credulously and should take a few years off pretending to be a newspaper in shame.
Mention should also be made of some seat-specific preference flow polling that was very wrong. Much was made of JWS seat polls that had minor right parties (ON/TOP/Libertarian) flowing at 85-90% to Coalition in certain seats. This was never going to happen - JWS overestimated both support for these parties and their preference flows. In Ryan JWS had the minor right on 10% with an 85% split, but the result if including GRPF was 5.91% with a 78.6% split. In Whitlam, 13% with a 90% split but the result was 11.6% with a 75.7% split. In Werriwa (a result I found particularly unbelievable because of the weak flow there in 2022) 18% also with a 90% split. The result was just 9.24% with a 67.7% split.
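To see how much such an overestimate is worth, a bloc's contribution to the Coalition's 2PP is simply its vote share times its flow. The sketch below uses the Ryan figures above; note the net 2PP effect is smaller than the raw gap, since the extra vote share the poll gave the bloc had to come from somewhere and some of it would have flowed to the Coalition anyway.

```python
def coalition_contribution(bloc_share, flow_to_coalition):
    """Points of Coalition 2PP contributed by a minor-right bloc."""
    return bloc_share * flow_to_coalition

jws_ryan = coalition_contribution(10.0, 0.85)      # 8.5 points
actual_ryan = coalition_contribution(5.91, 0.786)  # about 4.6 points
print(round(jws_ryan - actual_ryan, 1))  # ~3.9 points overstated from this bloc
```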
Why were the national polls so far off?
We're two cycles out from the 2019 fail and some lessons have been learnt, but some haven't. At this stage I can only suggest some possibilities as to what happened this year; there is really no firm evidence on the causes of this year's error.
The 2019 failure seems to have been mainly caused by primitive weighting and targeting practices, compounded by herding which meant that the polls that year failed uniformly instead of there being a few lucky winners. The extreme lack of transparency of the industry then made it hard to say exactly what polls were doing wrong. After improving in the 2019-22 cycle, the level of average transparency has been declining in the 2022-5 term. Because the Nine stable and the AFR failed to insist that the pollsters they hired were extensively transparent, some of the other pollsters decided they couldn't be bothered being transparent either and through the term the transparency level of polling (excepting Pyxis/Newspoll and not many others) declined. Only four of the ten final pollsters this year are now listed as Polling Council members.
There's been a lot of publicity about how the Liberals' internal polls by Freshwater were a lot worse than the public national polls, and one aspect of the blowback here has emphasised why low transparency continues to be a problem in Australian polling. In an article immediately after the election which acknowledged being outperformed by "some" of Freshwater's competitors (try nearly all of them), Michael Turner wrote that Freshwater had overestimated the tendency of "Labor No" voters (ALP supporting voters who voted No to the Voice referendum) to switch to the Coalition. This has copped heavy criticism from Liberal sources who have said that what Freshwater was doing with Voice weighting was silly, as if Freshwater simply assumed without testing that Voice vote was a salient predictor. It's actually not inherently daft to use Voice voting as a weight - indeed YouGov did it without any of the same adverse consequences. It could, for instance, be a useful bulwark against a sample with too many Yes voters that might be politically overengaged and skew left. But I don't know whether what Freshwater did with Voice weighting was sensible but unlucky or silly from the start, because it wasn't publicly documented, at least not in a non-paywalled place. I had no idea Freshwater were using Voice data and I still don't know exactly what their method was. Pollsters who still do not tell the public enough about what they are doing - which is still most of them - are in a poor position to defend themselves when things go pear-shaped.
Typically when polls are off, late swing is blamed by pollsters, who will say the numbers were right when the polls were taken but voting intention moved later. In this case there was no relationship between how old a final poll was and how accurate it was, which counts against that to a degree. Moreover, of the final polls that had a recent precursor by the same company, the average swing from the previous poll was tiny. There was however a substantial difference in 2PP swing to Labor between votes cast on the day (4.23%) and votes cast before the day (2.65%). If it is assumed that that difference is entirely down to late swing and that the swing in voter intention matched the swing in votes cast before the day, then that would excuse just under 0.5 points of the error. It could conceivably be more than that, because the average data age of the final polls was six days, and about two-thirds of the prepoll came in the final week. On the other hand, some or even all of the difference could be unrelated to late swing. On the day voters are becoming less and less representative, and the 2022 election was odd in that some voters who normally wouldn't vote by post did so because of COVID. My view overall is that late swing was probably a minor component of the error.
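As a back-of-envelope check on the "just under 0.5 points" figure: if polls captured only the pre-election-day swing, the excusable error is the on-the-day vote share times the gap between the two swings. The on-the-day share below is an assumption (roughly 30 per cent), chosen to be consistent with that figure rather than taken from official turnout data.

```python
ON_DAY_SWING, PRE_DAY_SWING = 4.23, 2.65  # 2PP swings to Labor (points)
ON_DAY_SHARE = 0.30  # assumed fraction of votes cast on election day

overall_swing = ON_DAY_SHARE * ON_DAY_SWING + (1 - ON_DAY_SHARE) * PRE_DAY_SWING
excused = overall_swing - PRE_DAY_SWING   # = ON_DAY_SHARE * (4.23 - 2.65)
print(round(excused, 2))                  # ~0.47 points
```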
Herding (final polls producing numbers that were almost identical although by random variation there should have been more variation) was again an issue this election (as again called out by Mark The Ballot, as well as here), though less so than in 2019. Clustering was especially notable in the final polls after a campaign in which results had varied considerably, and it looked especially suspicious that the polls that had very high readings for Labor and the polls that had very low readings converged on middling 2PP readings at the end. The 2025 final poll numbers were very clustered on two-party preferred, but not especially on the primary votes. All ten polls landed between 51 and 53 with eight between 52 and 53. The standard deviation of the final poll 2PPs was just 0.70% (extremely unlikely to happen by chance), but the standard deviation on the Labor primary vote was 1.28% and on the Coalition's 1.44%. Some of this is because some pollsters tend to get lower major party votes than others, meaning that if they underestimate Labor they will overestimate the Greens, which cancels out on 2PP. The standard deviation on my last-election estimates for the published primaries was therefore only 1.07%, and after applying the same rounding as the polls were using it dropped to 1.01%. Still there is quite a bit of clustering to be explained, and it's a common theme in Australia that the final poll 2PPs are much more clustered than the primary votes.
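For those who want to test the clustering claim themselves, here is a rough Monte Carlo sketch. It assumes ten independent polls of 2,000 respondents each (the sample sizes are an assumption) drawn from a common true 2PP, with pure sampling error only; real polls also differ in method, timing and weighting, which should spread them out further, so if anything this understates how suspicious the observed clustering is. Rounding of published 2PPs is ignored.

```python
import math, random, statistics

TRUE_2PP = 52.5      # assumed common underlying Labor 2PP (percent)
N_POLLS, SAMPLE_N, TRIALS = 10, 2000, 100_000
SIGMA = 100 * math.sqrt(0.525 * 0.475 / SAMPLE_N)  # ~1.1 points per poll

hits = 0
for _ in range(TRIALS):
    polls = [random.gauss(TRUE_2PP, SIGMA) for _ in range(N_POLLS)]
    if statistics.stdev(polls) <= 0.70:   # observed SD of the final polls
        hits += 1
print(hits / TRIALS)  # a small fraction under these assumptions
```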
The classical herding theory is that pollsters are copying each other (or copying aggregates), making sure their results are similar to those of other pollsters so that they don't end up being the only pollster who is way wrong. However, clustering can happen in a variety of ways, including pollsters being influenced not by other polls but by perceptions about what the result "should" or "shouldn't" be. Because polling is still not very transparent in Australia, there is abundant opportunity for pollsters to make subjective decisions about how to apply a particular weighting from poll to poll (for instance) and in this case there would have been very strong priors against the result that we actually got. Pollsters had overestimated the Labor primary in both 2019 and 2022. There is a general pattern that polling errors in Australia somewhat favour Labor, and elections are also usually close and frequently closer than the final polls suggest. Nobody expects a once in 50 years blowout from a leadup with the parties closely matched. While I am not saying that all or nearly all pollsters would have been influenced in this way instead of just taking a method and sticking with it, it only takes a few to create the appearance of herding. The full scale of the result may have been missed because it was unthinkable. Pre-election perceptions that if the polls were wrong they must be overestimating Labor again were an example of the same unthinkability bias, and also of course of Nate's First Rule.
It's also possible the Coalition were simply harder to vote for than polling was able to measure, or that voters when they made their final decision were scared by talk of minority government and decided to give the party that could win a majority numbers to work with (a national version of the Tasmanian bandwagon effect). The funny thing about this is that during the campaign there was no shortage of pollsters willing to lecture us on how vote softness was at record highs but the Labor vote appeared to be the softest - well so what, it always is.
Finally there are two other vulnerabilities in Australian polling. Virtually all the polling is online panel polling, so if there just happens to be a mode effect in online panel polling compared to other methods we wouldn't know about it. Secondly, the use of past vote as a weighting and/or targeting measure is now widespread, and while this is OK for those pollsters who have a record of each respondent's vote from the previous election, polls that capture past vote on an ongoing basis by asking respondents to recall it run big risks of having their numbers look too much like the last election.
I will be rolling out various goodies from the federal election as time and the state election permit. This article has a lot of data in it; any minor errors that are found will be corrected.