Wednesday, December 25, 2019

Field Guide To Opinion Pollsters: 46th Parliament Edition

It's a tradition on this site to release something on Christmas Day nearly every year, but for those who are done with polls after this year's failure, I realise this gift might fall under the heading of "worst present ever".

Just before the 2013 election I posted a Field Guide to Opinion Pollsters, which has become one of the more enduringly accessed pieces on this site.  However, over time parts of its content have become dated or specific to that election, and with more and more pollsters emerging as others disappear, the thing has got too long. So now I post a new edition early in the life of each parliament, editing it through that parliament as the need arises.  Pollsters not expected to be active in the life of the current parliament will be removed, but the old edition text will remain on the previous page.  For the 2016-2019 parliament see 45th Parliament Edition.

There are a lot of polls about in Australia these days.  But how do they all work, which ones have runs on the board and do any of them deserve our trust at all? This article describes what is known about each pollster and its strengths and weaknesses and includes extensive coverage of general polling issues.



The gold standard for success for an opinion pollster is seen to be that its polls at election time get the result as close to right as possible.  However, some pollsters are little-tested against actual elections, and getting a specific election right is a combination of skill and luck.  In elections where there is a swing on the last day or two of the campaign, a pollster that is actually not polling correctly may have its errors cancelled out by the swing, and hence record a lucky hit.  There is more to being a good pollster than just getting it right at election time - a good pollster should also provide useful data between elections and do so using well-designed questions that are easy to interpret.  And a pollster should also present their data in a way that makes sense and isn't misleading or confusing.

Some Common Themes

There are some general issues that affect a number of pollsters that I should go through before I move onto individual pollsters.  If you just want to look up a given pollster, scroll down, and then you can scroll back to this bit if you see something you want to look up; it might be here.

Australian Polling Council

Following the 2019 polling failure (see below) the Australian Polling Council was formed to improve industry transparency and to lift the standard of polling.  APC members are committed to disclosing methodology details for polls that are intentionally published, and to attempting to improve the standard of polling reporting.

APC members are:

YouGov (who also currently administer Newspoll), Essential, Ipsos, uComms, Lonergan, JWS, Telereach (does MediaReach), Newgate, Omnipoll, RedBridge, Dr Rebecca Huntley

2019 Polling Failure

There was a just-about industry-wide failure of voting intention polling at the 2019 federal election.  Pollsters involved in the failure were Newspoll (then administered by YouGov Galaxy), YouGov Galaxy's polls under its own name, Ipsos, Essential and Morgan.  All these pollsters issued final polls that underestimated the Coalition primary vote while overestimating Labor or (in Ipsos' case) the Greens.  The size of the failure, 3% two-party preferred, was not especially large by world standards but was the largest seen in Australia for decades.  The Newspoll/Galaxy seat polls were skewed against the Coalition by similar amounts, but still recorded a high strike rate in cases where they predicted a winner, though this was undermined by seats that they had at 50-50 often finishing with lopsided results.

Since the failure only YouGov (both in its YouGov branded polls and in Newspoll) has made a serious response to what happened, while Essential has made a limited response that in some respects has gone backwards, and that seems to be mostly concerned with presentation.

The causes of the failure have yet to be fully established but some viable suspects are:

* over-sampling of educated or politically-engaged voters
* difficulty sampling politically disengaged voters, who may have skewed unusually strongly to one side
* weighting by past vote
* herding or under-dispersed polls (see "Bouncing" below)

The polling failure does not mean polls are meaningless.  Polls may, for instance, show leads for one party that are outside the scope of historical polling errors, and such leads are likely to be meaningful when they appear.  However, it does mean polls must be treated with great caution in 2022 unless they show one side massively ahead.  There is a risk pollsters could overreact in the opposite direction to a failure, as happened in the UK in 2017.

House Effect

The issue variously called lean, house effect, skew or bias refers to the tendency of a pollster to produce results that are better for one major party or the other (or for specific minor parties) than what is likely to be the real situation.  The term "bias" is a poor one for this issue because it carries connotations of the pollster themselves liking one party more than the other or intending to assist one side, but there is no evidence that this is actually true of any major pollster in Australia.  The extent to which the house effects for each pollster are stable, or change in response to slight methods changes or political circumstances, is often a subject of debate.  In particular, the Coalition has historically more often outperformed election leadup polling than Labor, though whether this is still predictively reliable given the constant churn of polls and polling methods is debatable.

Bouncing

The issue often referred to as bouncing, but more technically as overdispersal or underdispersal, refers to how much a poll tends to move about from sample to sample even if voting intention isn't changing very much.  A given poll has a maximum margin of error based on its sample size, meaning that in theory 95% of the poll's results (once adjusted for the pollster's house effect) will be within that margin of error of the true value, but most of them will be much closer to the true value than that.  As the sample size increases, the maximum margin of error decreases, but the decrease isn't proportional.  For instance, for a 50-50 result from a sample size of 1000, the in-theory margin of error is +/- 3.1%, but for a sample of 5000 it is about +/- 1.4%, meaning that national polls with sample sizes above a few thousand are usually not worth the effort of producing them.  In practice, some polls tend to vary from sample to sample by much more than would be randomly expected, and these polls are bouncy or overdispersed.  Some polls are very static (except, or sometimes even, when voting intention actually changes sharply), and these are underdispersed.
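For those who want to check the arithmetic behind those figures, here is a minimal sketch of the standard formula (it assumes simple random sampling, which no real poll actually achieves):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Theoretical maximum margin of error (95% confidence) for a simple
    random sample of size n with observed proportion p."""
    return z * math.sqrt(p * (1 - p) / n)

# A 50-50 result: the margin shrinks only slowly as the sample grows.
print(round(margin_of_error(0.5, 1000) * 100, 1))  # ~3.1 (%)
print(round(margin_of_error(0.5, 5000) * 100, 1))  # ~1.4 (%)
```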

In theory underdispersal is nice, because a pollster wants to accurately reflect the national mood rather than releasing polls that are wrong by several points.  No one wants to issue a rogue poll that everyone then ignores.  But a poll that is underdispersed may in some cases be so because it is slow to pick up major shifts when they occur, or indeed doesn't pick them up fully at all.  There is also the problem that there is no way to make a poll under-disperse when using truly random sampling from the entire Australian population, so if a pollster's results are very steady the question must be asked: how are they doing it?  Is it really a pure and random poll, or is the pollster allowing data from other pollsters to influence the way they fine-tune assumptions that create the final outcome?  (The latter practice is known as herding.)  Other possibilities include that underdispersed pollsters are using tracking from their own poll or other modelling assumptions to chop rough edges off their results, or surveying the same respondents too often.

Mobile and Landline Phone Polls vs Online Polling

No major Australian pollster only polls landlines.
Newspoll no longer polls landlines or indeed phones at all.

A common trope in online polling discussion is the claim that such-and-such pollster is inaccurate because it "only polls landlines".  However this is no longer true for any serious national pollster.  Landline-only polling has been largely abandoned because of low response rates and difficulty sampling young voters by this method.

The old Newspoll's landline-only polling was still outperforming online polling in 2013 but during the following term landline-only polling reached a crisis point and largely disappeared.

There is no perfect sampling method.  Many voters do not have or answer landlines (this is especially true of voters under 40).  Not all mobile phone numbers are on lists available to pollsters and some voters do not answer unfamiliar numbers on their mobile phones.  Online panels rely on respondents who are comfortable with technology and like filling out online surveys to build up points for voucher rewards at what amounts to a sweatshop-level income.  Face-to-face polling is prone to skew towards parties seen as nice and against parties seen as nasty.  And so on.

Scaling

Getting a truly random sample of the Australian national population is difficult.  Some types of voters are simply much easier to contact than others.  One option is to keep contacting or emailing potential respondents until you get exactly the right demographic mix.  However this can introduce time delays and increase the costs of polling if you are using phone polling.  Another option is to "scale" (weight) the responses you have by applying corrections based on which groups are under-represented or over-represented in your sample.  For instance, suppose that voters in age group A are 10% of the voting population but only 5% of your sample, while voters in age group B are 25% of the voting population but 30% of your sample.  A simple scaling method would then be to double the value of each response from group A and multiply the value of each response from group B by 25/30.   In practice, scaling is much more complicated and a given response might be scaled on many different criteria at once, some of which might increase its weighting and others of which might decrease it.
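Here's a toy sketch of that simple scaling method (real pollsters weight on many criteria at once, often iteratively, so this is illustrative only):

```python
# Toy post-stratification weights: population share divided by sample share.
population_share = {"group_A": 0.10, "group_B": 0.25}
sample_share = {"group_A": 0.05, "group_B": 0.30}

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'group_A': 2.0, 'group_B': 0.833...}
# Each response from group A now counts double; each from group B counts about 0.83.
```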

Scaling effectively increases the margin of error and bounciness of a poll, because any sampling error in a small group that is scaled up could be magnified in the overall total.  There is also a risk that if a demographic group is hard to poll, then the voters who can be polled within that group might not be a fair sample, and that any error caused by that might then be magnified.  For instance, young voters are hard to reach using landline polling, excepting those living with their parents.  But are young voters who live with their parents representative of all young voters?  Robopolls that tend to have extremely low response rates from young voters are especially prone to the problem of scaling up what are basically rubbish samples by a large amount.
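One way to quantify how much unequal weights erode precision is the standard Kish "effective sample size" approximation - a sketch only, and not a figure I'm aware of any Australian pollster routinely publishing:

```python
def effective_sample_size(weights):
    """Kish approximation: (sum of weights)^2 / (sum of squared weights).
    Heavily unequal weights shrink the effective sample and widen the
    real margin of error beyond the nominal one."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

# 1000 respondents, but 100 hard-to-reach respondents weighted up five-fold:
weights = [1.0] * 900 + [5.0] * 100
print(round(effective_sample_size(weights)))  # ~576, not 1000
```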

Some areas of Australia are simply very difficult to poll accurately by any method.  The Northern Territory is one of them.  Inner city electorates are also hard to poll because of high rates of enrolment churn and non-enrolment in the electorate, and because they often have high proportions of young voters who are difficult to poll.

Internal and external

Many prominent pollsters conduct both "public polls" and "commissioned polls".  A public poll is a poll either conducted by the pollster themselves without anyone paying for it, or commissioned by a major media source, for which full details of results are usually publicly released.  Although in theory a media outlet's partisan leanings could lead it to hire a pollster to present results in a favourable light for its preferred party, there is really no evidence that this happens in Australia.

Commissioned or internal polls are polls paid for by a political party or by a group with an interest in an issue (such as a lobby group or company).  Commissioned polls usually ask standard voting intention questions, but it is the choice of the client whether to release results, and it is common for internal polling to be only selectively released (an increasing problem with robo-polling reducing polling costs).  Often the full details of commissioned polls are not released.

Some companies produce excellent public polling while also accepting commissioned polls in which the questions are asked in a way more likely to get the result the client wants.  Often the client wants a poll that shows strong support for their cause so that they can then get more publicity for their cause and attempt to convince politicians that it is popular.

Just because a pollster does good public polling does not mean their commissioned polls should always be trusted.  As a general rule no commissioned poll reported in media should be taken all that seriously, whatever the pollster, without the full release of the verbatim wording of all the questions in the order asked, and an extensive breakdown of results.  Even with these things, the wording of the questions often turns out to be suspect. Even if there is nothing wrong with the commissioned poll at all, there is still the possibility of selective release of good polls while not releasing the bad ones.  Furthermore, the accuracy of internal polling is prone to morale bias: some parties could be more likely to hire companies that tend to tell them what they want to hear, even when it actually isn't true.

The Australia Institute is an especially high-volume source of unsound commissioned polling (often with longwinded preambles that lead the respondent to answer in a certain way) though that is not to say that all of their polling is unsound, or that they are anywhere near the only culprits.

Upfront Exclusion

This term refers to the proportion of voters who are eliminated from results because they cannot specify a preference, refuse to answer the question, or fail to complete the survey interview.  For most pollsters this proportion is slight to moderate (a few % to sometimes 10%).  In theory if undecided voters had a tendency to shift to a particular party, this could make polls very inaccurate, but there is not much evidence that this issue has bitten in recent elections.  Generally, the higher the upfront exclusion rate, the more chance that those voters who do reply are not representative, but this seems to become a serious problem only with polls that upfront-exclude over 10%. A more serious problem is voters who pollsters do not reach at all.

The Green Vote

Some pollsters have a recent track record of usually or always overestimating the Green vote compared to actual election results, especially when the Green vote is fairly high.  An especially stark example was the 2014 Victorian state election, in which all 17 polls published in the two months before the election had the party's vote too high, by up to eight points.  Part of the reason for this is that the Green vote has often been very soft; there may be other reasons.  Small and new pollsters, and pollsters with high undecided rates, are especially prone to this problem.   Polling of "others" and "independents" is often also inaccurate.  Smaller parties tend to be under-polled if they are not specifically named, while the category "independents" tends to over-perform in polling compared to election results.  Voters may offer "independent" as an ambit wish for a good high-profile independent candidate, but they won't vote for one if one isn't on the ballot.

Preferred Prime Minister

Preferred/Better Prime Minister or Premier polling questions are a bugbear of Australian poll commentary, which would probably be more informed if such questions did not exist.

Given that Australian politics is so presidential and that the personal approval/disapproval ratings of the Prime Minister are a driving indicator of change in the 2PP vote, it might be expected that a question about who should be Prime Minister would yield good information.  It frequently doesn't.  For whatever reason (and it seems to have something to do with the don't-know option), the preferred leader scores of most major pollsters flatter the incumbent.  For instance, in Newspoll, if the two parties are tied on the 2PP vote, and their leaders are tied on personal ratings, then the Prime Minister will typically lead the Opposition Leader by about 15 points (two different estimate methods give figures of 14 and 17 points) as Preferred Prime Minister.  This skewing leads to the media talking about fairly big PPM leads for incumbent PMs as evidence of dominance when they are not, or small PPM leads or deficits as evidence that the government still has an ace up its sleeve when in fact they are evidence of trouble.  See Why Better Prime Minister/Premier Scores Are Still Rubbish.

The only pollsters that seem to avoid this are ReachTEL (see below) and Morgan SMS.

2PP Preferencing: Last Election vs Respondent Allocated

Most pollsters publish two-party preferred results that are based on the assumption that voters who do not vote for the major parties will distribute their preferences in the same way as at the last election.  Many pollsters who do this try to calculate the preference flow for the Greens (and often a few other named parties) separately from other parties, but some (eg Ipsos) use "overall preference flow" which assumes that the average flow from all non-major-party voters will stay the same (even if the proportion of them who vote for the Greens changes.)
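Here is a minimal sketch of how a last-election-preferences 2PP is derived, with invented primary votes and flow figures (real pollsters' flow estimates differ and are often not fully published):

```python
# Hypothetical primary votes (shares of the decided vote)
primaries = {"Coalition": 0.39, "Labor": 0.36, "Greens": 0.11, "Others": 0.14}

# Assumed share of each minor grouping's preferences flowing to Labor,
# taken from the previous election's actual flows.
flow_to_labor = {"Greens": 0.82, "Others": 0.40}

labor_2pp = primaries["Labor"] + sum(primaries[p] * flow_to_labor[p] for p in flow_to_labor)
print(round(labor_2pp * 100, 1), round((1 - labor_2pp) * 100, 1))  # 50.6 49.4
```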

Some pollsters, however, use respondent-allocated preferences, ie they ask the respondent how they will distribute their preferences.  One problem with this is that many voters will actually follow what their party's how-to-vote card says rather than decide for themselves. In any case this method has a history of strong Labor skew at federal elections and is generally less accurate.  At some state elections this method has shown a Coalition skew, at least among some pollsters.

The 2016 federal election reinforced the superiority of last-election preferences, following some recent cases (2013 federal, 2014 Victorian) where the truth was somewhere between the two.  In the 2015 Queensland election, last-election preferences proved very inaccurate and it's likely respondent-allocated preferences would have been more predictive for that election, and will be so for some other such elections with very large swings.  In the 2015 NSW state election the most conservative estimates of respondent-allocated preferences were accurate.  At the 2019 federal election, there was a significant preference shift but respondent preferences were no more accurate than last-election preferences.  The most accurate preference flow estimates were Galaxy's which were based on an undisclosed extrapolation from multiple recent elections, combined with an estimate for the United Australia Party.

For a detailed technical discussion at federal level see Wonk Central: The Track Record Of Last-Election Preferences.

Single Polls vs Aggregates

No matter how good a pollster is, no single poll series will consistently give a perfect and reliable picture of voting intention.  Aggregating results from multiple polls to get a picture of the whole range of voting intention is usually more reliable than assuming any one poll or poll series is accurate, except when polls engage in herding.  If you just have one poll saying 52-48, you do not know for sure the leading party is in front.  If you have five with an average of 51.5-48.5, all taken at the same time and without significant house effects, you usually have a much better idea that the leading party really is in front, though in 2019 this proved not to be the case.  When herding occurs, all aggregators will fail, and there are also cases (such as the UK 2015 and 2017 elections) where so many pollsters are wrong that some polls easily beat the aggregate.  The hard bit is knowing which ones!
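For illustration, here's a bare-bones sketch of the kind of sample-size-weighted, house-effect-adjusted average an aggregator might compute; the pollster names, figures and house effects are all invented:

```python
# Toy aggregate: sample-size-weighted average of house-effect-adjusted 2PPs.
polls = [
    # (pollster, reported ALP 2PP, sample size, assumed house effect in ALP's favour)
    ("Poll A", 52.0, 1000, +0.5),
    ("Poll B", 51.0, 1500, 0.0),
    ("Poll C", 51.5, 1200, -0.3),
]

total_n = sum(n for _, _, n, _ in polls)
aggregate = sum((tpp - house) * n for _, tpp, n, house in polls) / total_n
print(round(aggregate, 1))  # ~51.4 - only as good as the house effect assumptions
```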

Many people make the mistake of saying that if all the polls are within their margin of error of 50-50 then the race is as good as a tie.  Generally, this isn't true.  See "Margin of error" Polling Myths.

Different poll aggregates will give slightly different values at any given time because of the nature of the different assumptions made by those running them.  Such issues as what weight to place on given polls based on their past record, how quickly to assume a poll has ceased to be relevant and what the hell to do about Essential are not easy and different modellers will try different assumptions, and then modify them when elections provide more data.

A list of active polling aggregators is given at the base of this article.

Poll Fraud and Fake Polls

Poll fraud occurs when someone (possibly a pollster) simply makes up numbers, which means it can produce a "poll" without needing to spend time or money surveying anyone.  Poll fraud can be detected by various methods, including results that fail non-randomness tests in their fine detail.  Poll fraud is a problem at times in the USA.  No poll fraud in commercial polling has been detected in Australia to my knowledge, but fake polls are now and then circulated.

Faked private polls seen in Australia have included a faked "ReachTEL" for the seat of Toowoomba North at the 2015 Queensland state election, a faked "Nationals internal" for the seat of New England in October 2017, a faked Labor internal for the 2018 Batman by-election, and a faked "ReachTEL" for the seat of Curtin at the 2019 election.

A satire website called The Bug Online, associated with the @drongojourno and @thebugonline twitter feeds, has often produced fake Newspolls on what it believes, not always correctly, to be Newspoll nights.  Gullible Twitter users are sometimes fooled into believing these are real Newspolls.  Real Newspoll results do not appear via the Australian until at least 9:30 pm on Sunday nights, though sometimes results or hints are tweeted or broadcast before then by reliable or mostly reliable journalists.

Transparency

The transparency record of currently active Australian pollsters is generally dire.  Meaningful improvement has been seen from YouGov/Newspoll in the wake of the 2019 failure, but much more would be good.  The formation of the Australian Polling Council (see above) should lead to some improvement.

The pollsters

Newspoll
Method: Online panel poll for national and state polls.  Live phone polling (mixed landline/mobile) for seat polls
Preferences: Based on undisclosed mix of previous elections (including federal and state elections) and sometimes own judgement.  Seat polls may use respondent preferences in some cases

Newspoll, house pollster for The Australian, is Australia's best-known polling brand and the one that most influences political debate, election betting market moves, and public comment about party standing.   

Between election campaigns it has often polled fortnightly, but three-week gaps became increasingly common (see How Often Are Federal Newspolls Released?) and as of 2021 three-week gaps are the norm.  However sometimes the gaps will be varied for reasons such as avoiding public holidays, coinciding with resumptions of parliament, Budget schedules and avoiding clashes with state elections.  Currently the poll is typically released online on Sunday nights with in-print reporting on Monday.

Until July 2015, Newspoll was a telephone pollster that dialled randomly selected numbers and only called landlines.  In July 2015 the brand was transferred away from the company previously running it (which was dissolved, with some key staff moving to start Omnipoll).  Newspoll was then operated by what was Galaxy (see YouGov below) and was a hybrid pollster using a combination of online panel polling (a la Essential) and robopolling (a la ReachTEL).  

In late 2019, Newspoll was switched to a purely online panel poll similar to those used by YouGov overseas.  This poll performed moderately well at its first test (Queensland 2020).  The poll uses advanced targeting of respondents to preselect more apparently representative samples and to try to reduce reliance on scaling. 

The Newspoll brand has a long history, going back to late 1985, and has asked questions in a very stable form, making it an excellent poll for past data comparisons; these seem to have been not much affected by the 2015 methods change.  The brand has a predictive record at state and federal elections that is second to none, despite a failure in 2019 and a fairly bad final 2PP figure in 2004 (a result of a shortlived and incorrect 2PP estimate method).  The track record of the 2015-9 Newspoll is discussed here.

Far too much attention is still paid to poll-to-poll moves in Newspoll without considering the pattern (when available!) from other polls.  One behavioural change following the switch to Galaxy (now YouGov) is that Newspoll has become much less bouncy.

The poll has had a poor transparency record over time, although since the 2019 polling failure there has been some improvement.  From December 2017 shifts in the behaviour of Newspoll's 2PP relative to its primary votes became strongly apparent, as its 2PPs started to look a lot less like 2016 election preferences.  Galaxy had started employing a different formula for One Nation preferences following the Queensland state election (with One Nation preferences splitting around 60-40 to Coalition), but this was not documented until it was suspected and then exposed by psephologists over a period of months.

The new Newspoll has been tested at the 2020 Queensland and 2022 South Australian elections (where it performed well) and the 2021 WA election (where its final poll fell 3.7% short of predicting the margin of Labor's massive win, an error mainly resulting from errors on the primary vote, but also partly from preference assumption issues. A slightly earlier poll was more accurate.)  It was again rather accurate at the 2022 federal election though it fairly badly overestimated Labor's primary vote.

In 2020 YouGov introduced a new form of seat polling with targeted live phone polling of around 400 voters per electorate.  Of four such polls in the 2020 Queensland election, two were highly accurate and two were rather inaccurate. A further poll correctly predicted the demise of WA Opposition Leader Zak Kirkup, with an acceptable 2PP miss of just under 4%.

Newspoll attracts a massive volume of online conspiracy theories, most of them left-wing and virtually all of them groundless and daft.  Reading a full #Newspoll Twitter feed on a given poll night may cause permanent brain damage, and at least 95% of tweets that mention "Newspoll" and "Murdoch" together are rubbish.

[UPDATE: In July 2023 YouGov head of polling Campbell White and analyst Simon Levy left YouGov to form a new company Pyxis Polling and Insights.  Following a brief hiatus it was announced in mid-August that Newspoll would be done by Pyxis and would no longer be done by YouGov.]

YouGov (formerly Galaxy and YouGov Galaxy)
Method: As for Newspoll for state/federal level polls.  Live phone polling for seat polls
Preferences: Based on undisclosed mix of previous elections (including federal and state elections) and sometimes own judgement (in rare cases respondent preferences in seat polls)

YouGov, an international polling firm, acquired Galaxy Research in late 2017.  This had no immediate effect on Galaxy's existing polling methods, but following the 2019 polling failure, YouGov's Australian operation adopted similar practices to those YouGov uses overseas.  This applies both to Newspoll and the firm's other polling, such as regular polls of Queensland voting intention for the Courier-Mail, and sporadic YouGov-branded federal polls. YouGov has switched to live phone polling (mixed landline/mobile) for seat polling (see above).

Prior to being acquired by YouGov, Galaxy Research had been conducting federal polling since the 2004 federal election.  Galaxy's federal polling was for some time conducted by random telephone surveying but it was an early adopter of adding mobiles to the mix.

In this time, Galaxy had a formidable predictive record until 2019 and was an uncannily steady (underdispersed) poll, a characteristic the new Newspoll has inherited. At the 2016 and 2019 elections, Galaxy seat polls were notably underdispersed - they were not only much less variable than the actual results, but less variable than they would have been expected to be even if there was no difference between seats. Galaxy was in my view the best pollster of the 2013 federal election campaign and lead-up, and the Galaxy/Newspoll stable shared this honour again for 2016.

The major difference between YouGov and Newspoll currently is that YouGov runs in the Murdoch tabloids while Newspoll runs in The Australian.  However Newspoll has some established question conventions that lead to, for instance, lower undecided scores on leadership rating questions than in the tabloid-published YouGovs.  

As noted re Newspoll, the first test of the new YouGov methods at Queensland 2020 was fairly successful.  This included an accurate local combined poll of three south-eastern seats.    

In 2022 YouGov pioneered an MRP model for the federal election.  This was mostly very successful - nailing the 2PP and correctly predicting all but four of the seats won by major parties - but it did miss three wins by Greens and five by independents.

YouGov-Fifty Acres was a short-lived different poll with some extremely weird preferencing patterns; see previous edition.

[UPDATE: See note in Newspoll section re changes in Newspoll from Aug 2023 onwards.]

Essential Report
Method: Online poll
Preferences: Respondent (last-election for voters undecided on 2PP preference)

Essential Report is a fortnightly (formerly weekly) online poll that was, for a long time, the house pollster for the Crikey website's subscriber newsletter.  It is now mainly published by the Guardian.  Essential's respondents are selected from a panel of around 100,000 voters, and about 1000 are polled each week, by sending out an email offering small shopping credit rewards for participation.  Respondents found to have engaged in "flatlining" or "speeding" to get credits are excluded.  Until the end of 2017, Essential unusually used two-week rolling samples of voting intention to counteract "bouncing" from poll to poll, but now each sample is an independent sample.

In its very early days Essential was a very bouncy and Labor-skewed poll, but it made changes at some stage in 2010 and delivered a good result at that year's election.  However, in the 2010-3 and 2013-6 federal terms the poll still had some problems.  It was too underdispersed (see Essential: Not Bouncy Enough), but in a way that seemed to cause it to become "stuck" and to respond slowly and incompletely to big changes in voting intention, as compared to other pollsters.  Quite why this was is not entirely clear - it could be to do with the composition of the panel or with repeat sampling issues within it (against which some precautions are taken).  Essential also sometimes displayed a very different trend pattern to other pollsters.  Its performance in the 2013 election leadup was idiosyncratic.  At the 2016 election it produced an impressive final-week poll but doubts remained about its tracking behaviour.  After that its poll-to-poll variability became more natural, but it was still caught up in the 2019 failure.  Overall, Essential has tended to lean to Labor, except for its very last poll in the 2016 campaign.  

Essential asks a very wide range of attribute and issue based questions that try to drill down into the reasons why voters have specific attitudes, that in turn underlie their votes.  However the quality of these varies and some seem to be left-wing message-testing or headline-generating type questions rather than quality polling.  Many Essential polls are marred by high don't-know rates, a hazard of online polling formats.  It is also unclear how representative Essential's panel is on issue questions - see the debate involving Essential and the Scanlon Foundation here.

Following the 2019 polling failure Essential went forwards by publishing some extra details such as raw data breakdowns for support by party, some crosstabs and income classes, but also went backwards by initially ceasing to publish voting intentions for reasons I do not consider to have been adequately explained.  As a result it was impossible, for a long time, to benchmark Essential issues polling.

In July 2020 Essential introduced a new "2PP Plus" method of presenting results in which undecided voters are included rather than excluded.  This means that, for instance, a result might be reported as 48-44 with 8 undecided instead of 52-48.  Also, results are back-released every three months rather than regularly.  
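Converting a "2PP Plus" reading back to a conventional 2PP is simple arithmetic, as in this sketch using the example figures above:

```python
# Converting a "2PP Plus" reading back to a conventional 2PP by excluding undecideds.
alp_plus, lnp_plus, undecided = 48.0, 44.0, 8.0
alp_2pp = alp_plus / (alp_plus + lnp_plus) * 100
print(round(alp_2pp, 1))  # ~52.2, i.e. roughly the conventional 52-48
```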

A further minor annoyance is that Essential usually lists responses to its agree/disagree style issue questions in order of support rather than in the order offered.


Essential's 2022 federal poll performed fairly poorly, greatly underestimating Independents and Others.

Resolve Strategic/Resolve Political Monitor
Method: Mostly online panel polling, sometimes with live phone polling (landline/mobile) included
Preferences: Not currently releasing 2PP

In April 2021, Resolve commenced polling as the new major poll series for the Nine newspapers (SMH/Age), covering similar ground to the former Nielsen and Ipsos series.  The poll is released monthly, with bi-monthly NSW and Victorian polls.

Resolve was founded by Jim Reed, a very experienced pollster formerly of C|T (Crosby-Textor) and Newgate.  There is little available public data on accuracy because these firms have mainly served party and other private clients, though Newgate was one of the better pollsters on the same-sex marriage postal survey.  Prior to starting the Resolve Political Monitor series, Resolve was best known for a major contract with the Australian federal government on coronavirus-related polling.

Early indications were that Resolve had high estimates of the Independent vote compared to other pollsters, and low estimates for Others who are not Independents.  This may be partly a result of putting Independent on the readout in seats where no high-profile independents run.  (This may have also caused overestimation of One Nation in the opening Resolve poll.)  Once readouts were limited to candidates who ran, Resolve was much more accurate on the independent vote.  Its final 2022 poll shared honours with Newspoll as among the best.

Resolve initially did not publish two party preferred estimates with its primary votes, however estimates could be derived by observers such as me.  The dropping of 2PP estimates was supposed to get rid of horse race style commentary but as predicted has just led to horse race commentary continuing but being wrong.  Eventually, Resolve started releasing respondent preference estimates.  Resolve also asks its approval questions for leaders in a manner that specifies "recent weeks" (however at least some respondents don't read the instructions).

Resolve also uses a better-party-to-handle question format allowing for a "someone else" option, which may disadvantage Labor because left voters may think the Greens would handle an issue better while still thinking Labor would be better than the Coalition.

Resolve is not an Australian Polling Council member and its disclosure practices have varied.  See my opening article regarding Resolve here and also an excellent article by Murray Goot.  

uComms
Method: Robopolling (landlines and mobiles)
Preferences: Respondent

uComms uses the technology of robopollster ReachTEL (see below) to conduct polling, mostly seat polling.  A robopoll conducts automatic phone calls to randomly selected landline and mobile phones, and respondents pick answers according to options stated by a recorded voice. Robopolls are cheap to run and can be conducted very quickly, but have the disadvantage that more voters will hang up on them immediately.  Therefore they require a lot of scaling, which in theory increases the chance of errors.  However, ReachTEL performed well at many elections.  

In the past there has often been confusion about whether certain polls are ReachTEL or by uComms using ReachTEL.  uComms' 2018 Victorian statewide polling was no worse than others (not that anyone was all that close) while its 2018 Wentworth polling was no better.  

The few uComms polls released for 2019 federal seats were good, but these were of seats outside of the inner city areas where robopolls have most struggled.   uComms polls of two Queensland state by-elections in 2020 were also rather good.  A uComms poll of Greater Darwin for the 2020 NT election was excellent, and a poll for the 2020 ACT election was fairly accurate in an area where past polls have often been poor.  However uComms polls for the Western Australian and Tasmanian 2021 elections were wildly inaccurate, underestimating the incumbent governments by several points in both cases, and only the former had the excuse of being taken six weeks out from the election.  A uComms seat poll of the redistributed SA district of Stuart, taken a similar time out, was one of the most inaccurate polls seen in Australia, underestimating independent Geoff Brock by around 35 points.  uComms performed poorly at the 2022 election - its polls were erratic and skewed to the left.

In April 2019, an ABC report disclosed that two of the three shareholders of uComms are major unionists, resulting in the Nine papers (Age/SMH) announcing they would no longer use uComms, and some activist groups following suit.  

A common annoyance with uComms polls, as with ReachTEL, has been that figures are published without redistributing the initially "undecided" voters who give a response at the second attempt, making the primary votes for all parties look too low. Other pollsters include these voters in the figures for their second-attempt response.  Especially when media sources don't publish a full version of the poll, this often makes finding out exactly what is going on with the primary votes very difficult.  Also, uComms' use of the term "undecided" excludes "hard undecided" voters who have no idea what party they would choose - other pollsters call these the "undecided" voters (and most pollsters usually exclude them.)

Age breakdowns of uComms polls have frequently shown the 18-34 age group as oddly conservative.  This appears to be a result of scarcely reaching any voters in this group.

Morgan
Method: Varies; most recently multi-mode (online/"telephone"), but SMS, face-to-face and live telephone polling have also been used in recent years
Preferences: Respondent (last-election may also be published); switched to last-election during the 2022 campaign.

Roy Morgan Research is a polling house that traces its lineage back to Morgan Gallup polls conducted from the early 1940s.  The very experienced pollster was formerly the house pollster for The Bulletin magazine (which no longer exists), and suffered badly when it predicted a Labor win in 2001.  Now unattached to any specific media, Morgan is not as much discussed as other pollsters, but the lack of media attachment is not the only reason for that.  Morgan's polling is confusing and unreliable, usually not sufficiently documented, and its reputation among poll-watchers has declined in recent decades.

Various forms of Morgan polls are seen from time to time including the following:

* SMS only mobile phone polling (mainly used for state polls, also for some issues polling)
* Telephone polls (mainly used for leadership polling)
* Multi-mode polls (most recently a mixture of online and "telephone" polling)

Other combinations of multi-mode polling have been seen in the past, and at one stage Morgan used to issue a lot of pure face-to-face polls, which skewed heavily to Labor.  Morgan also usually uses respondent-allocated preferencing, which can create further skew to the ALP.  The pollster has often displayed severe skew to the Greens in its primary votes, and some of its local panels may be unrepresentative.  The small sample size of its state polls of the smaller states is another problem - Tasmanian samples are sometimes reported in the media, but with a sample size of 300 or less, why bother?

Morgan's multi-mode polls that include a face-to-face component have often skewed to Labor, but skewed to the Coalition for a while after Malcolm Turnbull first became Prime Minister.  The skew to Labor resumed at the 2019 election, but this time was shared with other pollsters.  Morgan ceased face-to-face polling in early 2020 on account of the COVID-19 pandemic, and its current online/phone multi-mode poll initially displayed little skew but later started skewing to Labor (mostly through the use of respondent preferences.)

Morgan polls seem to be very reactive to "news cycle" events.  SMS sampling (apparently drawn from a panel rather than random selection from the whole population) is probably too prone to "motivated response", with responses from voters who have strong views about the issues of the day being overrepresented in the results. My view is that SMS is a suspect polling method. At state elections Morgan's SMS polling has often been very volatile.

In the leadup to the 2016 election, Morgan issued many seat-by-seat results, frequently based on tiny sample sizes and often accompanied by unsound interpretation.  The pollster also stopped releasing national polling in the last month of the campaign, making it impossible to benchmark its performance for the future.  

Morgan's worst trait is its habit of cherry-picking which of its polls to release full details of at the time - it at times conducts far more polling than is immediately (or ever with full details) released to the public, and it has openly stated that it released one New Zealand poll at the time because it found the result notable.  Many federal polls in the current term have appeared only as 2PP readings on graphs, and the graphs have sometimes contradicted each other.  As a general rule, Morgan polls should be treated with great caution at all times.

Lower profile or nowadays less common polls - in alphabetical order:

ANUpoll

ANUpoll is a quarterly series through the ANU Centre for Social Research and Methods.  It is a mostly online but partly phone based poll functioning through a "Life in Australia" panel which sends out email, SMS and phone reminders for an online survey.  Sample sizes are typically large and it is not clear what proportion of the panel is polled.  Responses are reweighted but the weighted demographic indicators are not published.  Voting intention polling has been published from some waves of these surveys, not very consistently, and persistently displays a large skew to the Greens, suggesting that the poll is not very useful.  Results tend to be released with "undecided" included in a headline figure, thus understating primaries for other parties.  ANUpoll was very inaccurate both before the election and in exit polling in 2022.

Community Engagement

Community Engagement produced one national commissioned poll and some commissioned seat polls at the 2016 election.  Documentation on its early polls was so inadequate that it is not even clear what kind of pollster it was.  Results at the 2016 election were inaccurate on federal primaries and especially in the seat of Higgins with a 10-point error on the Liberal primary.  The pollster has been active in commissioned polling, including for the ALP in Western Australia in 2018, where it was described as a robo-poll, and I believe also in internal polling for the Victorian ALP in 2014 and 2018.  A report in the West Australian also showed the pollster copying ReachTEL's habit of including "undecided" in a headline figure, though whether it prodded respondents and discarded the unproddables, as ReachTEL does, is unknown.

Dynata (formerly Research Now)

Dynata is an online panel pollster similar to Essential.  It has produced a fair amount of mostly commissioned issues polling, with relatively little testable polling of vote shares.  In 2016 it conducted Senate polling for the Australia Institute which overestimated the vote for every party named (especially the Coalition and Greens) and hugely underestimated the vote for unnamed parties. (Senate polling by any pollster is likely to encounter this problem to some degree.)  In 2019 Australia Institute Senate polling was again inaccurate, overestimating the vote for One Nation especially, and underestimating the Coalition by a massive 10% in the final poll.  Dynata polls were again very inaccurate at the 2022 election.

The only lower house poll by this firm I am aware of is one taken about a month prior to the 2016 federal election, which underestimated the Coalition's primary by 4%, mainly by overestimating Others.

Dynata polls commissioned by the Institute of Public Affairs have been published with large volumes of demographic data (though only percentage breakdown crosstabs by age group, without saying how many respondents are in each age group).  These have displayed skews by gender, with younger respondents more likely to be female and older respondents more likely to be male.  At best, heavy scaling would be required to adjust for this. 


Environmental Research and Counsel

This polling firm is within the Essential Media group of companies that includes Essential and also UMR (a well-known ALP internals pollster) but is not the same company as Essential.  Two commissioned seat polls for the Greens at the 2019 election both significantly favoured the Greens compared to the actual results; one (Higgins) used much too small a sample size and incorrectly had them winning the 2CP.

EMRS

EMRS is a Tasmanian pollster that has surveyed state and federal voting intention in Tasmania since the late 1990s, and sometimes does commissioned voting polls inside and outside the state.  It is best known for quarterly state voting intention polling.  Currently its state polls use a mix of landline and mobile polling while some of its other polling uses a mixture of online and phone polling.  I believe that even its state phone polling is at least partly a panel poll and not truly random.

EMRS was historically a difficult pollster to make sense of because its undecided rates were often much higher than for other pollsters, and this often applied even after the initial prodding of unsure voters to say which party they were leaning to.  (Antony Green's partial defence of the company's high historic undecided rates here was refuted here).  At past state elections the pollster has tended to overestimate the Green vote and underestimate the vote of the incumbent government of the time by, on average, a few points each time.  In late 2019 the pollster made significant weighting changes and increased its coverage of mobile phones.  This appears to have fixed issues with excessive Greens and at times Others votes.

A commissioned EMRS poll of the 2018 Hobart City Council mayoral race was very accurate, despite the race being a voluntary vote and much harder to poll than a state or federal seat poll.  In recent times EMRS has often released polls in batches of several months' polling at a time (so that only about one in three polls are released when they are taken), a suboptimal practice.  Disappointingly, EMRS did not release a final poll for the 2021 Tasmanian election (perhaps because nobody paid them to) but a poll from months before was still reasonably close.  

EMRS is now a member of C|T Group (formerly Crosby Textor), which mainly does Liberal Party internal polling and does very little public polling.  Message-testing polls attributed to EMRS in January 2022 attracted online controversy.

Ipsos
Method: Live phone polling (mixed landline and mobile) and online panel polling
Preferences: Last-election preferences (batched "overall preference flow") and respondent preferences were both released

Ipsos is a global polling brand with a good reputation.  Fairfax Ipsos started in late 2014 and is a live phone poll that samples both landlines and mobiles and operates on a similar scale and frequency to the former Fairfax Nielsen polls.   The poll's biggest issue is that it persistently has the Green primary vote much too high and the ALP primary too low; it is also prone to have Others on the high side.  It is also somewhat bouncier than other national polls, largely because of its smaller sample sizes (recently reduced from 1400 to 1200 as of late 2018).  At elections so far it tends to have performed well on the 2PP vote (except for 2019) but not so well on the primaries.

Ipsos' leader ratings polling is noticeably more lenient than Newspoll's.  This especially applied in the case of former Prime Minister Turnbull.  Ipsos can have long gaps between releases, which does not stop Fairfax/Nine journalists making a big deal of meaningless changes since the previous Ipsos in their coverage.

Ipsos uses a slightly different last-election preference method to other pollsters.  A single preference flow for all minor parties is found and the same flow is applied irrespective of changes in the (alleged) support for particular minor parties.  One media report suggests this method might have changed recently but this isn't verified yet.

Following the 2019 polling failure, Nine media (formerly Fairfax) dumped Ipsos, though said media usually takes long breaks after elections anyway.  Ipsos was commissioned by the AFR for a new poll in the leadup to the 2022 election and this poll performed reasonably despite some skew to Labor.

Ipsos is also conducting panel polling through its iSay (formerly MyView) online system and some results have been reported from this, including state leadership results.  They regularly release data in an Issues Monitor series but it has been difficult to find details about the polling methods beyond that they use this panel, which uses quotas for respondents to at least some degree.

Prior to the start of the Fairfax polling, Ipsos conducted some other voting intention polls under the name Ipsos i-view.

JWS Research

JWS Research is another robopollster.  It conducted a massive marginal-seat poll at the 2010 federal election with indifferent predictive results on a seat basis (but an excellent overall 2PP estimate) and a similar exercise at the 2010 Victorian state election with excellent results.  In the 2010-3 term it was notable for a string of aggregated marginal seat mega-polls, including some jointly commissioned by AFR and a Liberal Party linked strategic advice/lobbying firm called ECG Advisory Solutions.  These polls were often blighted by the release of clearly unsound seat projections based on them, but that is not the fault of the data.  JWS also conducted many local-level seat polls at the 2013 campaign.  Electorate-level polls released by JWS during the 2013 campaign showed a strong general lean to the Coalition.  It is likely that the series of aggregated marginal polls experienced the same issue.

In the 2013-6 and 2016-9 cycles JWS kept a lower profile, but it releases very thorough and useful issues polling every four months in an omnibus called True Issues.   In 2017 the AFR reported that John Scales of JWS Research holds large market research contracts with the federal Government.

(KORE: See Voter Choice below)

Lonergan Research

Lonergan is a long-established robopollster that sometimes produces public polling, and has also done many internal polls, often for the Greens and other left-leaning entities.

Lonergan had a poor 2013 campaign with its seat polls showing a massive Coalition skew and a commissioned mobile-phone-only poll proving very inaccurate (perhaps because its sample size was too small).  Its final 2016 federal poll, however, was quite accurate despite being taken nearly two months before the election.  Its NSW 2014 state election polls showed skew to the Coalition and results from its commissioned seat polls have been mixed.  A Lonergan landline-only seat poll of the Batman by-election, however, scored a bullseye, but it may have been lucky as it was taken before public attacks on the Greens candidates by disgruntled Greens members.  At the Queensland 2019 state election only one of three Lonergan polls for the Greens was too favourable to them, with the other two being highly accurate.

Lonergan initially attracted criticism for scaling results to voter reports of how they voted at previous elections. Some voters may not report their voting behaviour at previous elections accurately, and may over-report voting for the winner, as a result of which polling becomes skewed towards the other side.  Following the 2019 polling failure it became apparent that more major pollsters were using this method but unlike Lonergan were not admitting to it (I have no evidence that Lonergan still uses this method.)

Lonergan oddly confessed to herding by suppressing supposedly accurate Queensland 2019 federal polling because it did not believe the results.  The weird aspect of this episode is that Lonergan was not conducting any published public polling and as such was not previously tarnished by the failure.

A Lonergan Senate poll of SA for the 2022 election was inaccurate.

MediaReach / TeleReach / KJC Research

MediaReach/TeleReach/KJC Research (these names appear to be used by the same firm) is another IVR pollster (robopollster) that is owned by a firm with several years' experience in the field.  It has done, for example, state polling of WA and the NT and an electorate poll of Mackellar.  At both the Northern Territory election and the 2016 election for the seat of Solomon, MediaReach overestimated the large swing against the CLP by about five points.  It produced very detailed polling for the 2020 NT election, but the polling was in fact party polling for the Territory Alliance, and, as is often the case with released internal polling, it vastly overestimated the sponsor's result and was completely useless in predicting the outcome. The poll predicted the TA would win several seats; it in fact barely scraped one.  On the other hand the poll under the KJC Research brand correctly predicted that Dale Last would retain Burdekin in the 2020 Queensland election with an increased margin.

MediaReach was heavily used in the 2018 Tasmanian election campaign by the Liberal Party with numerous results selectively released to the media.  The released results performed very well, effectively tracking the collapse of the Jacqui Lambie Network vote and showing that the Liberals were on course for a comfortable victory.  There may have been a self-fulfilling prophecy aspect in that success.  However at the 2018 Braddon by-election the poll was not accurate, incorrectly finding that the Liberals had driven independent Craig Garland's vote down below 5% and were on track to win (he polled 10.6% and his preferences caused them to lose.) Generally the usually cherrypicked results of these polls released to media should be viewed as attempts by parties to control the narrative rather than to report it.

In March 2022 seat polls variously described as Telereach and KJC Research were commissioned by NewsCorp.  These showed improbably high One Nation votes in several seats, also some unlikely 2PP results given the primary votes (presumably as a result of respondent preferencing) and very high variation between seats.  A KJC Research poll of Wentworth was also seen.  

Painted Dog

Painted Dog Research is a market research firm which has conducted some issues polling through an online panel site called Rewarding Views.  In June 2020 it polled leadership ratings for WA state leaders and the federal leaders in WA, but the results were described as representative of the Perth metro area, not necessarily the whole state.  I am not aware of any voting intention results close to election day.  A Painted Dog poll at the time Scott Morrison became PM had Labor leading the Coalition on primaries by 2 points in WA; the Coalition ultimately outpolled Labor by 14 points.  The turnaround on those figures was somewhat larger than that experienced in national polling overall.

ReachTEL
Method: Robopolling (landlines and mobiles)
Preferences: Respondent

ReachTEL was for a while a commonly encountered "robopoll", often used by Sky, Channel Seven and various newspapers, and also as a commissioned poll for various groups.  It has now been mostly supplanted by uComms.

ReachTEL soon established itself as a reliable and accurate national and state-level pollster, being among the top few pollsters in a string of elections, including being the best statewide pollster at the 2015 NSW state election and the most accurate pollster of primary votes in Queensland in 2015.  However its tracking performance at state elections in 2017 was poor, showing the WA election as unduly close until its final poll and generally showing the LNP as winning the Queensland 2PP until very close to the end of the campaign.  ReachTEL bounced back at the Bennelong federal by-election where both its polls were much more accurate than all three of Newspoll/Galaxy's.  It was also the most accurate poll at the 2018 Tasmanian state election (where Newspoll/Galaxy did not take the field.)

ReachTEL's electorate-level public federal polling has sometimes been skewed to the Coalition.  It has also struggled (as have all pollsters, but ReachTEL more than most) in inner-city state electorates with high Green votes.

ReachTEL forces answers to some questions, often disallowing an undecided option and requiring the respondent to choose one option or the other.  This results in preferred Prime Minister figures that are often closer to the national two-party vote than those of other pollsters.  It also can produce worse ratings for the government on issues questions.  The suggestion is that there are many people who have slightly negative views of the government but will let it off with a neutral rating unless forced.  Forcing can cause voters to hang up but the company advises me that the percentage of hangups after the first question is very small.

ReachTEL leadership performance ratings use a middle option of "satisfactory" which seems to capture some mildly positive sentiment.  For this reason ReachTEL ratings when expressed in the form Good vs Poor are sometimes harsher than those of other pollsters.  In 2016 ReachTEL switched to using respondent-allocated preferences for most of its polls.

A common annoyance with ReachTEL polls has been that figures are published without redistributing voters who are initially "undecided" but give a leaning when asked a second time, making the primary votes for all parties look too low. Other pollsters include these voters in their headline figures according to their second-attempt response.  Especially when media sources don't publish a full version of the poll, this often makes it very difficult to work out exactly what is going on with ReachTEL's primary votes.  Also, ReachTEL's use of the term "undecided" excludes "hard undecided" voters who have no idea what party they would choose - other pollsters call these the "undecided" voters (and most pollsters usually exclude them.)
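By way of illustration, here is a minimal sketch (with invented numbers, not actual ReachTEL figures) of the rescaling usually needed before such primaries can be compared with other polls or with election results: proportional redistribution of the soft undecided share.

```python
# Minimal sketch, invented numbers only: rescale published primaries so that
# the soft "undecided" share is redistributed proportionally and the figures
# sum to 100, making them comparable with other pollsters' headline numbers.

def redistribute_undecided(primaries, undecided_pct):
    """Scale primaries up so they sum to 100 with the undecided share excluded."""
    scale = 100.0 / (100.0 - undecided_pct)
    return {party: round(share * scale, 1) for party, share in primaries.items()}

published = {"Coalition": 38.0, "Labor": 33.0, "Greens": 10.0, "Others": 12.0}  # sums to 93
print(redistribute_undecided(published, undecided_pct=7.0))
# -> {'Coalition': 40.9, 'Labor': 35.5, 'Greens': 10.8, 'Others': 12.9}
```

Proportional redistribution is only an approximation (soft undecided voters do not necessarily split the same way as decided voters), but it at least puts the numbers on the same basis as other polls.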

In 2017 ReachTEL was found to be incorrectly distributing respondent-allocated preferences of Nationals voters in its federal polls.  Since in an actual federal election most Nationals votes count directly towards the Coalition's 2PP and are never distributed as preferences, splitting them according to respondents' stated preferences produced a skew to Labor.  This may have been cancelled out by a tendency of ReachTEL respondents to be otherwise overly generous to the Coalition in their preferencing (as seen in the Queensland election leadup).
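To illustrate why this matters, here is a rough sketch with invented primary votes and preference flows (these are not ReachTEL's figures or real flow rates) showing how splitting Nationals votes by respondent-stated preferences, rather than counting them directly for the Coalition, shifts an estimated 2PP towards Labor.

```python
# Invented numbers only: the effect of wrongly "distributing" Nationals votes
# on a two-party-preferred (2PP) estimate built from primaries and flows.

primaries = {"Liberal": 32.0, "National": 6.0, "Labor": 36.0, "Greens": 10.0, "Others": 16.0}
# Assumed share of each minor party's vote flowing to the Coalition as preferences.
flows_to_coalition = {"Greens": 0.18, "Others": 0.55}

def two_pp(primaries, flows, distribute_nationals=False):
    coalition = primaries["Liberal"] + (0.0 if distribute_nationals else primaries["National"])
    labor = primaries["Labor"]
    to_distribute = dict(flows)
    if distribute_nationals:
        # The error: treating Nationals votes like minor-party votes and splitting
        # them by a respondent-reported flow instead of counting them as Coalition.
        to_distribute["National"] = 0.80
    for party, flow in to_distribute.items():
        coalition += primaries[party] * flow
        labor += primaries[party] * (1.0 - flow)
    return round(coalition, 1), round(labor, 1)

print(two_pp(primaries, flows_to_coalition))                             # (48.6, 51.4)
print(two_pp(primaries, flows_to_coalition, distribute_nationals=True))  # (47.4, 52.6)
```

In this toy example the error alone shifts the estimated 2PP by more than a point towards Labor.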

From at least March 2017 reports emerged that some ReachTEL polls were including a charity section in which the respondent could not complete the call without pressing a number, following which their details would be passed to a charity.  What happened to the poll data if the respondent hung up at this point wasn't established.

Cases of ReachTEL calling voters who do not live in the electorate being surveyed are often reported.  It appears this is a product of a small error rate but a very large number of total calls.

From 2018 onwards, pure ReachTEL polls were rarely seen, and many polls reported as pure ReachTELs were actually uComms polls using ReachTEL's system.  Following acquisition of the company, there has been little fresh public polling activity.  However at least one commissioned poll that was clearly ReachTEL and not uComms was seen in December 2019.

Redbridge

Redbridge is a newly-emerged pollster with staff from a range of political backgrounds, but the most prominent on social media is former Labor staffer Kos Samaras.  Redbridge has conducted several seat polls and Victorian state polls, the former of which have often had remarkably bad results for Labor in outer suburban or blue-collar seats, even while national polls show no swing away from Labor.  Redbridge often uses unusual practices such as naming party leaders when describing a party.  An early 2021 commissioned poll by Redbridge on electric vehicles was reported as finding that votes were likely to be affected by electric vehicle issues, but the poll asked voters many questions with clear potential to influence their responses to later questions.

Some media reports have used Redbridge polls to claim that such-and-such party has suffered a spectacular decline in its primary vote.  However, a problem with these reports is that Redbridge's headline figures do not redistribute undecided voters (and its polls tend to have high undecided rates as well), meaning that the primary votes are artificially deflated and should not be compared with the previous election (the rescaling sketch in the ReachTEL section above shows the adjustment needed).

Redbridge polls have also attracted attention for findings of high United Australia Party votes, but these polls use a relatively short readout of parties which inflates the UAP vote.

Redbridge has recently conducted one major state poll for the Herald Sun.  

In the 2022 campaign Redbridge conducted some very sophisticated seat polling in league with data experts through Climate 200.  Results were exceptionally accurate.

Omnipoll

Omnipoll was started by some leading staff of the original Newspoll company when The Australian transferred the Newspoll brand to Galaxy in 2015.  Omnipoll mainly conducts issue polling and has done very little known voting intention polling.  Seat polling for the coal industry prior to the 2020 Queensland election was very inaccurate, overestimating the LNP by about 8.5 points and underestimating Labor by the same amount, and also underestimating One Nation and KAP and (to a lesser extent) overestimating the Greens.  

Utting Research

Utting Research was founded by John Utting, former head of UMR, then the ALP's commonly used internal pollster.  Utting Research is not an Australian Polling Council member, and details on the methods and results of its polls (often commissioned robopolls, sometimes with small sample sizes) tend to be very sketchy.  There has been relatively little public testing of Utting Research's accuracy, but two seat polls at the 2022 South Australian election were close to the mark despite small sample sizes.  Utting Research was also the first known pollster to find WA Labor heading for a landslide victory with a 6 in front of its 2PP at the 2021 state election, though that poll was taken so far out from the election that there is no way to say whether it was accurate when taken.  The final Utting polls for the 2022 federal election greatly overestimated the Coalition, except for correctly predicting their defeat in Curtin.

Voter Choice (now KORE)

Voter Choice was a scaled opt-in panel project (like a smaller-scale Vote Compass in that way) that produced some polling and also some comments based on qualitative research.  Its polling was only tested at the 2019 federal election (at which it was worse than the established pollsters) and the Wentworth by-election (at which it was very inaccurate, especially concerning the Liberal-Labor 2PP result), although it also released predictions for the Super Saturday by-elections that had some basis in very small-sample unpublished polling exercises.

Voter Choice openly documented that it adjusted the results of its Wentworth polling by introducing new weightings because the numbers "looked wrong" (in what way, and why, is not stated) and that after the new weightings were added the pollster "liked" the results.  This was apparently because of qualitative data, but why is not made clear.  The Wentworth poll used a simulated ballot of major contenders to distribute preferences (an advanced form of respondent preferencing.)  The owner of Voter Choice has also explicitly admitted to not publishing a measure supposedly pointing to a Coalition win in 2019 because she did not believe the measure.

Voter Choice claims and reports should be treated with enormous caution as the founder of Voter Choice sometimes makes very strange claims online, including a tweet that said Labor "could not win" the 2019 election if its 2PP vote was 52.6%. 

In September 2021 Voter Choice rebranded as KORE.  KORE is also a small-scale pollster that uses both a panel and whatever responses it can get from social media, aside from those it sometimes discards as obvious stacking (including one case where respondents apparently assumed it was a Scott Morrison poll).  As a result there may well be a lot of the same people taking every KORE survey, but it is also prone to viral social media skew through being shared in political echo chambers.  KORE's primary votes also appear to skew to Independents, like Resolve's but for different reasons.  KORE also published a January 2022 federal poll with the Coalition in the low 20s, so I do not take its polling seriously.

Instead of 2PP, KORE presents a very complicated "effective vote" measure based on a simulated preferential ballot that is supposed to account for non-classic contests (exact details of the method are unknown).  This ends up being broadly similar to 2PP except with some support for Others.  KORE also uses an "incumbent vs challenger" measure to say what the poll is showing about a likely election result, but this is based on whether voters, in a 2PP-like sense, support their incumbent.  Since most do (most incumbents win their seats easily at any election), this measure is completely unsound.

KORE makes many unsubstantiated claims about the effectiveness of its survey methods, few if any of which I believe. In general in my view Voter Choice and KORE polls should not be reported by media.

KORE's final poll for 2022, released in March, was very inaccurate, although its seat projection was surprisingly good as a result of large methods errors cancelling each other out.

WAOP

Western Australian Opinion Polls (WAOP) is a very long-active but relatively little-known WA pollster. It has conducted commissioned polls on a range of issues, usually with modest sample sizes, and in 2017 released commissioned voting intention polling for the federal seat of Pearce.  Its use of decimal results means its results may be mistaken for ReachTEL's.  There is very little information online regarding WAOP's methods and no available basis yet for testing its accuracy.  In 2012 one of its polls was described as a "telephone poll".  A 2015 poll used mainly "voice broadcast" polling (ie robopolling) with a small live telephone sample.

Others

Others will be added here as I come across them or on request.

Online or TV News "Polls": They're Useless!

Ah, but what about those polls on newspaper websites or Yahoo that give you the option of voting for a stance on some hot-button issue?  What about those TV news polls that ask you to call a number for yes or a different number for no?

The short answer is that these are not real polls.  They are unscaled opt-ins and are not scientifically valid as evidence of voter intentions.  Firstly, as regularly noted in the fine print, they only reflect the views of those who choose to participate.  If a media source tends to be read more by right-wing voters, then its opt-in polls will tend to be voted on more by right-wing voters.

Secondly, opt-ins suffer from "motivated response".  People who care deeply about an issue will vote on them, but people who really don't have a strong view (but might answer a question put in a real poll that they've agreed to spend time on) will probably not bother.

Thirdly opt-ins are prone to co-ordinated stacking.  Activist networks will send messages by email or social media telling people there is a media poll they can vote in, and this will often lead to votes being cast from way outside the area to which the poll relates.  Opt-ins are easily flooded by this method, producing very skewed results.

Finally, opt-ins are often prone to deliberate multiple voting by single voters, either by people with strong views on an issue who want to manipulate the outcome, or by people who want to ruin such polls precisely because the results are taken far too seriously.  There are ways to try to stop multiple voting, but some work better than others. (See in this regard the brilliant work of Ubermotive and also see the guide to how to stop it here.)

It is especially unfortunate that the ABC sometimes reports "polls" of this kind.  They should know better.

Weighted Opt-Ins (Vote Compass etc)

However we do sometimes see weighted opt-ins (examples including the ABC's Vote Compass and Australia Talks surveys) where respondents choose to take part in the survey and responses are then reweighted to reflect the overall population distribution.  These are more credible than unweighted opt-ins but less reliable than true polls.  Recruitment of respondents is more likely to involve biases that cannot be controlled for by weighting (eg people who like a particular media source may lean one way or the other even after accounting for demographics), and when such surveys are conducted on a smaller scale than Vote Compass (eg with only thousands of respondents) they may be prone to stacking.
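For those curious how the reweighting step works in principle, here is a minimal sketch of weighting an opt-in sample to population shares on a single demographic variable (all respondent data and population shares here are invented).

```python
# Invented data only: reweight an opt-in sample so one demographic (age group)
# matches assumed population shares, then tally a weighted voting result.

from collections import Counter

respondents = [
    {"age": "18-34", "vote": "A"}, {"age": "18-34", "vote": "A"},
    {"age": "18-34", "vote": "B"}, {"age": "35-54", "vote": "B"},
    {"age": "55+",   "vote": "B"},
]
population_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

sample_counts = Counter(r["age"] for r in respondents)
n = len(respondents)
# Weight for each group = population share / sample share.
weights = {g: population_shares[g] / (sample_counts[g] / n) for g in population_shares}

totals = Counter()
for r in respondents:
    totals[r["vote"]] += weights[r["age"]]
weighted_pct = {v: round(100 * t / sum(totals.values()), 1) for v, t in totals.items()}
print(weighted_pct)  # raw sample: A 40%, B 60%; weighted: A 20.0%, B 80.0%
```

Weighting of this kind can only correct for the variables actually weighted by; it cannot fix a recruitment pool that leans one way within every demographic cell, which is the point made above.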

I hope this guide is useful; feedback is very welcome.

Poll Quality Reviews

The following pieces on this site have compared the performance of different polls at a specific election:

Voice Referendum Polling Accuracy
NSW 2023
Victoria 2022
2022 Federal Election: Pollster Performance Review
Tasmania 2021: What Was The Point Of This Election?
Recent State Polling Does Not Skew To Labor

The only currently active aggregators are Bludgertrack and Mark the Ballot. I will mention others as I become aware of them.  

7 comments:

  1. Hi Kevin,
    Have a great holiday, and please keep up the good work. I find that all of your posts are excellent and really informative.
    dedwards

  2. Kevin - thanks for this analysis. Much appreciated.

  3. Hi Kevin,

    Is that Voter Choice pollster the same one that was once run by this Dr RK Crosby?

    https://twitter.com/ktxby

    1. Yes. That's caused me to check her feed (having been blocked by her for some time) and I see Voter Choice has rebranded as Kore.

    2. You and me both.

      Rebekah pointed out that the polls did systematically screw up in 2019, and that herding was a distinct possibility due to how close the polls were for much of the campaign. Crosby told us to look at the Voter Choice project, claiming that she found a 2% swing to the Coalition in the final days of the campaign. She also claimed that if we put her results into our list of final polls, we would get the variance we were looking for.

      It's down now, but I found an archived version of the Voter Choice page:

      http://web.archive.org/web/20200309123102/https://www.voterchoice.com.au/weekly-survey-5-results-part-2-explaining-the-late-shift/

      Looks interesting...until you realise she weighted her results by the vote count as of 25/May! How is that in any way comparable to pre-election polling, where you have to select weighting frames without knowing the result in advance??

      Not to mention - if you amplify the Coalition vote through weighting by vote share, then of course it'll look like there was a swing to them! e.g. if your original poll showed equal and opposite swings, but you upscale the Coalition vote to match the election results, then no duh you get a swing to the Coalition.

      The tweet thread: https://twitter.com/RebofAlexandria/status/1403871291304595458

      (note she deleted a lot of the stuff which painted her in a bad light)

      Anyways. What'd you get blocked for?

    3. I was blocked for taking issue with a now-deleted tweet she made on 8 or 9 April 2019 that said "Except that with 52.6 2PP Labor could not win the election. So bored with people equating national 2PP into seats, it doesn't work that way". Her attempt to justify that claim in later tweets included a claim that 8 ALP seats might be won by independents.

      For sure, my review of her comments did not exactly mess about, including describing her claim as "ludicrous" and also quote-tweeting it with the comment "If someone strapped me to a chair and forced me to watch Sky after dark 24/7 it might still take me weeks to see electoral analysis as bizarre and bad as this here tweet."

    4. Given my own experience with attempting a reasonable conversation with her, I have to say your response is very deserved and absolutely understandable.

      I attempted to be as polite as possible starting out, but once she blocked me after I asked a question about what might have been a typo, I got fed up and decided to let it rip. Kept claiming Rebekah and I were wrong and didn't know what we were talking about without any evidence or reason given (she's deleted all of that now, but Rebekah's kept the screenshots).

