Just before the 2013 election I posted a Field Guide to Opinion Pollsters, which has become one of the more enduringly accessed pieces on this site. However, over time parts of its content have become dated or specific to that election, and with more and more pollsters emerging as others disappear, the thing has got too long. I've decided therefore from now that I will post a new edition shortly into the life of each parliament, editing it through that parliament as the need arises. Pollsters not expected to be active in the life of the current parliament will be removed, but the old edition text will remain on the previous page.
There are a lot of polls about in Australia these days. But how do they all work, which ones have runs on the board and which ones can you trust the most? This article describes what is known about each pollster and its strengths and weaknesses and includes coverage of general polling issues.
The gold standard of success for an opinion pollster is to get the result as close to right as possible in its polls at election time. However, some pollsters are little-tested against actual elections, and getting a specific election right is a combination of skill and luck. In elections where there is a swing in the last day or two of the campaign, a pollster that is not actually polling correctly may have its errors cancelled out by the swing, and hence record a lucky hit. There is more to being a good pollster than just getting it right at election time - a good pollster should also provide useful data between elections and do so using well-designed questions that are easy to interpret. And a pollster should also present their data in a way that makes sense and isn't misleading or confusing.
Some Common Themes
There are some general issues that affect a number of pollsters that I should go through before I move onto individual pollsters. If you just want to look up a given pollster, scroll down; you can always scroll back to this section if you come across a term that needs explaining.
The issue variously called lean, house effect, skew or bias refers to the tendency of a pollster to produce results that are better for one major party or other (or for specific minor parties) than what is likely to be the real situation. The term "bias" is a poor one for this issue because it carries connotations of the pollster themselves liking one party more than the other or intending to assist one side, but there is no evidence that this is actually true of any major pollster in Australia. The extent to which the house effects for each pollster are stable, or change in response to slight methods changes or political circumstances, is often a subject of debate.
The issue often referred to as bouncing, but more technically as overdispersal or underdispersal, refers to how much a poll tends to move about from sample to sample even if voting intention isn't changing very much. A given poll has a maximum margin of error based on its sample size, meaning that in theory 95% of the poll's results (once adjusted for the pollster's house effect) will be within that margin of error of the true value, though most of them will be much closer to the true value than that. As the sample size increases, the maximum margin of error decreases, but only in proportion to the square root of the sample size, not the sample size itself. For instance, for a 50-50 result from a sample size of 1000, the margin of error is +/- 3.1%, but for a sample of 5000 it is about +/- 1.4%, meaning that national polls with sample sizes above a few thousand are usually not worth the effort of producing. In practice, some polls tend to vary from sample to sample by much more than would be randomly expected, and these polls are bouncy or overdispersed. Some polls are very static (sometimes even when voting intention actually changes sharply), and these are underdispersed.
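The margin-of-error figures quoted above follow from the standard formula for a sampled proportion; a quick sketch to verify them (the function name is mine, not any pollster's):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Maximum 95% margin of error for a proportion p from a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 5000):
    print(f"n={n}: +/- {margin_of_error(0.5, n) * 100:.1f} points")
# n=1000: +/- 3.1 points
# n=5000: +/- 1.4 points
```

Note that quintupling the sample only cuts the margin of error by a factor of about 2.2 (the square root of five), which is why huge national samples deliver diminishing returns.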
In theory underdispersal is nice, because a pollster wants to accurately reflect the national mood rather than releasing polls that are wrong by several points. No one wants to issue a rogue poll that everyone then ignores. But a poll that is underdispersed may in some cases be so because it is slow to pick up major shifts when they occur, or indeed doesn't pick them up fully at all. There is also the problem that there is no way to make a poll under-disperse when using truly random sampling from the entire Australian population, so if a pollster's results are very steady the question must be asked: how are they doing it? Is it really a pure and random poll, or is the pollster allowing data from other pollsters to influence the way they fine-tune assumptions that create the final outcome? (The latter practice is known as herding.) Other possibilities include that underdispersed pollsters are using tracking from their own poll or other modelling assumptions to chop rough edges off their results, or surveying the same respondents too often.
Mobile and Landline Phone Polls vs Online Polling
No major Australian pollster now polls landlines only.
In the lead-up to the 2013 federal election it was widely argued that the rising proportion of mobile-phone-only households (which contain mostly young voters) meant that landline-only polling skewed to the Coalition. Yet at that election there was no such skew, and not much difference in performance between landline-only phone polling and polls that called mobiles. The most accurate final poll at that election polled landlines only. Partly this was because unrepresentativeness in landline-only polling can be overcome by scaling (see below) and partly this was because the political attributes of landline and non-landline households seem to not be as different as might be expected. See Christian Kerr's report of Newspoll surveying.
The 2013 election, at least, supported the view that purely online-panel pollsters have bigger problems to contend with than landline-only pollsters (and again the Newspoll study above is relevant). Online panel polling, whatever its recruitment method, may have biases that cannot be removed, because online respondents are people who like filling out surveys (often in return for rewards) and are comfortable with technology. Not everyone is like that, and it is a difficult thing to predict by demographic attributes alone, and one that may skew political opinion.
In the 2013-6 parliament, landline-only polling entirely disappeared from the federal scene. All major pollsters now either call mobiles as well, or have included some other kind of surveying (such as online panel surveying) as part of their sampling mix.
Getting a truly random sample of the Australian national population is difficult. Some types of voters are simply much easier to contact than others. One option is to keep contacting potential respondents until you get exactly the right demographic mix. However this can introduce time delays and increase the costs of polling if you are using phone polling. Another option is to "scale" the responses you have by applying corrections based on which groups are under-represented or over-represented in your sample. For instance, suppose that voters in age group A are 10% of the voting population but only 5% of your sample, while voters in age group B are 25% of the voting population but 30% of your sample. A simple scaling method would then be to double the value of each response from group A and multiply the value of each response from group B by 25/30. In practice, scaling is much more complicated and a given response might be scaled on many different criteria at once, some of which might increase its weighting and others of which might decrease it.
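The simple one-criterion version of scaling described above amounts to weighting each response by (population share / sample share) for its group; a minimal sketch using the group A and B figures from the example:

```python
def demographic_weights(population_share, sample_share):
    """Weight each response by population share / sample share for its group."""
    return {g: population_share[g] / sample_share[g] for g in population_share}

weights = demographic_weights(
    {"A": 0.10, "B": 0.25},  # share of the voting population
    {"A": 0.05, "B": 0.30},  # share of the poll's sample
)
print(weights)  # group A responses doubled, group B scaled by 25/30
```

Real pollsters weight on many criteria at once (age, gender, region, education and so on), typically by iterative methods such as raking rather than a single ratio like this.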
Scaling effectively increases the margin of error and bounciness of a poll, because any sampling error in a small group that is scaled up could be magnified in the overall total. There is also a risk that if a demographic group is hard to poll, then the voters who can be polled within that group might not be a fair sample, and that any error caused by that might then be magnified. For instance, young voters are hard to reach using landline polling, excepting those living with their parents. But are young voters who live with their parents representative of all young voters?
Some areas of Australia are simply very difficult to poll accurately by any method. The Northern Territory is one of them. Inner city electorates are also hard to poll because of high rates of enrolment churn and non-enrolment in the electorate.
Internal and External Polling
Many prominent pollsters conduct both "public polls" and "commissioned polls". A public poll is a poll either conducted by the pollster themselves without anyone paying for it, or commissioned by a major media source, for which full details of results are usually publicly released. Although in theory a media source's partisan leanings could lead it to hire a pollster to present results in a good light for its preferred party, there is really no evidence that this happens in Australia.
Commissioned or internal polls are polls paid for by a political party or by a group with an interest in an issue (such as a lobby group or company). Commissioned polls usually ask standard voting intention questions, but it is the choice of the client whether to release results, and it is common for internal polling to be only selectively released (an increasing problem with robo-polling reducing polling costs). Often the full details of commissioned polls are not released.
Some companies produce excellent public polling while also accepting commissioned polls in which the questions are asked in a way more likely to get the result the client wants. Often the client wants a poll that shows strong support for their cause so that they can then get more publicity for their cause and attempt to convince politicians that it is popular.
Just because a pollster does good public polling does not mean their commissioned polls should always be trusted. As a general rule no commissioned poll reported in media should be taken all that seriously, whatever the pollster, without the full release of the verbatim wording of all the questions in the order asked, and an extensive breakdown of results. Even with these things, the wording of the questions often turns out to be suspect. Even if there is nothing wrong with the commissioned poll at all, there is still the possibility of selective release of good polls while not releasing the bad ones. Furthermore, the accuracy of internal polling is prone to morale bias: some parties could be more likely to hire companies that tend to tell them what they want to hear, even when it actually isn't true.
The upfront exclusion rate refers to the proportion of voters who are eliminated from results because they either cannot specify a preference, refuse to answer the question, or fail to complete the survey interview. For most pollsters this proportion is slight to moderate (a few percent to sometimes 10%). In theory if undecided voters had a tendency to shift to a particular party, this could make polls very inaccurate, but there is not much evidence that this issue has bitten in recent elections. Generally, the higher the upfront exclusion rate, the more chance that those voters who do reply are not representative, but this seems to become a serious problem only with polls that upfront-exclude over 10%.
The Green Vote
Most pollsters have a recent track record of usually or always overestimating the Green vote compared to actual election results, especially when the Green vote is fairly high. An especially stark example was the 2014 Victorian state election, in which all 17 polls published in the two months before the election had the party's vote too high, by up to eight points. Part of the reason for this is that the Green vote is actually very soft; there may be other reasons. Small and new pollsters, and pollsters with high undecided rates, are especially prone to this problem. Polling of "others" and "independents" is often also inaccurate. Smaller parties tend to be under-polled if they are not specifically named, while the category "independents" tends to over-perform in polling compared to election results. Voters may offer "independent" as an ambit wish for a good high-profile independent candidate, but they won't vote for one if one isn't on the ballot.
Preferred Prime Minister
Preferred/Better Prime Minister or Premier polling questions are a bugbear of Australian poll commentary, which would probably be more informed if such questions did not exist.
Given that Australian politics is so presidential and that the personal approval/disapproval ratings of the Prime Minister are a driving indicator of change in the 2PP vote, it might be expected that a question about who should be Prime Minister would yield good information. It frequently doesn't. For whatever reason (and it seems to have something to do with the don't-know option), the preferred leader scores of most major pollsters flatter the incumbent. For instance, in Newspoll, if the two parties are tied on the 2PP vote, and their leaders are tied on personal ratings, then the Prime Minister will typically lead the Opposition Leader by 16 points as Preferred Prime Minister. This skewing leads to the media talking about fairly big PPM leads for incumbent PMs as evidence of dominance when they are not, or small PPM leads or deficits as evidence that the government still has an ace up its sleeve when in fact they are evidence of trouble. See Why Preferred Prime Minister/Premier Scores are Rubbish.
The only pollsters that seem to avoid this are ReachTEL (see below) and Morgan SMS.
2PP Preferencing: Last Election vs Respondent Allocated
Most pollsters publish two-party preferred results that are based on the assumption that voters who do not vote for the major parties will distribute their preferences in the same way as at the last election. Many pollsters who do this try to calculate the preference flow for the Greens (and often a few other named parties) separately from other parties, but some (eg Ipsos) use "overall preference flow" which assumes that the average flow from all non-major-party voters will stay the same (even if the proportion of them who vote for the Greens changes.)
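The last-election method described above amounts to a weighted sum of primary votes by assumed preference flows; a minimal sketch with illustrative flow percentages (the numbers here are examples for arithmetic only, not any pollster's actual figures):

```python
def two_party_preferred(primaries, alp_flow):
    """Estimate ALP 2PP share from primary votes and assumed preference flows.

    primaries: party -> primary vote share (sums to ~1.0)
    alp_flow:  party -> fraction of that party's vote flowing to ALP
               (ALP itself is 1.0, the Coalition 0.0).
    """
    return sum(share * alp_flow[party] for party, share in primaries.items())

# Illustrative figures only
primaries = {"ALP": 0.36, "Coalition": 0.42, "Greens": 0.11, "Others": 0.11}
flows = {"ALP": 1.0, "Coalition": 0.0, "Greens": 0.80, "Others": 0.50}
print(f"ALP 2PP: {two_party_preferred(primaries, flows) * 100:.1f}")
```

An "overall preference flow" method collapses Greens and Others into a single non-major bucket with one blended flow rate, which goes wrong if the mix within that bucket changes between elections.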
Some pollsters, however, use respondent-allocated preferences, ie they ask the respondent how they will distribute their preferences. One problem with this is that many voters will actually follow what their party's how-to-vote card says rather than decide for themselves. In any case this method has a history of strong Labor skew and is generally less accurate.
The 2016 federal election reinforced the superiority of last-election preferences, following some recent cases (2013 federal, 2014 Victorian) where the truth was somewhere between the two. In the 2015 Queensland election, last-election preferences proved very inaccurate and it's likely respondent-allocated preferences would have been more predictive for that election, and will be so for some other such elections with very large swings. In the 2015 NSW state election the most conservative estimates of respondent-allocated preferences were accurate. It seems that voter choice about preferencing makes more difference in optional-preferential voting (which now exists only in NSW and the NT) than compulsory, because voters can choose to exhaust their vote.
For a detailed technical discussion at federal level see Wonk Central: The Track Record Of Last-Election Preferences.
Single Polls vs Aggregates
No matter how good a pollster is, no single poll series will consistently give a perfect and reliable picture of voting intention. Aggregating results from multiple polls to get a picture of the whole range of voting intention is usually more reliable than assuming any one poll or poll series is accurate. If you just have one poll saying 52-48, you do not know for sure that the leading party is in front. If you have five with an average of 51.5-48.5, all taken at the same time and without significant house effects, you have a much better idea that the leading party really is in front.
Many people make the mistake of saying that if all the polls are within their margin of error of 50-50 then the race is as good as a tie. Generally, this isn't true: the margin of error applies to each poll individually, and pooling several polls shrinks the effective margin of error, so a small but consistent lead across many polls is usually a real one.
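The point can be made numerically. Assuming the polls are independent, unskewed and simultaneous (a simplification), a pooled average behaves like one poll with the combined sample size, and we can ask how many standard errors the observed lead sits from 50-50:

```python
import math

def lead_z_score(mean_2pp, total_n, p=0.5):
    """Standard errors by which a pooled 2PP average sits above 50-50."""
    se = math.sqrt(p * (1 - p) / total_n)
    return (mean_2pp - 0.5) / se

# One 52-48 poll of 1000: the lead is only ~1.3 standard errors from 50-50.
print(round(lead_z_score(0.52, 1000), 2))
# Five polls averaging 51.5-48.5 (pooled n=5000): ~2.1 standard errors,
# stronger evidence of a real lead despite the smaller headline margin.
print(round(lead_z_score(0.515, 5000), 2))
```

In practice aggregators must also adjust for house effects and poll age, so real models are considerably more complicated than this pooling sketch.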
Different poll aggregates will give slightly different values at any given time because of the nature of the different assumptions made by those running them. Such issues as what weight to place on given polls based on their past record, how quickly to assume a poll has ceased to be relevant and what the hell to do about Essential are not easy and different modellers will try different assumptions, and then modify them when elections provide more data.
A list of active polling aggregators is given at the base of this article.
Poll fraud occurs when a pollster simply makes up numbers, which means it can produce a "poll" without needing to spend time or money surveying anyone. Poll fraud can be detected by various methods, including results that fail non-randomness tests in their fine detail. Poll fraud is a problem at times in the USA. No poll fraud in public polling has been detected in Australia to my knowledge.
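One simple example of a non-randomness test of the kind mentioned above is checking whether the trailing digits of reported results are roughly uniform; fabricated series often show suspicious regularity. A sketch (a screening heuristic only, not proof of fraud, and the threshold is the standard 95% chi-square critical value for 9 degrees of freedom):

```python
from collections import Counter

def last_digit_chi_square(values):
    """Chi-square statistic for uniformity of trailing digits.

    With 10 possible digits (df = 9), values above ~16.92 would occur
    less than 5% of the time by chance in genuinely random data.
    """
    digits = [int(str(v)[-1]) for v in values]
    expected = len(digits) / 10
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# A suspiciously regular series: every value ends in 0 or 5
suspicious = [50, 45, 55, 50, 45, 50, 55, 50, 45, 50] * 5
print(last_digit_chi_square(suspicious) > 16.92)  # True
```

Care is needed with real polls: results rounded to whole or half percentages have constrained digits anyway, so the test is best applied to finely reported figures such as one-decimal percentages.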
Newspoll, house pollster for The Australian, is Australia's best-known polling brand and the one that most seems to influence political debate, election betting market moves, and public comment about party standing.
Between election campaigns it normally polls fortnightly, but sometimes the schedule is adjusted to respond to current events, to coincide with a new parliamentary week, or to avoid long weekends. Also the contracted schedule is actually not quite fortnightly, so sometimes there is a three-week break for no obvious reason. The day of release (either Monday or Tuesday, with first figures becoming known about 10 pm the previous night) is also varied, mainly for the first reason.
Until July 2015, Newspoll was a telephone pollster that dialled randomly selected numbers and only called landlines. In July 2015 the Newspoll brand was transferred away from the company previously running it (which was dissolved, with some key staff moving to start Omnipoll). Now, Newspoll is operated by Galaxy (see below) and is a hybrid pollster using a combination of online panel polling (a la Essential) and robopolling (a la ReachTEL). The robopolling is of landlines only, but the online polling will reach respondents who do not have landlines.
The Newspoll brand has a long history, going back to late 1985, and has asked questions in a very stable form, making it an excellent poll for past data comparisons, although how much these are affected by the mid-2015 methods change remains to be seen. The brand has a predictive record at state and federal elections that is second to none, despite a fairly bad final 2PP figure in 2004 (as a result of a shortlived and incorrect 2PP estimation method). The new Newspoll has performed very well at its first electoral test, including a stunningly accurate final poll. However, far too much attention is still paid to poll-to-poll moves in Newspoll without considering the pattern from other polls. One behavioural change following the switch to Galaxy is that Newspoll seems to have become less bouncy.
An often-discussed aspect of the old Newspoll was its upfront exclusion rates and I wrote a detailed article about that here. Newspoll also attracts a massive volume of online conspiracy theories, most of them left-wing and virtually all of them groundless and daft. Reading a full #Newspoll Twitter feed on a given poll night may cause permanent brain damage, and at least 90% of tweets that mention "Newspoll" and "Murdoch" together are rubbish.
A recent source of silly Newspoll conspiracy theories has been the pollster's habit of hibernating for several weeks over summer. Historically Newspoll has always taken at least four weeks off between polls over the Christmas and New Year periods, usually at least five and in cases eight or more. Also, Newspoll is more likely to take long breaks shortly after an election. In 2011, Newspoll did not emerge until the first weekend in February. In 2008, it polled once in late January (its first poll since the election of the Rudd Government) and then took another four weeks off.
Galaxy Research has been conducting federal polling since the 2004 federal election. Galaxy's federal polling was formerly conducted mainly by random telephone surveying but its polls now use a mix of phone polling (including of mobile phones) and online panel polling. Galaxy appears sporadically between elections and is the house pollster for a string of News Limited tabloids. It is polling less frequently in its own name following its large deal to run the Newspoll brand.
Galaxy has a formidable predictive record and is an uncannily steady (underdispersed) poll. Earlier in its career it appeared to produce slightly Coalition-leaning results between elections, but the lean would go away during the campaign. There is a sharp contrast with Galaxy's specific issue/attribute questions, which (presumably at the behest of sponsoring media) frequently use murky and provocatively subjective language and are often difficult to make accurate sense of.
Galaxy sometimes uses other polling methods. For instance it has been using automated phone polling (robopolling) in seat polls. At the 2016 election, these polls were notably underdispersed - they were not only much less variable than the actual results, but less variable than they would have been expected to be even if there was no difference between seats.
Galaxy was in my view the best pollster of the 2013 federal election campaign and lead-up, and the Galaxy/Newspoll stable shared this honour again for 2016.
ReachTEL is the most commonly encountered "robopoll" and is now regularly used by Channel Seven and various newspapers. It is by far the most commonly commissioned poll. A robopoll conducts automatic phone calls to randomly selected landline and mobile phones, and respondents pick answers according to options stated by a recorded voice. Robopolls are cheap to run and can be conducted very quickly, but have the disadvantage that more voters will hang up on them immediately. Therefore they require a lot of scaling, which in theory increases the chance of errors.
ReachTEL has now established itself as a reliable and accurate national and state-level pollster, being among the top few pollsters at all elections in the last few years, including being the best statewide pollster at the 2015 NSW state election and the most accurate pollster of primary votes in Queensland in 2015. Its electorate-level public federal polling has sometimes been skewed to the Coalition.
ReachTEL forces answers to some questions, often disallowing an undecided option and requiring the respondent to choose one option or the other. This results in preferred Prime Minister figures that are often closer to the national two-party vote than those of other pollsters. It also produces much worse ratings for the government on issues questions. The suggestion is that there are many people who have slightly negative views of the government but will let it off with a neutral rating unless forced. Forcing can cause voters to hang up but the company advises me that the percentage of hangups after the first question is very small.
ReachTEL leadership performance ratings use a middle option of "satisfactory" which seems to capture some mildly positive sentiment. For this reason ReachTEL ratings when expressed in the form Good vs Poor seem harsher than those of other pollsters. Lately the gap between ReachTEL and other ratings in this regard seems to be closing.
Cases of ReachTEL calling voters who do not live in the electorate being surveyed are often reported. It appears this is a product of a small error rate but a very large number of total calls.
Ipsos is a global polling brand with a good reputation. Fairfax Ipsos started in late 2014 and is a live phone poll that samples both landlines and mobiles and operates on a similar scale and frequency to the former Fairfax Nielsen polls. Initially Ipsos appeared to lean somewhat to the Coalition but this seems to have abated. The poll's biggest issue is that it persistently has the Green vote much too high and the ALP primary too low. It is also somewhat bouncier than other national polls, largely because of its smaller sample sizes. At elections so far it tends to have performed well on the 2PP vote but not so well on the primaries. Prior to the start of the Fairfax polling, Ipsos conducted some other polls under the name Ipsos i-view.
Essential Report is a weekly online poll and the house pollster for the Crikey website's subscriber newsletter. Essential's respondents are selected from a panel of around 100,000 voters, and about 1000 are polled each week, by sending out an email offering small shopping credit rewards for participation. Unusually, Essential publishes rolling results that combine each week's sample with the previous week's. The purpose of this strategy is to reduce bouncing and the impact of brief kneejerk reactions on the poll.
In its very early days Essential was a very bouncy and Labor-skewed poll that was pretty much useless, but it made changes at some stage in 2010 and delivered a good result at that year's election. However, the poll still seems to have some problems. It too is underdispersed (see Essential: Not Bouncy Enough), but in a way that seems to cause it to become "stuck" and to respond slowly and incompletely to big changes in voting intention, as compared to other pollsters. Quite why this is so is not entirely clear - it could be to do with the composition of the panel or with repeat sampling issues within it (against which some precautions are taken). Essential also sometimes displays a very different trend pattern to other pollsters. Its performance in the 2013 election leadup was idiosyncratic. At the 2016 election it produced an impressive final-week poll but doubts remain about its tracking behaviour.
Essential asks a very wide range of useful attribute and issue based questions that often help to drill down into the reasons why voters have specific attitudes, that in turn underlie their votes. These are sometimes marred by high don't-know rates, which are an inescapable problem with online polling formats.
Roy Morgan Research is a polling house that traces its lineage back to Morgan Gallup polls conducted from the early 1940s. The very experienced pollster was formerly the house pollster for The Bulletin magazine (which no longer exists), and suffered badly when it predicted a Labor win in 2001. Now unattached to any specific media, Morgan is not as much discussed as other pollsters, but the lack of media attachment is not the only reason for that. Morgan's polling is confusing and unreliable, often not sufficiently documented, and its reputation among poll-watchers has declined in recent years.
Various forms of Morgan polls are seen including the following:
* SMS only mobile phone polling (mainly used for state polls, also for some issues polling)
* Telephone polls (mainly used for leadership polling)
* Multi-mode polls (most recently a mixture of face-to-face surveying and SMS polling)
Other combinations of multi-mode polling have been seen in the past, and at one stage Morgan used to issue a lot of pure face-to-face polls, which skewed heavily to Labor. Morgan also usually uses respondent-allocated preferencing, which can also create skew to the ALP. The pollster has recently displayed severe skew to the Greens in its primary votes, and some of its local panels may be unrepresentative. The small sample size of its state polls of the smaller states is another problem - Tasmanian samples are sometimes reported in the media, but with a sample size of around 300, why bother?
Morgan's multi-mode polls that include a face-to-face component have often skewed to Labor, but skewed to the Coalition for a while after Malcolm Turnbull first became Prime Minister.
Morgan polls seem to be very reactive to "news cycle" events. SMS sampling (apparently drawn from a panel rather than random selection from the whole population) is probably too prone to "motivated response", with responses from voters who have strong views about the issues of the day being overrepresented in the results. My view is that SMS is a suspect polling method.
In the leadup to the 2016 election, Morgan issued many seat-by-seat results, frequently based on tiny sample sizes and often accompanied by unsound interpretation. The pollster also stopped releasing national polling in the last month of the campaign, making it impossible to benchmark its performance for the future, and its future polling intentions are unclear. Finally, in recent state elections Morgan's SMS polling has been absurdly volatile.
As a general rule, Morgan polls should be treated with a lot of caution at all times.
Lonergan is another robopollster that has fairly recently moved into public polling (and has also done a few internal polls for the Greens and other left-leaning entities).
Lonergan had a poor 2013 campaign with its seat polls showing a massive Coalition skew and a commissioned mobile-phone-only poll proving very inaccurate (perhaps because its sample size was too small). Its final 2016 federal poll, however, was quite accurate despite being taken nearly two months before the election. Its NSW state election polls showed skew to the Coalition and results from its commissioned seat polls have been mixed.
Lonergan initially attracted criticism for scaling results to voter reports of how they voted at previous elections. Some voters may not report their past voting behaviour accurately, and may over-report voting for the winner, as a result of which polling becomes skewed towards the other side. I am not aware of the poll still employing this method.
JWS Research is another relatively recent robopollster. It conducted a massive marginal-seat poll at the 2010 federal election with indifferent predictive results on a seat basis (but an excellent overall 2PP estimate) and a similar exercise at the 2010 Victorian state election with excellent results. In the 2010-3 term it was notable for a string of aggregated marginal seat mega-polls, including some jointly commissioned by AFR and a Liberal Party linked strategic advice/lobbying firm called ECG Advisory Solutions. These polls were often blighted by the release of clearly unsound seat projections based on them, but that is not the fault of the data. JWS also conducted many local-level seat polls at the 2013 campaign. Electorate-level polls released by JWS during the 2013 campaign showed a strong general lean to the Coalition. It is likely that the series of aggregated marginal polls experienced the same issue.
In the 2013-6 cycle JWS kept a lower profile, but it releases very thorough and useful issues polling every four months in an omnibus called True Issues.
See previous edition.
MediaReach is another IVR pollster (robopollster) that is reported as being owned by a firm with several years' experience in the field. It has done, for example, state polling of WA and the NT and an electorate poll of Mackellar. At both the Northern Territory election and the election for the seat of Solomon, MediaReach overestimated the large swing against the CLP by about five points.
Metapoll is a new online pollster sometimes published in the Guardian. It is also the author of a deluxe polling aggregate that initially included its own unpublished data, though this was later removed except in the area of preferencing.
Research Now is an online panel pollster similar to Essential. It has produced a fair amount of mostly commissioned issues polling but does not seem to have published any voting intentions polling prior to elections, so its accuracy in Australia is unknown.
Community Engagement produced one national commissioned poll and some commissioned seat polls at the 2016 election. Documentation is so inadequate that it is not even clear what kind of pollster it is. Early results were not accurate.
EMRS is a Tasmanian pollster that has surveyed state and federal voting intention in Tasmania since the late 1990s, and sometimes does commissioned voting polls inside and outside the state. It is best known for quarterly state voting intention polling. It is a phone pollster calling landlines, formerly on a random basis but I believe it has since become a panel pollster.
EMRS can be a difficult pollster to make sense of because its undecided rates are much higher than those of other pollsters, even after the initial prodding of unsure voters to say which party they are leaning to. (Antony Green's partial defence of the company's high undecided rates here was refuted here.) At past state elections this has tended to make the pollster overestimate the Green vote and underestimate the Labor vote by a few points each time. On this basis a Labor majority was more or less written off (except by psephologists) in the lead-up to the 2006 state poll, but that was the eventual result.
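The way a high undecided rate can skew headline figures can be illustrated with a toy calculation. All the numbers below are invented for illustration and are not EMRS figures; the point is simply that rescaling the decided vote assumes undecideds split like everyone else, whereas if undecideds actually break mostly to one party (as "soft" major-party voters often do), the rescaled figures will overstate the minor-party vote and understate that party's vote:

```python
# Toy example (invented numbers): a poll reports primaries that sum to
# 80%, leaving 20% undecided.
decided = {"ALP": 28.0, "LIB": 36.0, "GRN": 16.0}
undecided = 100.0 - sum(decided.values())  # 20.0

# Naive approach: rescale decided votes to 100%, i.e. assume undecideds
# split the same way as decided voters.
total = sum(decided.values())
rescaled = {party: votes * 100.0 / total for party, votes in decided.items()}

# Alternative: assume (hypothetically) undecideds break 60/30/10 to
# ALP/LIB/GRN and allocate them accordingly.
breaks = {"ALP": 0.6, "LIB": 0.3, "GRN": 0.1}
adjusted = {party: decided[party] + undecided * breaks[party] for party in decided}

print(rescaled)  # ALP 35.0, LIB 45.0, GRN 20.0
print(adjusted)  # ALP 40.0, LIB 42.0, GRN 18.0
```

Under the hypothetical break, naive rescaling overstates the Green vote (20 vs 18) and understates Labor (35 vs 40), the same directional pattern described above.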
Nielsen (No longer active)
See previous edition.
Others will be added here as I come across them or on request.
Online or TV News "Polls": They're Useless!
Ah, but what about those polls on newspaper websites or Yahoo that give you the option of voting for a stance on some hot-button issue? What about those TV news polls that ask you to call a number for yes or a different number for no?
The short answer is that these are not real polls. They are opt-ins, and they are not scientifically valid as evidence of voter intentions. For one thing, as regularly noted in the fine print, they only reflect the views of those who choose to participate. If a media source tends to be read more by right-wing voters, then its opt-in polls will tend to attract more votes from right-wing voters.
Secondly, opt-ins suffer from "motivated response". People who care deeply about an issue will vote on them, but people who really don't have a strong view (but might answer a question put in a real poll that they've agreed to spend time on) will probably not bother.
Thirdly, opt-ins are prone to co-ordinated stacking. Activist networks will send messages by email or social media telling people there is a media poll they can vote in, and this will often lead to votes being cast from way outside the area to which the poll relates. Opt-ins are easily flooded by this method, producing very skewed results.
Finally, opt-ins are often prone to deliberate multiple voting by single voters, either by people with strong views on an issue who want to manipulate the outcome, or by people who want to ruin them precisely because the results are taken far too seriously. There are ways to try to stop this, some more effective than others. (See in this regard the brilliant work of Ubermotive, and also the guide to how to stop it here.)
It is especially unfortunate that the ABC's Lateline employs "polls" of this kind. They should know better.
I hope this guide is useful; feedback is very welcome.
Poll Quality Reviews
The following pieces on this site have compared the performance of different polls at a specific election:
2016 Federal Election: Best And Worst Pollsters
New South Wales 2015
2013 Federal Election: Best And Worst Pollsters
Poll Aggregators
* My own, in the sidebar of this site (methods post here). This is a relatively quick model, aggregating 2PP results using published 2PPs and primaries, and designed for fast updating as new polls come out. It includes adjustments for accuracy and house effect.
* Bludgertrack. This is the best known aggregator. It incorporates state-level polling data to predict seat tallies and recorded an extremely accurate seat and 2PP projection at the 2013 federal election. It derives its 2PP figures from adjusted primary figures rather than aggregating released 2PPs.
Andrew Catsaras formerly did the Poll of Polls segment on ABC's Insiders and now and then posts his aggregate, which provides a monthly rounded 2PP figure and now primary estimates.
Several other aggregators operated during the 2016 election cycle and links and comments on them will be added if they resurface. New aggregators may also be added.
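The basic idea common to the aggregators above can be sketched roughly as follows. This is an illustrative toy only, not the model actually used on this site or by Bludgertrack; the pollster names, 2PP figures, weights and house-effect values are all invented. Each poll's published two-party-preferred figure is corrected for an assumed house effect (a persistent lean towards one side), then combined in a weighted average, where weights might reflect sample size, recency and track record:

```python
# Hypothetical recent polls: (pollster, published ALP 2PP %, weight).
polls = [
    ("Pollster A", 52.0, 1.0),
    ("Pollster B", 50.5, 0.8),
    ("Pollster C", 51.5, 0.6),
]

# Assumed house effects in percentage points (positive = poll leans to ALP).
house_effects = {"Pollster A": 0.5, "Pollster B": -0.3, "Pollster C": 0.0}

def aggregate_2pp(polls, house_effects):
    """Weighted mean of house-effect-adjusted 2PP figures."""
    total_weight = sum(weight for _, _, weight in polls)
    adjusted_sum = sum(
        (tpp - house_effects.get(name, 0.0)) * weight
        for name, tpp, weight in polls
    )
    return adjusted_sum / total_weight

print(round(aggregate_2pp(polls, house_effects), 1))  # prints 51.3
```

A real aggregate is considerably more involved (it may work from adjusted primary votes rather than published 2PPs, smooth over time, and weight by historical accuracy), but the house-effect correction and weighted averaging shown here are the core of the approach.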