Nuclear power, Newspoll and the nuances of polled opinion

Is the Australian’s polling and commentary doing the opposition any favours?

Opinion polls emerged in the United States with the rise of “objective” journalism after the first world war — or, more precisely, with the rise of objectivity as an ideology, as Michael Schudson argues in Discovering the News, his landmark social history of American newspapers. Central to the rise of objectivity was “the belief that one can and should separate facts from values.” But “facts,” here, were not “aspects of the world.” Rather, they were “consensually validated” claims about the world, to be trusted because they conformed with “established rules deemed legitimate by a professional community.”

While not mentioned by Schudson, nothing spoke to the rise of “objective journalism” more clearly than the rise of “scientific” polling: the attempt to document “the voice of the people” based on interviews that, in principle, gave every citizen an equal chance of being heard, of saying what they had to say, via questions free of bias, that bane of objectivity.

George Gallup, a figure central to the spread of polling, presented poll-takers, in his polling manifesto The Pulse of Democracy (1940), as people “moving freely about all sorts and conditions of men and noting how they are affected by the news or arguments brought from day to day to their knowledge.” Gallup took this model from James Bryce’s The American Commonwealth (1888), but his own polling, with its set questions and predetermined response categories, was far removed from the kind of observation Bryce favoured.

In reality, Gallup followed a news-making model — the model exemplified by press conferences and media releases, where news is made for the press without being controlled by the press. Gallup not only created news, controlling what was asked, how it was asked and when; he also syndicated his results to a broad range of newspapers. Having his polls published by papers whose politics ranged widely shored up his claims to objectivity.

A parallel existed with the Associated Press, America’s first wire service. Since it “gathered news for publication in a variety of papers with widely different political allegiances,” Schudson notes, “it could only succeed by making its reporting ‘objective’ enough to be acceptable to all its members and clients.”

While servicing a diverse range of outlets was central to Gallup in America, this is not what happened in Australia. When Keith Murdoch introduced the Gallup Poll here in 1941 he made sure that the company he set up to run it was controlled by his own Herald and Weekly Times and its associates in various states. Although Australian Public Opinion Polls (“The Gallup Method”) was notionally independent, executives from the Herald and Weekly Times, including Murdoch, could (and did) influence the questions Roy Morgan, APOP’s managing director, asked, including whether they should be repeated from poll to poll.

Whereas the American Gallup boasted subscribing newspapers that were Republican (as Gallup himself may have been), Democrat and independent, none of the newspapers that subscribed to the Australian Gallup Poll are likely to have ever editorialised in favour of federal Labor; for many years, Morgan himself was an anti-Labor member of the Melbourne City Council.

Much of the polling done in America and later in Australia, however, fits a third model: things that the press creates either directly (in-house polling; for example, of a newspaper’s own readers) or indirectly (by commissioning an independent market research firm to ask questions on the newspaper’s behalf). Media products that fit this category range from Clyde Packer’s creation of the Miss Australia contest in the 1920s (also copied from America) and the Australian Financial Review’s endless business “summits” in the 2020s, to the media’s ubiquitous sit-down interviews with politicians and celebrities. This is now the dominant model.

Creating news is the surest route to having an “exclusive” and creating “product differentiation.” If the “exclusive” is produced often enough, is highly valued, and prominently flagged — polling is now featured on the front page — it becomes a way of building “brand loyalty.” Newspapers that regularly commission polls from the same source, or that have a regular but non-financial relationship with a pollster, hope for all of this. Media that don’t commission their own polls — television and radio, especially — are often happy to recycle polls published in the press.

Brand loyalty is a way of building a readership. When it comes to polling, it generally means not citing polls generated by competing brands — especially polls that could raise doubts about one’s own polls. Where different polls produce different — even conflicting — results, this usually means that the rules of objectivity that require journalists to confirm their stories using more than one source are readily abandoned. While some newspapers are more brand-focused than others, journalists consulting their own polls and not others has become standard practice.

In polling, the strength of any brand — the reputation of the poll — depends on the prestige of the news outlet that publishes it. It also depends on the poll’s record, and that record is assessed against the few objective measures that exist: election results and referendums.

Polls that score well on these measures are more likely to be trusted on things other than the vote. That, at least, is the hope of the companies that poll for the press or have their polls publicised by the press. Companies involved in the prediction business try to ensure that their polls come as close as possible to predicting the actual vote — closer, certainly, than any of their rivals.

What pollsters hope to be trusted on, as a result of their accuracy on these measures, is everything else they do for the press — notably, reporting on the popularity of party leaders and taking “the pulse” (as Gallup liked to say) on issues of public policy. More than that, they are after a spillover or halo effect for their market research businesses more generally; financially, this is the point of involving themselves in the not particularly lucrative business of predicting votes. Trust is important because what companies report on matters other than the vote typically cannot be checked directly against any external measure.

Absent any objective check, there is always a risk of polling that panders, consciously or otherwise, to the client’s agenda or the pollster’s preferences. Against this happening, the guardrails erected by industry bodies like the relatively new Australian Polling Council or the old (Market) Research Society are either weak or non-existent — the APC mostly concerned that pollsters explain their methods and post their questionnaires online, a very welcome development but one that stops well short of setting wide-ranging standards in relation to the questions members ask; the Research Society mostly concerned to reassure respondents about the way polling companies protect their privacy.

Newspoll — and other polls

Enter Newspoll, a brand owned by Rupert Murdoch’s News Corp. Established for a high-end newspaper, the Australian — whose news and views are seen by some as exerting an out-size influence on conservative politics — Newspoll can claim a record of predicting national elections second to none.

In the course of conducting its most recent poll — a fortnightly event that usually grabs the headlines for what it has to say about national voting intentions, leadership satisfaction and preferred prime minister — Newspoll raised the issue of nuclear power. “There is a proposal to build several small modular nuclear reactors around Australia to produce zero-emissions energy on the sites of existing coal-fired power stations once they are retired,” Newspoll told respondents (emphasis in the original). It then asked: “Do you approve or disapprove of this proposal?” Respondents were invited to select one answer: “Strongly approve” (22 per cent); “Somewhat approve” (33 per cent); “Somewhat disapprove” (14 per cent); “Strongly disapprove” (17 per cent); “Don’t know” (14 per cent). In short: 55 per cent in favour; 31 per cent against; 14 per cent not prepared to say either way.

As Newspoll might have anticipated on an issue as contentious as this, its question generated controversy. Unimpressed, the economist John Quiggin proposed — tongue-in-cheek — a quite different way the question might have been worded: “There is a proposal to keep coal-fired power stations operating until the development of small nuclear reactors which might, in the future, supply zero-emissions energy. Do you approve or disapprove of this proposal?”

A question on nuclear power could have been asked in any number of ways: by putting the arguments for and against nuclear power; by taking the timeline for getting nuclear power up and running and comparing it to the timeline for wind + solar + hydro; by asking who should pay (governments, consumers, industry, etc.) for different forms of energy with zero emissions, and how much they should pay; by qualifying the “zero-emissions” solution with some reference to the waste disposal problem; by omitting the words “small, modular” — not just descriptors but, potentially at least, words of reassurance; and so on.

Different questions might still have produced a majority in favour of nuclear energy. A question asked for the Institute of Public Affairs by Dynata, in April 2022, on whether Australia should build nuclear power plants “to supply electricity and reduce carbon emissions,” found a majority in favour (53 per cent agreeing) and an even lower level of opposition (23 per cent).

As with Newspoll, the IPA poll raised considerations that invited an affirmative response: “small modular,” “zero-emissions energy,” “on the sites of existing coal-fired power stations once they are retired” (Newspoll); “to supply electricity,” “reduce carbon emissions” (IPA). Neither poll included a single consideration that might have prompted a negative response.

The high proportion in the IPA survey neither agreeing nor disagreeing (24 per cent) — an option Newspoll didn’t offer — allowed respondents who actually had an opinion to conceal it, Swedish research on attitudes to nuclear power suggests. So, while the level of opposition recorded by the IPA might have been higher without the “easy out,” the level of support might have been higher too.

Other questions about nuclear power failed to attract majority support. Asked in September by Freshwater “if Australia needs nuclear power” (the precise question was not published), and presented with a set of response options similar to those offered by the IPA, 37 per cent of respondents supported nuclear power and 36 per cent opposed it, 18 per cent saying they were “neutral” and 12 per cent “unsure.” Apart from coal (supported by 33 per cent), every other energy source received wider support: hydrogen (47 per cent), natural gas (56 per cent), offshore wind (58 per cent), onshore wind (61 per cent) and solar (84 per cent).

Asked in the same poll whether “Australia should remove the ban on nuclear power development,” 44 per cent agreed. But asked whether they agreed or disagreed that “Australia does not need to generate any energy from nuclear power,” 36 per cent disagreed. Similarly, no more than 35 per cent agreed that “the federal government must consider small nuclear modular reactors as part of the future energy mix” — a much lower figure than Newspoll’s, even if the question isn’t necessarily better.

Freshwater also asked respondents to choose between two trade-offs: “Australia builds nuclear power plants meaning some coal power plants are replaced earlier” (44 per cent chose this one) and “Australia does not build nuclear power plants meaning some coal power plants are extended” (38 per cent); 18 per cent were “unsure.” Respondents opposed to both coal and nuclear power were left with only one place to go — “unsure.” But on the poll’s own evidence — 33 per cent supporting coal, 36 per cent supporting nuclear — the figure of 18 per cent appears to underestimate this group considerably.

Another question on nuclear power, this time asked by RedBridge, is said to have shown a 35–32 split over “the idea of using nuclear to provide for Australia’s energy need.” As yet, however, neither the question nor any figures have been posted on its website.

Yet another question, asked in February by Resolve for the Sydney Morning Herald and the Age, also failed to show majority support for nuclear power. Told that “there has been some debate about the use of nuclear power in Australia recently” and asked for their “own view,” respondents split four ways: “I support the use of nuclear power in Australia” (36 per cent); “I do not have a strong view and am open to the government investigating its use” (27 per cent); “I oppose the use of nuclear power in Australia” (25 per cent); and “Undecided” (15 per cent).

In reporting this “exclusive survey,” David Crowe, chief political correspondent for the two papers, made no reference to the Newspoll published the previous day. This, notwithstanding that in reporting the Resolve poll Crowe gave pride of place to “mining billionaire” Andrew Forrest’s attack on the Coalition’s nuclear policy — a policy the Australian suggested had received a “boost” from the Newspoll. Nor did Crowe refer to any other poll.

On one reading, most respondents (61 per cent in the Resolve poll compared to 39 per cent in Newspoll) had “a strong view” (the respondents who declined to say “I do not have a strong view…”), those without “a strong view” either being “open to the government investigating” the use of nuclear power or “undecided.” More likely, the question didn’t measure how strong any of the views were — some of those without strong views being “open to the government investigating its use,” others joining those who harboured strong views (respondents Resolve didn’t directly identify) to indicate either their support or their opposition to nuclear power.

Effectively, the Resolve poll rolled three questions into one — one, about support or opposition to nuclear power; another about the strength of these opinions; and another about “the government investigating” the “use” of nuclear power. But since responses to one of these questions would not necessarily have determined responses to any other, Resolve’s shortcut obscures more about public opinion than it illuminates; a respondent with a strong view, for example, might still have been “open to the government investigating its use.”

In October 2023, Resolve asked another question — this one reportedly commissioned by the consulting firm Society Advisory, and run “exclusively” by Sky News. The result suggested a degree of openness to nuclear power even higher than that indicated by Resolve’s poll for the Age and Sydney Morning Herald. Asked if “Australia should rethink its moratorium (ban) on nuclear power to give more flexibility in the future,” half (49 per cent) of the respondents were in favour and fewer than half that number (18 per cent) were against (opposing “flexibility” implies an opinion of some strength), with an extraordinary 33 per cent “unsure” — a sign that this question too was a poor one.

Not only do answers depend on the question; they also depend on the response options. In an extensive survey — not just a one- or two-item poll — conducted in October–November 2023, the British firm Savanta asked respondents “to what extent, if at all,” they supported or opposed using nuclear energy “to generate electricity” in Australia. While 40 per cent said “strongly support” or “tend to support,” 36 per cent said “strongly oppose” or “tend to oppose,” 7 per cent said “Don’t know,” and 17 per cent said they “neither support nor oppose.”

As with the Resolve poll for the Age and Sydney Morning Herald, Savanta’s response options — which included “neither support nor oppose” — reduced the chance that its question, however worded, would yield a majority either in favour of nuclear energy or against it; almost as many opposed nuclear energy as supported it, a quarter (24 per cent) choosing to sit on the fence. In the Newspoll, where 55 per cent approved and 31 per cent disapproved, there was no box marked “neither approve nor disapprove.” If there had been, then almost certainly Newspoll would not have found majority support either.

The Savanta survey also shows what happens to support for a single option — here, nuclear power — when respondents are given a range of options. Asked to think about how their “country might shift its current energy generation mix” and given a list of five alternatives, only 23 per cent nominated “nuclear energy”; 41 per cent, almost twice as many, nominated “large-scale solar farms.” Of the rest, 15 per cent nominated “onshore wind farms,” 6 per cent “gas with carbon capture and storage (CCS),” and 4 per cent “biomass from trees.”

Newspoll made no attempt to ascertain whether the public had heard of “small modular nuclear reactors,” much less what the public knew about such things. In the Guardian, the proposal was described as “an uncosted Coalition thought-bubble”; in the Lowy Institute’s Interpreter, former deputy Reserve Bank governor Stephen Grenville noted that there were “just two operational SMRs, both research reactors” and that work on what “was expected to be the first operational commercial SMR” had “been halted as the revised cost per kWH is uneconomic for the distributors who had signed up.” Elsewhere, an academic specialising in electricity generation described SMRs as “not, by any stretch of the imagination, what most people would consider small.”

On what the public knows — or, more accurately, on how much it thinks it knows — the Savanta survey is again useful. When asked what they had heard of nuclear energy, few (8 per cent) said “I have not heard about this energy option” or “don’t know.” But just 18 per cent said “I have heard about this energy option, and know a lot about how it works.” Most said “I have heard about this energy option, and know a little about how it works” (41 per cent) or “I have heard about this energy option, but don’t know how it works” (33 per cent).

In a poll conducted by Pure Profile, reported in May 2022, 70 per cent said they didn’t understand “the difference between nuclear fission and nuclear fusion.”

… and the Australian

Keen to publicise the result of its Newspoll — a result it openly welcomed — the Australian reported the poll, and commented on it, tendentiously.

The distinction between respondents’ having a view and their having a “strong” view was one it mostly ignored or fudged. The paper’s political editor Simon Benson, reported in Crikey to be “responsible” for the poll, ignored it. He repeatedly represented “majority” support as “strong” support. The fact that pollsters themselves regularly make this mistake shouldn’t make it any more acceptable. If support is a metre wide, it isn’t necessarily a metre deep.

The headline in the print edition — “Powerful Majority Supports Nuclear Option for Energy Security” — fudged the distinction. In itself, 55 per cent is not an overwhelming majority; in 2017, same-sex marriage was supported in the nationwide “survey” by 62 per cent. Nor is 55 per cent a “powerful” number — one that politicians ignore at their peril; in the lead-up to the same-sex marriage decision, both John Howard and Tony Abbott made it clear that they wouldn’t consider anything less than 60 per cent in favour to be a number that the parliament would have to heed. Had 55 per cent (not 22 per cent) “strongly” approved nuclear reactors, the Australian would have had a defensible case. But even in polls that offer a binary choice, “strong” majorities are rare.

Rather than representing a “powerful majority” in favour of the “nuclear option,” Newspoll’s figures might equally be said to show that most respondents (61 per cent) did not feel strongly one way or the other — a majority that the Australian would not have wanted to call “powerful.”

A highlight, Benson argued, was the fact that respondents aged eighteen to thirty-four — “the demographic most concerned about climate change” — were the demographic most likely to support nuclear power, 65–32. “There is no fear of the technology for most people under 40,” he concluded. This line was one that impressed the shadow climate change and energy minister, Ted O’Brien, when he discussed the poll on Sky News.

It also resonated with opposition leader Peter Dutton. Attacking the prime minister for being out of touch with public opinion, which he was reported to have said was “warming to nuclear power,” Dutton noted that nuclear power was “supported by a lot of younger people because they are well-read and they know that it’s zero emissions, and it can firm up renewables in the system.”

The news that “NewsPoll [sic] showed a majority of young Australians supporting small-scale nuclear power generation,” even prompted a discussion of the pros and cons of nuclear power — not the pros and cons of the polling — on the ABC.

But eighteen- to thirty-four-year-olds as the age group most favourably disposed to nuclear power is not what Essential shows, not what Savanta shows, and not what RedBridge shows. In October’s Essential poll, no more than 46 per cent of respondents aged eighteen to thirty-four supported “nuclear power plants” — the same proportion as those aged thirty-five to fifty-four but a smaller proportion than those aged fifty-five-plus (56 per cent); the proportion of “strong” supporters was actually lower among those aged eighteen to thirty-four than in either of the other age-groups.

In the Savanta survey, those aged eighteen to thirty-four were the least likely to favour nuclear energy; only about 36 per cent were in favour, strongly or otherwise, not much more than half the number that Newspoll reported.

And according to a report of the polling conducted in February by RedBridge, sourced to Tony Barry, a partner and former deputy state director of the Victorian Liberal Party, “[w]here there is support” for nuclear power, “it is among only those who already vote Liberal or who are older than 65.”

In the Australian, the leader writer observed that “public support for considering nuclear power in Australia is rising as the cost and implications of meeting the decarbonisation challenge becomes more real.” But Newspoll had never sought to establish what respondents think the “cost and implications of meeting the decarbonisation challenge” are, so it could hardly have shown whether these thoughts had changed.

Benson’s remark, on the Australian’s front page, that the poll showed “growing community support” for nuclear power was also without warrant; “growing community support” is something that the poll does not show and that Benson made no attempt to document. Since the question posed by Newspoll had never been asked before, and since polled opinion is sensitive to the way questions are asked, “growing community support” is one thing the poll could not show.

Subsequently, Benson cited Liberal Party polling conducted “immediately after the [May] 2022 election loss” which “had support at 31 per cent.” The question? Benson doesn’t say. Is it really likely, as Benson believes, that in a “short space of time,” as he describes it — less than two years — support for nuclear power could have jumped from 31 per cent to 55 per cent? The considerable shift in polled opinion on same-sex marriage that Wikipedia suggests happened sometime between 2004 and 2007 is hardly likely to have happened since 2022 in relation to nuclear energy.

Peta Credlin, Australian columnist and Sky News presenter, argued the growing-support line by stringing together: a poll conducted in 2015 (by Essential, though she didn’t identify it as an Essential poll), which had support at 40 per cent; the IPA poll (which it was safe to name) from 2022, which had support at 53 per cent; and the Newspoll, which had it at 55 per cent. Each of these was conducted by a different pollster, and hence subject to different “house effects”; each had also posed its own question.

Had the Australian wanted to see whether support really was growing it might have considered re-running one of the questions it had asked years before — or, preferably, re-run more than one. But perhaps the point of the polling was not to show that support was growing but to create the impression that it was growing — that it had a momentum that might leave Labor, “in its fanatical opposition to nuclear power,” as Benson wrote, stranded on “the wrong side of history.”

This was not the first time the Australian has interpreted the results of a Newspoll as heralding a turning point on this issue. In 2007, shortly before prime minister John Howard announced that the Coalition would set up a nuclear regulatory regime and remove any unreasonable impediments to the building of nuclear power plants in Australia, the Australian told its readers that there had been a “dramatic shift” in support for nuclear power. The basis of its claim: questions asked by Newspoll — two in 2006, one in 2007. (In those days Newspoll was a market research company, not a polling brand whose field work had been outsourced first to YouGov and more recently to Pyxis.)

The questions asked in 2006 were not the same as the question asked in 2007. In May and December 2006, Newspoll told respondents: “Currently, while there is a nuclear reactor at Lucas Heights in Sydney used for medical and scientific purposes, there are no nuclear power stations being built in Australia.” It then asked: “Are you personally in favour or against nuclear power stations in Australia?” The majority was against: 38–51, in May; 35–50, in December.

In March 2007, Newspoll changed the question, and framed it quite differently: “Thinking now about reducing gas emissions to help address climate change,” it asked, “are you personally in favour or against the development of a nuclear power industry in Australia, as one of a range of energy solutions to help reduce greenhouse gas emissions?” On this, opinion was fairly evenly split: 45–40. The majority were not against; in fact, there was a plurality in favour. The Australian’s interpretation: in just four months, Dennis Shanahan and Sid Marris concluded, the attitude of Australians to nuclear energy had “dramatically reversed.”

Not so. After commissioning Newspoll to ask the 2006 question again, in April 2007, the Australia Institute found that the level of support for “nuclear power stations being built in Australia” was 36 per cent (35 per cent in December 2006), the level of opposition was now 46 per cent (previously, 50 per cent), and the “don’t knows” were now 18 per cent (previously 15 per cent). In short, whereas opposition had exceeded support by fifteen percentage points, 50–35, it now exceeded support by ten points, 46–36 — a decline of five points, but no reversal, dramatic or otherwise.

This time around, both the Australian Financial Review and the Sydney Morning Herald have asked questions similar to the one Newspoll asked in February, but in polls of their readers, not in a public opinion poll. Asked, in July 2023, whether Australia should “consider small nuclear reactors as one solution to moving away from fossil fuels,” the Financial Review’s readers favoured “consider[ing]” the idea, 58–30. Asked, in July 2023, whether “small nuclear power reactors should be part of Australia’s energy mix,” the Herald’s readers opposed the idea, 32–55. Even if these questions had been included in national polls, the Australian might have baulked at citing the results of either, since it would have given oxygen to another brand.

There is evidence of a growth in support for nuclear power between June 2019 and March 2022, but there is no convincing evidence that points to “growing support” in the two years since. When the Lowy Poll asked respondents, in March 2022, whether they supported or opposed “removing the existing ban on nuclear power,” 52 per cent said they supported it, an increase on the level of support in March 2021 (47 per cent). And in September 2021, when Essential asked respondents whether they supported or opposed “Australia developing nuclear power plants for the generation of electricity,” 50 per cent said they supported nuclear power, a sharp increase on the level of support (39 per cent) it reported in June 2019. However, when Essential asked the question again, in October 2023, the level of support hadn’t moved.

The only evidence for a recent shift comes from Resolve. In October 2023, when Resolve first asked the question it asked in February 2024, 33 per cent (compared with 36 per cent in February) supported “the use of nuclear power” and 24 per cent (23 per cent in February) opposed it. (Nine Entertainment appears not to have previously published Resolve’s result for October.) Its February poll represents an increase of four percentage points in the gap between the level of support and the level of opposition, from nine points to thirteen.

But a shift of four points is well within the range one might expect given the vagaries of sampling — the “margin of error” that pollsters regularly parade but just as regularly ignore. Non-sampling error — a much bigger problem than pollsters acknowledge — also might have played a part, especially given a question as complex and confused as the one Resolve asked. Errors of both kinds are compounded by the widespread use by pollsters of opt-in rather than probability-based panels.
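None of this arithmetic accompanies the published write-ups, but the conventional sampling-error calculation is easy to reproduce. Here is a minimal sketch in Python, assuming a simple random sample of 1,600 (an invented but plausible sample size; opt-in panels are not random samples, so the true uncertainty is wider than the formula suggests):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95 per cent margin of error, in percentage points,
    for a single proportion from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# Resolve's February figure of 36 per cent support, on an assumed n of 1,600:
print(round(margin_of_error(0.36, 1600), 1))  # -> 2.4 points
```

On these assumptions each headline percentage carries a margin of roughly plus or minus 2.4 points, and the gap between two such percentages is more uncertain still, so a four-point movement in the gap is unremarkable.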

Jim Reed, who runs Resolve, is reported as saying that voters “were increasingly open to the potential of nuclear power now the Coalition was advocating for existing technology in large-scale plants.” According to Reed, support has “swung towards at least openness to nuclear power.” But Nine did not reveal what change, if any, Resolve had detected since October in the number without “a strong view” who were “open to the government investigating its use” (27 per cent in February). Support, Reed added, was “weak… at the moment simply because people aren’t being asked to approve an actual site.” Even if he had measured strength, which it appears he hadn’t, one could equally imagine support becoming weaker, not stronger, once voters were asked to approve an actual site.

What sort of voters did he think were now supportive, or at least “open”? “We’ve got a new generation of younger people who are quite positive towards nuclear power,” Reed said. Was this “new generation” evident in October or did it only become evident in February? If it was evident in October, was it responsible for February’s four-point shift? Nothing in what Nine published allows us to say.

While Reed restricted himself, largely, to interpreting the actual data, the commentary in the Australian strayed much further. The paper wrote, for example, of “the costs and risks of renewable energy” having “become clearer.” But it offered no evidence that those costs and risks had become clearer to the public — not surprisingly, since these too were things about which Newspoll had not asked.

Leveraging the Newspoll result to predict that “most Australians would back a move to small scale nuclear power,” the headline in the online edition of the Australian ignored another distinction — not between strong and weak opinion but between polls that showed un-mobilised opinion and polls that showed mobilised opinion; so, too, did Sky News. Any “move to small-scale nuclear power” would be politically contested, and once contested, opinion might shift.

Subsequently, Benson ventured a more sober assessment of the Coalition’s prospects of carrying the day. “For Dutton to win the argument,” an argument that would take “courage” to mount, “any Coalition energy policy must be framed in a cost-of-living context that can demonstrate how nuclear power will deliver cheaper and more reliable power into the future,” he wrote. For Dutton to position nuclear power as “a central component” of his energy policy, Benson declared, was “as big and brave as it gets.”

Others went further. In a rare note of dissent within News Corp, James Campbell, national weekend political editor for Saturday and Sunday News Corp newspapers and websites across Australia, described the idea of Dutton “going to the next federal election with plans to introduce nuclear power” as “stark raving mad.” One thing the Coalition should have learnt from the Voice referendum was that “support for anything radical in Australia shrinks the moment it hits any sort of concerted opposition.” And, he added, “there’s the unity problem. Do you really think Liberal candidates in ‘tealy’ places are going to face the front on this?”

Benson, meanwhile, had back-tracked. Pointing again to the distribution of opinion among eighteen- to thirty-four-year-olds, he advanced a quite different assessment: “the onus is now on Labor to convince Australians why we shouldn’t have nuclear power.” Chris Kenny, the Australian’s associate editor, thought “the nuclear argument could play well in the teal seats where there is an eagerness for climate change and a high degree of economic realism.”

If Benson was right the first time, however, and the Coalition needs to take care over how it frames the debate, then the Savanta data suggest that it may face a few challenges. Asked what impact nuclear energy would have on their “energy bills,” about a third (35 per cent) of its respondents said it would make their bills “much cheaper” or “slightly cheaper,” less than a third (28 per cent) thought it would make them “much more expensive” or “slightly more expensive,” but more than a third (38 per cent) said they either didn’t know or thought it would make “no difference.”

In the Essential poll, conducted around the same time, respondents saw little difference in “total cost including infrastructure and household price” between three energy sources: “renewable energy, such as wind and solar” (38 per cent considering it the “most expensive” option; 35 per cent, the “least expensive”), nuclear power (34 per cent considering it the “most expensive” option; 34 per cent, the “least expensive”), and “fossil fuels, such as coal and gas” (28 per cent considering it the “most expensive” option; 31 per cent, the “least expensive”).

Supporters of nuclear energy may also have to address some of the concerns Benson didn’t mention. In the Savanta study, 77 per cent were either “very concerned” (45 per cent) or “fairly concerned” (32 per cent) about “waste management”; 77 per cent were either “very concerned” (47 per cent) or “fairly concerned” (30 per cent) about “health & safety (ie. nuclear meltdowns, impact on people living nearby)”; and 56 per cent were either “very concerned” (23 per cent) or “fairly concerned” (33 per cent) about the “time it takes to build.”

In another poll, this one conducted by Pure Profile in the first half of 2022, respondents were asked how they would feel if a new nuclear power station were built in their city. Around 50 per cent said they would feel “uncomfortable,” more than a quarter “extremely uncomfortable”; just 7 per cent would have felt “extremely at ease.”

It would be reassuring to think that any newspaper that wanted its polling taken seriously would need to commission better polling than the polling the Australian was so keen to promote. But the Newspoll results were taken seriously by a rival masthead. “The Newspoll published in the Australian,” the political editor of the Australian Financial Review, Phillip Coorey wrote, “found there was now majority support for the power source.”

A week after its poll was published, and its results — with a nod to the Coalition — described as “powerful,” the Australian’s front page led with another “exclusive,” this time courtesy of the Coalition: its “signature energy policy” to be announced “before the May federal budget” would include “a plan identifying potential sites for small nuclear reactors as future net zero sources.” The following day, Benson wrote that Newspoll had “demonstrated strong support for the proposal that Dutton is working on announcing soon.” But the policy Dutton was working on, apparently, was not the policy Newspoll had tested. “The Coalition energy plan,” Benson revealed the same day in another front-page “exclusive,” was “likely to include next-generation large-scale nuclear reactors — not just the small-modular reactors.”

A newspaper that has a position on nuclear power and thinks of polls as an objective measure of public opinion should make sure that the questions it gets (or allows) pollsters to ask, and the results it gets journalists to write up, look fair and reasonable to those on different sides of the debate. In effect, this was the discipline George Gallup placed on himself when he signed up newspapers with divergent views.

Even if a newspaper wanted to use its polling to gee-up its preferred party, it might also think about using its polling to identify some of the risks of pursuing a policy it backed — risks that no party wanting to win an election could sensibly ignore — not just the opportunities to pursue that policy.

Whether Michael Schudson left polling out of his account of objectivity because it didn’t fit with his argument about objectivity as an ideology, or because he didn’t think it a part of journalism — neither journalism nor market research being a profession in the sense that law or medicine are professions — or simply because of an oversight, is unclear.

Better, more comprehensive, polling wouldn’t end the political debate or the debate about the objectivity of the polls. Nor should it. Nonetheless, it might be a good place from which to progress these debates.

Of course, for those who don’t want to foster a debate about the policy or about the polls, any plea to do better is entirely beside the point. •

The “end” of Labor’s honeymoon and the “collapse” of women’s support for the Voice

How Newspoll reports public opinion and how the Australian reports Newspoll

Newspoll, published and paid for by the Australian, is the voice of the people most clearly heard in Canberra and most widely heeded either side of an election. This has been true since the 1980s, not only between elections but also in the lead-up to referendums.

Apart from its election record, which for the last thirty years has been the gold standard, Newspoll’s status derives from its longevity (Roy Morgan Research is the only polling brand that has been around for longer), where it is published (an upmarket newspaper read by most federal politicians, with an online presence featuring excellent graphics) and its frequency (unmatched). Poll addicts crave nothing more than a known quantity, easily accessible trend data and a regular fix.

It’s not just the percentages Newspoll generates that matter; it is also the way the Australian interprets the figures. How much the figures themselves matter, and how much the Australian’s interpretation matters, is difficult to say. Both are recycled by politicians and journalists, among others, without much thought being given to whether they make sense.

In the latest poll, conducted 12–15 July, Labor’s primary vote was down (from 38 per cent, 16–24 June, to 36 per cent), as was the Coalition’s (35 per cent to 34 per cent), but Labor’s two-party lead grew from 54–46 to 55–45 — rounded, as are all Newspoll figures, to the nearest integer. As Adrian Beaumont noted in the Conversation, Labor “may have been unlucky” in the rounding of the two previous Newspolls but it “was probably lucky” this time.
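Beaumont’s point about rounding luck is easy to illustrate. In the sketch below the unrounded figures are invented, since Newspoll publishes only integers; they simply show how rounding can manufacture, or mask, a one-point move:

```python
# Invented unrounded two-party figures, for illustration only.
june_labor = 54.4   # published, after rounding, as 54
july_labor = 54.6   # published, after rounding, as 55

print(round(june_labor), "->", round(july_labor))  # 54 -> 55: a one-point "gain"
print(round(july_labor - june_labor, 1))           # underlying movement: 0.2
```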

At the Australian, the judgement of long-time political editor Simon Benson was unequivocal. Focusing on the fall in Labor’s first-preference support rather than the rise in its two-party share, he declared: “Labor’s honeymoon is officially over.” “Officially”? It was as if Newspoll should be recognised as having the same sort of status as the Australian Bureau of Statistics, say, or the Australian Electoral Commission. If, as Phillip Coorey observed, “the latest Newspoll” was merely “the latest to declare the government’s honeymoon over” (it was the Australian not Newspoll that declared it) then it was uniquely the Australian that made it “official.”

Benson took it for granted that Labor’s “honeymoon” came to an end once its first-preference support declined to a post-election “low” by an amount Benson judged to be significant. No matter that this support for Labor was still well above the 32.6 per cent (primary) or 52.1 per cent (two-party) vote recorded at the May 2022 election. The “honeymoon” had ended, and that was now “official.”

An electoral honeymoon, unlike the real thing, can end it seems — or begin to end — at whatever moment a poll-watcher chooses. Last September, when Labor’s two-party support in Newspoll reached 57 per cent — just two points higher than its current level — and its primary support stood at 37 per cent (one point ahead of where it currently sits), Benson judged that “the electoral honeymoon for Anthony Albanese continues”; in the preferred prime minister stakes, Albanese (61 per cent) was well ahead of Dutton (22 per cent), figures virtually unchanged from July.

This year, at the beginning of March, when Labor’s two-party support was at 54 per cent (three points lower than it had been in September) but its primary support still on 37 per cent, Benson took it as “a sure sign that the romance of the honeymoon phase is coming to an end for the government.” At 54–28, the Albanese–Dutton head-to-head had changed as well, but not dramatically. By mid May, however, when Newspoll estimated Labor’s two-party support at 55 per cent (its current standing) and its primary support at 38 per cent (higher than its current 36 per cent), he wondered whether it was “now the beginning of the end of the government’s honeymoon”; head-to-head, Albanese was still ahead of Dutton 56–29.

The day after the Australian published Newspoll’s figures for July, Nine’s metropolitan dailies published the latest figures from their July poll, the Resolve Political Monitor. Resolve’s percentages read as if Labor’s honeymoon was still in full swing: Labor on 39 per cent, not 36 (the Newspoll figure); the Coalition on 30 per cent, not 34 (the Newspoll figure).

Political polling is nothing if not competitive. Making its own call about the end of Labor’s honeymoon, Resolve was not to be outdone. In March, after his poll had produced exactly the same figures (39–30) it would produce in July, Resolve’s director Jim Reed took Labor’s fall from 40 per cent in his previous poll as “another confirmation that the honeymoon highs have come to an end.” In June, Resolve had Labor back on 40 per cent. What had previously been a “honeymoon high” was now a sign of something quite different; in May, after all, Labor’s support had been 42 per cent, two points higher. Resolve, the Sun-Herald reported, “had started noting declines in Albanese and Labor’s honeymoon ratings early this year.”

Clearly, the only rule these commentators seem to follow in declaring an electoral honeymoon to have ended is that the level of support for the government in the latest poll is lower than the level recorded in the immediately preceding poll. Neither absolute levels of support nor the longer-term record count. If subsequent support for the government rises and falls — even if it is to a level higher than the previous high — one can declare an end to the honeymoon all over again. Neither the rise nor fall need be outside the poll’s margin of error — a figure the Australian and the Nine newspapers parade endlessly but their commentary studiously ignores.

Poll-watchers who have insisted for years that the Australian interprets its Newspoll data to cheer up or cheer on the Coalition may have noticed that its reading of the latest Newspoll backed up the interpretation of the Fadden by-election offered by the Liberal National Party candidate in Fadden, Cameron Caldwell. The Australian gave Caldwell’s interpretation the hortatory headline, “Fadden result ‘shows the honeymoon is over for Labor.’”

As well as spelling the end of the honeymoon, the result in Fadden showed “concern over the Indigenous voice” to be “high,” Caldwell argued. Columnist Joe Hildebrand — a vocal Yes supporter — recycled and generalised Caldwell’s line in the Daily Telegraph: “It could not be clearer,” he wrote, “that voters are rewarding the Prime Minister for his moderate and centrist direction and punishing him for the one aspect of his government” — the Voice — “that has been cast by his critics as radical or woke.”

Perhaps voters in Fadden were concerned about the Voice. “Using Fadden as a trial run,” Coorey had written on the eve of the by-election, “Dutton is attempting to turn the Voice into a lightning rod for broader discontent with the government.” After the by-election, however, another senior journalist, Paul Bongiorno, was equally adamant that “Dutton didn’t push his opposition to the referendum in the campaign”; having “raised it in a doorstop a few weeks ago, he dropped it as the poll neared.”

How anyone could conclude that Dutton had succeeded in making the Voice an issue based on nothing more than the result in Fadden, neither the Australian nor Hildebrand explained. One needs survey data, not a set of electoral returns, to determine whether Caldwell’s claim has merit. Bongiorno reports Caldwell saying that “people raised the Voice with him quietly because they didn’t want to be accused of racism or prejudice if they raised it publicly” — raised with him, he might have added, because they assumed Caldwell would not have thought such concerns racist or prejudiced. But Coorey, citing another LNP source, discounts the idea that views about the Voice affected the result: “the Voice had little impact either way,” he reports.


Even if the Voice was not shifting voters against Labor, were voters shifting against the Voice? As luck would have it, Newspoll’s latest poll also included a question on “whether to alter the Australian Constitution to recognise the First Peoples of Australia by establishing an Aboriginal and Torres Strait Islander voice.” For the Yes side, the topline numbers brought no more cheer than Caldwell: Yes, 41 per cent; No, 48 per cent; Don’t Know, 11 per cent. The corresponding figures after the same question was asked three weeks earlier: 43–47–10.

The changes between June and July may have been small but they played to the dominant media narrative about the Voice: that support is declining; that No has now overtaken Yes; that the referendum, if not doomed to failure, is not on a path to success. In June, Benson had cautioned that it would be “foolhardy” to “make a call… four months out from polling day” (expected mid October), and that it was “not over yet for the voice.” Now, just three weeks later, with the margin between Yes and No growing from four points to seven — well within what the Australian describes as Newspoll’s “theoretical margin of error” — Benson concluded that “the voice referendum [was] in serious trouble,” support “gradually collapsing” with “confusion over the detail, the scope and the function of the voice… killing any goodwill many undecided voters may have had.”

More striking than the topline figures was a startling shift in the differences between women’s responses and men’s. The new poll reported a seven-point rise in support for Yes among men and a ten-point fall in support among women. Suddenly, from being more likely to vote Yes than to vote No (a six-point gap), women were more likely to vote No than to vote Yes (a gap of eleven points) — a turnaround of seventeen percentage points. And from being more likely to vote No than to vote Yes (a fourteen-point gap), suddenly men were almost as likely to vote Yes — a twelve-point change.

By any measure, these were remarkable changes. The movement of one-in-five women from the Yes column (48 per cent down to 38 per cent) to either the No column (42 per cent up to 49 per cent) or the Don’t Know column (10 per cent up to 13 per cent) in such a short time — and before the start of the formal campaign — is difficult to credit. The movement of one-in-ten men from the No column (52 per cent down to 47 per cent) or the Don’t Know column (10 per cent down to 8 per cent), while only half as big, also stretches credulity.

Since the shifts were in opposite directions, they largely cancelled each other out. Had the shift among either group been less dramatic, the topline results might have looked quite different. For example, if support among women had declined by no more than half as much as Newspoll reports, support for the Voice would have stood at 43 or 44 per cent and opposition at 45 or 46 per cent. This would have represented an improved result, not a worse result, for the Yes camp than Newspoll’s figures of three weeks before. What might the headline have been then?
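That counterfactual can be reconstructed from the published subgroup figures. A rough sketch, assuming men and women each carry half the weight of the national sample (Newspoll’s actual weighting scheme is not published):

```python
def topline(women: float, men: float, w_women: float = 0.5) -> float:
    """Combine gender-subgroup percentages into a national figure,
    assuming an equal weighting of women and men."""
    return w_women * women + (1 - w_women) * men

# Published July Yes figures (women 38, men 45):
print(topline(38, 45))  # 41.5 -- close to the published topline of 41

# If women's support had fallen half as far (48 down to 43, not 38):
print(topline(43, 45))  # 44.0, with No landing around 45-46
```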

When Newspoll asks about the Voice, Benson writes, “female voters have until now been significantly overrepresented among the undecideds.” Now, when Newspoll asks those respondents who initially say they “don’t know” whether they “approve” of the alteration to the Constitution “which way they would lean if forced to profess a view,” things are different: “women voters are now significantly more likely to say No.”

Neither Newspoll nor the Australian is keen to disclose the patterns of response to the initial question — before respondents were leant on to choose Yes or No — in the last three polls. Benson failed to reply to a request that the Australian do so; YouGov, the British-owned firm that conducts Newspoll, said it “can’t really comment.” As a consequence, Benson’s account can’t be confirmed independently. Yet the rules of the Australian Polling Council, of which YouGov is a founding member, say that if “voting intention figures are published with the undecided participants excluded, the proportion who were thus excluded should be published.”

Why might women have moved from Yes to No? Benson attributes the shift to the “targeted campaign by the No camp.” Crucial to this was the fact that the government, “in its contortions over the voice,” had “vacated the field of talking to voters’ primary concern — the cost of living.” Noting that “any pollster… will tell you female voters are more highly attuned to cost-of-living pressures than male voters” — though “cost of living is by far the issue of most concern to a majority of all voters” — Benson insists this gave the No camp a “strategic edge.” The No campaign had also “spent significant funds directly targeting women.” This, in his view, “appear[ed] to have paid off.”

To have “paid off” to anything like the extent Benson implies, the No campaign would have needed not only to have targeted female voters but also to have done so across most of the social media platforms on which the No campaign’s advertising, coordinated by Advance Australia, has largely been conducted. But targeting of this kind is not what the evidence shows. An analysis of the three Facebook pages — Fair Australia, Not Enough, and Referendum News — that Advance Australia has been populating concludes that only one (Not Enough) was targeting voters in the two largest states.

If the other two pages were “essentially ignoring New South Wales and Victoria” — the two states where the majority of women (and men) reside — the No campaign can hardly have been reaching the majority of female voters. Moreover, while the ads on Referendum News skewed “towards a female audience,” the ads on the other pages skewed to different demographics.

Assuming, for the sake of the argument, that the No campaign did enjoy the kind of success Benson attributes to it, are we to conclude that as well as shifting women in extraordinarily large numbers to the No side, the No campaign — in a terrible own goal — also shifted a large number of men across to the Yes side? If not, what did shift these men? This is not a question Benson attempts to answer; everything he has to say goes to explaining why support for the Voice should be falling rather than why, among men, it might have risen.

The explanation for the “rise” in support among men may lie in nothing more profound than the vagaries of polling. Newspoll has asked its Voice question with its current response architecture three times (the first is here). If one looks at all three polls — not just, as Benson does, the last two — among men the Yes–No split is 45–46, 38–48, 45–47: it’s the second (June) poll, not the third (July), that is the odd one out. If the second poll underestimated support among men, the most recent poll may simply be correcting that.
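A two-proportion z-test gives a rough sense of whether a seven-point swing among men is within sampling noise. A sketch, assuming about 750 men per poll (an invented figure, roughly half of a typical Newspoll sample) and treating the opt-in panel as if it were a random sample, which it is not:

```python
import math

def two_prop_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Men's Yes share: 45 per cent in July against 38 per cent in June.
print(round(two_prop_z(0.45, 750, 0.38, 750), 2))  # -> 2.75
```

On textbook assumptions a z of about 2.75 would flag the June dip as more than chance; but design effects in opt-in panels and non-sampling error widen the real uncertainty considerably, which is why a single aberrant subsample, on its own, proves little either way.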

Before the latest Newspoll, only one poll had ever reported finding more men than women in favour of a constitutionally inscribed Voice. Conducted in December 2022 by Freshwater Strategic, it showed only the narrowest of differences in support between men (51 per cent) and women (50 per cent); but even in this poll, more men (30 per cent) than women (22 per cent) were opposed. The most recent poll to use the same response architecture as Newspoll — a poll conducted by Essential Media (5–9 July), a week ahead of Newspoll — shows women (49 per cent) more likely than men (44 per cent) to support Yes, and men (47 per cent) more likely than women (40 per cent) to say No.

None of this appears to have registered at the Australian. For Benson, the referendum had “suffered a collapse in support among women voters,” with women “for the first time… now more likely than men to vote no, a central change to core support.” The precipitous fall in support among women was noted by the paper’s national editor, Dennis Shanahan. The story about a new gender divide got a run in an editorial on the day it broke, and another run the next day. Other outlets, too — seemingly less concerned with objectivity, which requires critical evaluation, than with neutrality, which requires no more than reporting what is newsworthy — reproduced the figures.

Could such a shift have happened? Bongiorno — another strong supporter of a Yes vote — thought it not only could have happened but had happened, even as he took out the standard insurance against being held personally responsible for his report. “If you can believe the opinion polls,” he reported, “regional Australia has gone very cold on the idea of a constitutionally enshrined Indigenous Voice to Parliament.”

Perhaps Bongiorno also had in mind a poll published a couple of weeks earlier by the Canberra Times, not referenced by the Australian. The poll was conducted online by Chi Squared (the research arm of the Canberra Times’s owner, Australian Community Media) among readers of fourteen daily newspapers “serving Canberra and key regional population centres such as Newcastle, Wollongong, Tamworth, Orange, Albury and Wagga Wagga in New South Wales, Ballarat, Bendigo and Warrnambool in Victoria, and Launceston and Burnie in northern Tasmania,” to which 10,131 readers had responded.

Chi Squared purported to show that “in the regions” the level of support for establishing the Voice (the question was not disclosed) stood at just 35 per cent. While this figure was not very different from Newspoll’s estimate, the “poll” was conducted from 16 to 26 June — at a time when Newspoll, using sampling techniques better suited to the task, not simply self-selection, was reporting a 40–51 split in the regions rather than Chi Squared’s 35–57. If regional opinion had shifted between June and July in the way Newspoll suggests, why might it have shifted? Benson doesn’t venture an answer; nor does Bongiorno.

“The bottom line,” says Benson, “is that the trend towards a No vote is increasing and it is expanding in the wrong demographics for the yes camp.” What the “right demographics” might be, he doesn’t say. The Yes camp needs a majority of the national vote and would be happy, one assumes, to accept contributions from all demographics. No demographic — certainly not women rather than men, or regional rather than metro voters — is “right” or “wrong”; if support is slipping, it is slipping largely across the board. To win, Yes also needs majorities in the majority of states; any four will do, though a victory in one or more of the bigger states will do more to secure a national majority vote than a victory in one or more of the smaller states.

To see whether the latest Newspoll has got things horribly wrong on the Voice — or whether, on the contrary, it should be recognised for being the first to detect an extraordinary change in the gender gap and a substantial expansion of the metro–regional divide — we will need to wait for the next polls, whether from Newspoll itself or from Resolve, Freshwater or Morgan.


Finally, a word about an unreported upheaval at YouGov. Between the June poll and the one conducted in July, virtually all of those working in the public affairs and polling unit at YouGov left; the departures included the head of the unit (and chair of the Australian Polling Council), Campbell White.

Did the number and quality of the personnel heading out the door have an impact on the analysis of the more recent poll? If the changes at YouGov have affected data quality or the quality of the analysis, and aren’t corrected, then — much like support for Labor or support for the Voice — Newspoll’s status in Canberra might slide as well. •

“Undecided” on the Voice
Tue, 20 Jun 2023 • https://insidestory.org.au/undecided-on-the-voice/

Depending on the choices pollsters offer, the undecideds range all the way from none to two-thirds of respondents

Public polls overwhelmingly show support falling for a constitutionally entrenched Voice to Parliament, and opposition growing. With the gap between Yes and No narrowing — hardly a recent phenomenon, as several charts make clear — Yes campaigners will be increasingly concerned about how to stem the flow both nationally and in the required four states. The more ambitious of the Yes campaigners may also be examining ways of not just stemming the flow but reversing it, with the level of support nationally in the latest Resolve poll having dipped below 50 per cent (a 49–51 split) and support in three of the states also less than half.

A key question for campaigners is whether voters are switching from “undecided” to No or from Yes to No. “What worries the government,” says columnist George Megalogenis, “is the recent narrowing of the gap between committed Yes and No voters, which reflects a greater shift from the undecided to the No column than from Yes to No.” Another columnist, Janet Albrechtsen, calls Noel Pearson’s highly personal attacks on those disagreeing with him a boon to the No side because “more undecided voters might ask themselves ‘would I want this man running the Voice?’ and shift into the No side of the ledger.”

Is the rise in No being driven by “undecided” voters coming off the fence or by less “committed” Yes voters jumping the fence? That could depend on how “undecided” is defined. In talking about the “undecided,” Albrechtsen and Megalogenis may be focusing on quite different sets of voters.

In any poll, the “undecided” are defined not by the poll’s question but by the question’s “choice architecture” — the range of possible responses the pollster offers respondents. On the Voice, the polls have attempted to measure the “undecided” in at least three different ways. Some polls have offered respondents the opportunity to indicate they have no clear opinion; hence, the “Don’t know” option, or something similar. Some polls have encouraged respondents to express an opinion that has more nuance than Yes or No, enjoining them to indicate whether their views are held “strongly” or “not strongly”; views not strongly held, arguably, are another form of indecision. And some polls have presented respondents with a similar range of responses, but with another possible response — “Neither support nor oppose” — in the middle.

These don’t exhaust the range of possibilities. Some polls have asked respondents, directly, how likely they are to change their positions — “somewhat” or “very” likely — which is another way of indicating that while they appear to have made a choice, their decision is not final. Others have asked respondents who have indicated support for Yes or No how likely they are to turn out and vote.

Still other architectures remove the “undecided” option altogether. The polls most and least favourable to the Yes side are both of this kind: the latest Resolve poll, which has Yes trailing No, and the latest Essential poll, which has support for Yes a long way ahead of support for No (60–40); each restricted respondents to a Yes or No.

Not to distinguish among these response architectures — some of which allow for further variations — is to risk drawing comparisons between polls that can’t readily be compared, even where the questions asked are similar. It is also to risk inferring trends based on polls that offer respondents very different choices: none of the graphs tracking the narrowing of the gap between Yes and No appears to take any account of the various choice architectures involved in generating the numbers. Not to be aware of these different architectures also risks focusing on only one version of what is going on. Thus, the attention paid to the latest forced-choice Resolve poll or the latest Essential poll is disproportionate.

Depending on the chosen architecture, the “undecided” vote can vary enormously — from more than half, when respondents are invited to consider a middle option in a five-point scale, to zero, when being “undecided” is designed out of the choices on offer. In other words, the contribution to the No vote of the “undecided” is a function, in part, of the choice architecture. Nonetheless, across all choice architectures, the boost to the No vote by the “undecided” appears to have been much smaller than the contribution of those who switched from Yes.

Three types of response architecture: In the standard architecture — following the kinds of questions pollster George Gallup promoted in the 1940s as a “sampling referendum” — respondents are presented with two options (Yes/No, Support/Oppose, and so on) plus a third, for those who don’t want to choose either.

On whether to put a Voice into the Constitution, the standard architecture offers various choices: Yes/No/Don’t know (Newspoll’s most recent polling for the Australian; YouGov for the Daily Telegraph); Yes/No/Undecided–Prefer not to say (Freshwater Strategy for the Australian Financial Review); Yes/No/Undecided (Roy Morgan Research); Yes/No/Unsure (Dynata for the Institute of Public Affairs); Support/Oppose/Don’t know–Not sure (Dynata for the Australia Institute); Yes/No/Need more information–Can’t say (JWS).

Three things are worth noting. One is that these polls don’t imagine respondents having no opinion. The third choice they offer allows for respondents who have conflicting opinions that leave them “undecided,” qualified opinions that don’t readily fit a straight Yes or No, or Yes/No opinions that reticent respondents may prefer not to declare (a possibility acknowledged explicitly only by Freshwater).

A second point to note is the near-universal assumption that anyone who ticks Yes/No (Support/Oppose) has decided where they stand, at least for the moment. Those who haven’t decided are captured under a residual term: Undecided, Unsure, Don’t know, Can’t say. If some of those — perhaps most of those — who tick Yes/No (Support/Oppose) are still not entirely decided, this particular architecture provides no way of indicating it.

Third, some pollsters (JWS; Resolve Strategic, below) have offered respondents a residual category that conflates two quite different things: not wanting to align one’s views with Yes/No (Support/Oppose) and having a particular reason (“lack of information”) for not wanting to do so. Not only might those in the residual category place themselves there for reasons other than wanting more information, respondents who answer Yes/No (Support/Oppose) might welcome more information too.

In Gallup’s day, a response other than Yes/No, Support/Oppose and so on was usually left to respondents to volunteer. Pollsters have always been keen to promote the idea that the public’s views fit whatever categories the pollsters choose; a choice outside these categories is not something they are generally keen to encourage. With online polling, which means almost all polls these days, respondents can only be offered a residual option — as they should be — as an explicit alternative.

In what we might call the non-standard architecture, pollsters offer a set of response categories designed to distinguish respondents who hold their views (in favour/against) strongly from those who don’t hold their views strongly — the latter sometimes described as being “softly” in favour or “softly” against.

This is one of the two architectures Resolve has used. Since August 2022, it has asked whether respondents support a Voice in the Constitution and, it seems, offered these alternatives: Yes, definitely; Yes, probably; No, probably not; No, definitely not; Undecided/Not enough information. Since April, though, and possibly earlier, the final alternative has read Undecided/Not enough information/May not vote, a category that conflates the one thing that necessarily distinguishes these respondents from the others (Undecided in the sense of “none of the above”) with other things that may not (Not enough information and/or May not vote).

Before switching to a standard format at the end of May 2023, Newspoll used a similar non-standard response set — something that has been a hallmark of its issue polling over nearly forty years. On three occasions, Newspoll sought to identify those “strongly in favour,” “partly in favour,” “partly against” and “strongly against,” offering “Don’t know” as a residual category. (In principle, there is no reason why one could not also distinguish a strong “Don’t know” from a somewhat “Don’t know,” but that is a distinction that pollsters never draw.)

In the third choice of architecture — one that resembles the non-standard architecture but needs to be distinguished from it — response options take the form of a five-point scale with “Neither support nor oppose” (or some neutral equivalent) in the middle. These scales are known in the trade as Likert items, after the American survey researcher Rensis Likert. The use of “Neither support nor oppose” distinguishes a Likert item from the non-standard architecture, which has a “don’t know” at the end but no middle option.

SEC Newgate has asked respondents regularly whether they “Strongly support,” “Somewhat support,” “Neither support nor oppose,” “Somewhat oppose,” or “Strongly oppose” the “creation of an Indigenous Voice to Parliament.” The Scanlon Foundation has adopted a similar approach. So, too, has Essential — but only once, with another option, “Unsure,” added at the end of the scale.
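
To make the comparison concrete, the response sets described above can be laid out side by side. A minimal sketch: the category labels are taken from the polls cited, and the grouping is illustrative rather than any pollster's own scheme.

```python
# The response architectures described above, laid out side by side.
# Labels are taken from the polls cited; the grouping is illustrative.
ARCHITECTURES = {
    # Standard (Gallup-style): two substantive options plus a residual.
    "standard": ["Yes", "No", "Don't know"],
    # Non-standard: graded options and a residual, but no neutral middle.
    "non-standard": [
        "Yes, definitely", "Yes, probably",
        "No, probably not", "No, definitely not",
        "Undecided / Not enough information",
    ],
    # Likert item: five-point scale with a neutral middle option.
    "likert": [
        "Strongly support", "Somewhat support",
        "Neither support nor oppose",
        "Somewhat oppose", "Strongly oppose",
    ],
    # Binary (discussed below): the "undecided" are designed out.
    "binary": ["Yes", "No"],
}

for name, options in ARCHITECTURES.items():
    print(f"{name}: {len(options)} options -> {options}")
```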

Accepting versus squeezing: architectures that make the “undecided” visible: Do the various choice architectures affect the proportion of respondents who are “undecided”? If we compare the “undecided” in the standard architecture (Yes/No/Don’t know) with those who tick “Neither support nor oppose” on the Likert items, the answer may be no. In the standard format, the proportion “undecided” about a constitutionally enshrined Voice averaged as follows: 27 per cent (across three questions) between May and September 2022; 19.5 per cent (two questions) between October 2022 and January 2023; and 22 per cent (five questions) between February and May 2023. Given other variations among questions, these are not very different from the proportions ticking “Neither support nor oppose” in the Likert items: 23 per cent between May and September 2022 (four items); 25 per cent between October 2022 and January 2023 (one item); and 23 per cent between February and May 2023 (two items).

Eliminating the “undecided” — architectures of denial and removal: Pollsters have developed ways not only of reducing the “undecided” votes but of making them disappear. The most extreme of these methods is a binary response architecture that imposes a strict two-way choice: Yes/No, Support/Oppose, and so on. These polls give no other option. If we ask whether the choice architecture affects the proportion that shows up as “undecided,” nowhere is the answer clearer than here.

How many respondents have refused to answer when the question is asked in this way is nowhere disclosed; Essential Research, whose polls are published in the Guardian, says it doesn’t know the number. What happens to respondents who refuse to answer is not something pollsters are keen to disclose either. Resolve, which has used the binary format in relation to the Voice since August 2022, appears not to block these respondents from taking any further part in the poll. But in the Essential poll, respondents who baulk at the binary are removed from the sample.

What the process of deleting respondents does to the representativeness of a sample is something pollsters don’t openly address. In an industry that encourages the belief that sampling error is the only kind of error that matters, this is not entirely surprising.

In estimating support for a constitutional Voice, a number of pollsters have resorted to the binary format either wholly (Essential, Compass, and Painted Dog in Western Australia) or in part (Resolve). Their justification for offering respondents just two options is that at the referendum these are the two choices that voters will face. This is misleading. Voters will have other choices: not to turn out (acknowledged by Resolve in the response options it offers in the preceding question) or to turn out but not cast a valid vote. On the ABC’s Insiders, independent senator Lidia Thorpe said she was contemplating turning out but writing “sovereignty” on the ballot.

Binaries are not favoured by the market research industry. In Britain, the Market Research Society Code of Conduct states that “members must take reasonable action when undertaking data collection to ensure… that participants are able to provide information in a way that reflects the view they want to express, including don’t know/prefer not to say.” This code covers all members, including those whose global reach extends from Britain to Australia (YouGov, Ipsos and Dynata).

In Australia, a similar guideline published by the Research Society (formerly the Market Research Society of Australia) advises members to “make sure participants are able to provide information in a way that reflects the view they want to express” — a guideline almost identical with that of the MRS, even if it stops short of noting that this should allow for a “don’t know/prefer not to say.” Whether such guidelines make a difference to how members actually conduct polls is another matter; of the firms that have offered binary choices on the Voice, some (Essential) are members of the Research Society, others are not (Compass, Resolve).

But a binary is not the only way to make the “undecided” disappear. Some pollsters publish a set of figures, based on the standard architecture, from which respondents registered as “undecided” have been removed using a quite different technique. In its latest release, for example, Morgan publishes one set of figures (Yes, 46 per cent; No, 36 per cent; Undecided, 18 per cent) followed by another (Yes, 56 per cent; No, 44 per cent), the latter derived from ignoring the “undecided” and repercentaging the rest to a base of 82 (46+36). This is equivalent to assuming the “undecided” will ultimately split along the same lines as those who expressed a choice. In publishing its figures, with the “undecided” removed, Freshwater appears to do something similar.
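
Morgan’s repercentaging is simple arithmetic. A minimal sketch, using the figures quoted above:

```python
# Morgan's published figures (per cent), as quoted above.
yes, no, undecided = 46, 36, 18

# Ignore the undecided and repercentage Yes and No to the base of
# decided respondents (46 + 36 = 82).
base = yes + no
print(f"Yes {100 * yes / base:.0f} - No {100 * no / base:.0f}")  # Yes 56 - No 44

# Arithmetically, this is the same as assuming the undecided will
# ultimately split along the same lines as those who expressed a choice.
```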

Whether the basis on which Morgan (or Freshwater) reallocates the “undecided” is correct is open to doubt. Morgan acknowledges this: “past experience,” it cautions, “shows that ‘undecided’ voters are far more likely to end up as a ‘No’ rather than a ‘Yes’ vote.” Indigenous Australians minister Linda Burney, who is said to be “completely confident the Yes campaign will convince undecided voters to back the Voice,” expresses the opposite view.

In considering the narrowing lead of Yes over No, we should ask how the “undecided” have been acknowledged, defined and dealt with in each poll’s response architecture.

What the standard architecture (Yes/No/Don’t Know) shows: Between June and September 2022, the three polls that used a “Yes/No/Don’t Know” response architecture (two by Dynata for the Australia Institute, one by JWS) reported that an average of 55 per cent of respondents said they would have voted Yes, 18 per cent would have voted No, and 27 per cent would not have put their hand up for either.

Across the following four months, the corresponding averages (for the two questions asked by Freshwater and Morgan) were 51.5 per cent, 28.5 per cent, and 20 per cent. (Omitted is a poorly constructed question conducted by Dynata for the Institute of Public Affairs.) From February 2023 to the end of May, when Freshwater, Morgan, and JWS asked five questions between them, support for a Voice in the Constitution averaged 43 per cent, opposition 34.5 per cent, and the “undecided” 22 per cent.

Since May 2022, support for Yes has declined (from 55 per cent in the first four months to 43 per cent in the most recent quarter) and support for No has risen (from 18 to 34.5 per cent), quarter by quarter, but the decline in the proportion supporting neither Yes nor No (from 27 to 22 per cent) has been relatively small. So, while the 16.5 percentage point rise in the No vote is not entirely accounted for by the 12 percentage point fall in the Yes vote, the contribution to the No vote of the “undecided” appears to have been much smaller than the contribution of those who switched from Yes.
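
The decomposition can be checked directly from the quarterly averages reported above; rounding accounts for the residual half-point:

```python
# Quarterly averages (per cent) under the standard architecture, as above.
first = {"yes": 55, "no": 18, "undecided": 27}     # Jun-Sep 2022
latest = {"yes": 43, "no": 34.5, "undecided": 22}  # Feb-May 2023

rise_in_no = latest["no"] - first["no"]                       # 16.5
fall_in_yes = first["yes"] - latest["yes"]                    # 12
fall_in_undecided = first["undecided"] - latest["undecided"]  # 5

# The 16.5-point rise in No is roughly the 12-point fall in Yes plus the
# 5-point fall in the undecided; former Yes voters contribute most.
print(rise_in_no, fall_in_yes, fall_in_undecided)
```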

In some cases, pollsters have tried to reduce the number of “don’t knows” by asking these respondents a follow-up question — known in the trade as a “leaner” — designed to get them to reconsider; this might be seen as a way of distinguishing “soft” don’t knows from “hard” don’t knows.

Some of these pollsters have published the figures both before and after the leaner (JWS) or made them available (Freshwater). On these figures (one set from JWS; three sets from Freshwater), the proportion of “undecided” respondents was 8 percentage points smaller, on average, after the leaner than before. Except for one occasion when they split evenly, more chose the Yes side than chose the No side. So, far from contributing to a narrowing of the gap between Yes and No, squeezing the undecided widened the gap.
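
The mechanics of a leaner can be sketched with hypothetical numbers. Only JWS and Freshwater published before-and-after figures, so the shares below are illustrative, but the flow mirrors the 8-point squeeze just described:

```python
# Hypothetical illustration of a "leaner" (squeeze); these are not any
# pollster's actual figures. The leaner asks the undecided which way
# they lean, and most nominate a side.
before = {"yes": 45, "no": 35, "undecided": 20}

# Suppose the leaner squeezes the undecided by 8 points (the average
# reduction in the JWS/Freshwater data), with more leaning Yes than No.
leaned_yes, leaned_no = 5, 3

after = {
    "yes": before["yes"] + leaned_yes,                          # 50
    "no": before["no"] + leaned_no,                             # 38
    "undecided": before["undecided"] - leaned_yes - leaned_no,  # 12
}

# The Yes-No gap widens from 10 points to 12: on these numbers, as in
# the published data, squeezing the undecided widens the gap.
print(before, "->", after)
```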

What the non-standard architecture (Yes, strong/weak; No, strong/weak; Undecided) shows: In the first four months after the 2022 election, none of the pollsters who asked questions about support for the Voice used the non-standard architecture. That was to change, first through Resolve, then through Newspoll.

Between September 2022 and January 2023, Resolve adopted this architecture twice. Averaging the two polls, support stood at 50 per cent, opposition 29.5 per cent, Undecided/Not enough information 21 per cent. Between February and May, across three more polls, the corresponding figures were 45 per cent Yes; 34 per cent No; 20 per cent Undecided/Not enough information/May not vote. So, over the two periods, Yes dropped by 5 points, No rose by 4.5, and those opting for the residual category dropped by just 1 point. The rise in opposition is almost entirely accounted for by the fall in support.

Taken at face value, the three Newspoll surveys, conducted in the last quarter, tell a rather different story: 54 per cent Yes; 38 per cent No; 8 per cent Don’t know. But they can throw no light on the shift from quarter to quarter because Newspoll’s figures indicate the size of the “don’t knows” after the leaner; asked to divulge the proportion before the leaner, Newspoll declined.

Could the leaner — or the “squeeze,” as Freshwater prefers to call it — explain the difference between the size of the “don’t know” response in the standard architecture and in the non-standard architecture? In the standard (Freshwater) format, the “don’t knows” averaged 15 per cent, squeezed; in the non-standard (Newspoll) format, they averaged just 8 per cent, squeezed. (Resolve’s data is not squeezed.) This suggests that, compared with the standard architecture, asking about the Voice with a non-standard set of response options markedly lowers the number who finish in the “undecided” column.

What the Likert items (Yes, strong/weak; Neither…nor; No, strong/weak) show: The Likert items confirm these shifts. In the first four months, when four Likert items (from Essential, SEC Newgate and the Scanlon Foundation) featured in the polls, the level of support for the Voice (“strongly support” plus “somewhat support”) averaged 57 per cent; the level of opposition (“somewhat oppose” plus “strongly oppose”), 17.5 per cent; those inclined neither one way nor the other, 24.5 per cent. In the next quarter, SEC Newgate produced the only Likert item: 55 per cent supported the Voice, 19 per cent opposed, and 25 per cent neither supported nor opposed. In the most recent period, which saw two (SEC Newgate) items, support averaged 52.5 per cent, opposition 24 per cent, and 23 per cent were neither for nor against.

While the proportion of respondents only partly in support appears to have declined (from 24.5 to 21 per cent), the proportion opposed (somewhat or strongly) appears to have increased (from 17.5 to 24 per cent). But the proportions strongly in support or partly opposed have barely shifted. This lends some support to Dennis Shanahan’s remark, seemingly based on private polling, about the “start” of a “drift from soft Yes to hard No.” But on whether this is due to “young people and Labor supporters,” as Shanahan believes, there is room for doubt: although SEC Newgate does not report separately on the demographics of those who are partly or strongly in support, in its polling the drift away from the Voice has been much more marked among older than among younger voters, and much more marked among Coalition than among Labor voters.

Compared with results obtained with the standard set of responses, the Likert items point to much smaller shifts away from support and towards opposition: a drop in the level of support for the Voice of just 4 percentage points, not 12; a rise in the level of opposition of just 6.5 points, not 16.5; and a falling away of the “undecided” vote — here, the proportion neither in favour nor opposed — of just 1.5 percentage points, not 5. As with the standard architecture, most of the additional No vote appears to have come from those who supported (strongly or somewhat) the Voice in earlier polls, with the decline in the “Neither… nor” group appearing to contribute much less to the growth in the No vote.

What the binary architecture (Yes/No) shows: Binaries are designed to eliminate the “undecided.” But when they are asked in the wake of response architectures that recognise the undecided, they can tell us one important thing: what happens to the “undecided” when they are forced to choose.

If we compare the results Resolve produced when it used the non-standard architecture and followed up with a binary, it is clear that the Yes side enjoyed a greater boost than the No side when the “undecided” were forced to choose. In other words, far from contributing to a narrowing of the gap between Yes and No, eliminating the undecided widened the Yes vote’s lead; this is consistent with the picture that emerges from other architectures when the “undecided” are squeezed. The one exception was Resolve’s June poll, its most recent, where the “don’t knows,” given a binary choice, appear to have split in favour of the No side (7 Yes, 11 No), causing the overall balance to shift to the No side (49–51).
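
Resolve’s June figures can be reconstructed from the splits quoted above. This is a sketch, on the assumption that the 7 and 11 points are what the binary added to each side:

```python
# Resolve, June: published forced-choice result and the reported split
# of the "don't knows" when pressed to choose.
final_yes, final_no = 49, 51
dk_to_yes, dk_to_no = 7, 11

# Back out the pre-binary shares (an inference from the figures above).
pre_yes = final_yes - dk_to_yes          # 42
pre_no = final_no - dk_to_no             # 40
pre_undecided = 100 - pre_yes - pre_no   # 18

# Yes led 42-40 before the forced choice; the undecided broke 11-7 for
# No and tipped the overall balance to 49-51.
print(pre_yes, pre_no, pre_undecided)
```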

“Undecided” — differences across the complete catalogue of measures: Across the pollsters’ questions, “Undecided” is hardly a fixed category. Typically, moreover, the “undecided” vote varies with the choice architecture.

Some commentators base their discussion of the “undecided” on the standard response format: Yes/No/Don’t know, “can’t say,” “not sure,” and so on. Megalogenis is one; constitutional lawyer and columnist Greg Craven is another. Each estimates the “undecided” vote to be “around 20 per cent” — a number clearly based on the (unsqueezed) numbers published in relation to questions that offered the standard response options. This proportion was lower in polls that used a leaner: 20–22 per cent before the leaner, quarter-by-quarter; around 15 per cent, it seems, after the leaner.

What of the non-standard format? Though the Resolve poll asks respondents to classify themselves as either “definitely” or “probably” (Yes/No), the Sydney Morning Herald and Age have never published a set of results for any of the samples that separates the “definitely” from the “probably.” Looking at the figures, and the limited detail about the polls that the papers choose to publish, a reader could be excused for thinking that Resolve used the standard rather than a non-standard response architecture. A reader could certainly conclude that its publisher didn’t think the distinction mattered.

In Newspoll, those who described themselves as “partly” in favour (28 per cent) or “partly” against (13 per cent) represented a much bigger proportion of the electorate than is represented by the “undecided” (even before the leaner) in polls that used the standard format. If we add those who answered “Don’t know” (8 per cent), we get a combined figure of 49 per cent — half the electorate — who are neither strongly Yes nor strongly No.

Craven speculates that “Once someone congeals [sic] to No” — after shifting from “Don’t know,” presumably — “they will not be shifted.” This implies that even someone only partly against the Voice should not be considered “undecided.” But in support of his opinion, he offers no evidence.

The use of Likert items lifts the proportion of the electorate we might regard as “undecided” to a slightly higher level still. Adding in those only somewhat in support (21 per cent), those neither in support nor opposed (23 per cent) and those only somewhat against (9 per cent), we reach a number of 53 per cent for the most recent four months; that is, over half.

“Undecided”: Further questions, different answers: Some questions in the polls have sought to establish how many respondents are “undecided” about the Voice not in any of these ways but by asking respondents how sure they are that their preferences won’t change. In response to a question Freshwater asked in December 2022, and repeated in April and in May 2023, only 39 per cent (on average) of those who favoured a constitutional change were “certain” they would “vote this way”; among those opposed to a constitutional change, the average was 61 per cent; these are figures not previously published.

Nonetheless, the proportions that said they “could change” their mind or were “currently undecided” remained substantial: 34 per cent (December), 31 per cent (April), 31 per cent (May). Of these, about a third could change their mind, the other two-thirds being currently “undecided.” Among those who could change their mind, the proportion was consistently higher among those who intended to vote Yes than among those who intended to vote No: 17–11 per cent (December), 12–6 per cent (April), and 10–7 per cent (May).

The number of voters who are persuadable could be even greater. Common Cause is reported to have “identified” 20 per cent of the non-Indigenous population as “strong Voice supporters,” 15 per cent as “opponents,” with the other 65 per cent “open to being persuaded either way.”

Two polls also asked respondents how likely they were to actually turn out and vote. Here, too, the response architecture mattered, with JWS using the non-standard response architecture and Resolve using the standard architecture. In February, when JWS asked how likely respondents were “to attend a polling booth (or source a postal vote) and cast a formal vote in this referendum,” more than a third of its respondents said “somewhat likely” (17 per cent), “unlikely” (8 per cent) or “can’t say” (10 per cent). In April, when Resolve asked how likely it was that respondents would “be registered to vote” and would “turn out to cast a vote in this referendum about the Voice,” similar proportions said they were unlikely to cast a vote (10 per cent) or were “undecided” (9 per cent); in the absence of the other JWS categories — extremely likely, very likely and somewhat likely — the rest of the sample (81 per cent) could only say that they were likely to cast a vote.

How different were the likelihoods of Yes and No supporters actually turning out? In the JWS poll, fewer of the Yes (48 per cent) than the No supporters (56 per cent) said they were extremely likely to cast a formal vote — though the gap narrowed (72–69) when those very likely to do so were added. Between those in the Resolve poll who intended to vote Yes (89 per cent of whom said they were likely to turn out) and those who intended to vote No (87 per cent of whom said they were likely to turn out), there was hardly any difference. In both polls, more No supporters than Yes supporters said they were unlikely to turn out. In the JWS poll, 11 per cent of No supporters compared with 4 per cent of Yes supporters said they were unlikely to turn out; in the Resolve poll, the corresponding figures were 10 and 8.

More striking than either of these sets of figures were Resolve’s figures for those “undecided” about whether they favoured Yes or No: 44 per cent of these respondents said they were either unlikely to vote (14 per cent) or were “undecided” about whether they would vote (30 per cent). If nearly half of the “undecided” (on the standard measure) were not to vote (JWS did not publish its figures), allocating the “undecided” to either the Yes or No side would be defensible only if the allocation didn’t assume that these respondents would cast their lot with the No side (Morgan’s hunch) or with the Yes side (Burney’s hope).


The government’s explanation for the “narrowing of the gap between committed Yes and No voters,” as reported by George Megalogenis, is not borne out by any of our measures. On the standard format, the “narrowing of the gap” between May 2022 and May 2023 appears to have been due to respondents moving from Yes (down 12 percentage points) to No (up 16.5); the shift to No from among the “undecided” (down 5) appears to explain much less of what has happened. In the non-standard architecture, the combined support for Yes has slipped (down 5) over the last eight months while the combined support for No has grown (up 4.5), the “undecided” (down 1) having hardly moved.

Moreover, any narrowing of the gap between those “strongly” committed to a Yes vote and those “strongly” committed to a No vote has been due to the number “strongly” Yes shrinking and the number “strongly” No expanding; it has not been due to a reduction in the proportion that “neither supports nor opposes” having the Voice inscribed in the Constitution. Responses to the Likert items over the last year also suggest a decline in support (down 4) and a rise in opposition (up 6.5) without a marked reduction in the proportion registered as “neither… nor” (down 1.5). Binaries, posed hot on the tail of questions that have offered a non-standard set of responses, have not narrowed the gap between Yes and No; except for the most recent of these questions, they have widened it.

Every measure leads to the same conclusion: the gap has narrowed because the Yes side has lost support and the No side has gained support. Each of these measures, it has to be conceded, is based on cross-sectional data — data derived from polls conducted at a particular time that reveal only the net movement across categories. Since the gross movement is certain to have been bigger, panel data — data derived by interviewing the same respondents at different times — might tell a different story. But every claim about how opinions have moved has appealed, if only implicitly, to the evidence provided by the cross-sectional data; panel data have not rated a mention. (So far as we know, no panel data exist.)

The choice architecture makes no difference in establishing that the gap between Yes and No has narrowed. It makes some difference in showing whether the narrowing is due to a gain of support on the No side rather than a loss of support on the Yes side (suggested by the standard architecture and by the non-standard architecture) or a loss of support in almost equal measure on both the Yes and the No sides (the Likert items). And it makes a big difference in determining the size of the Yes and No vote (the binary architecture being particularly powerful), in estimating the proportion of respondents who are undecided (less so with the standard architecture than with Likert items), and in identifying the proportion that might be persuaded to change their minds.

To say that the choice architecture makes a difference is also to say that it may not be possible to express one form of the architecture in terms of another; when Newspoll switched from the non-standard to the standard form of response, the previous results could not be converted into the standard form. It follows that changes in support may be difficult to track when the choice architecture changes.

This should not be read as an argument against changing architectures; the more closely the response architecture mimics a referendum, the better it is likely to be. Gallup’s standard architecture — with or without a leaner — is to be preferred to a binary, a form that offers too restricted a range of choice. The standard architecture is also to be preferred to the non-standard architecture or to a Likert item, forms that offer too wide a choice.

This analysis also does not mean that other, more direct measures of uncertainty should be discarded or not introduced. On the contrary, different measures may serve well as forms of validation and as sources of insight. •

Losing ground?
Fri, 09 Jun 2023 • https://insidestory.org.au/losing-ground/

Support for the Voice may not have dropped as much as the latest Newspoll suggests

The latest Newspoll — headlined “Less Than Half Aussies Intend to Vote ‘Yes’ on Voice” on the Australian’s front page — has created something of a stir.

At the beginning of April, when Newspoll last reported on support for putting a Voice into the Constitution, it estimated the level of approval at 53 per cent and opposition at 39 per cent; 8 per cent said “Don’t know.” Two months later, the corresponding figures are rather different: 46–43–11.

On the face of it, this looks like support has declined by seven points, the opposition has risen by four points, and the “Don’t knows” have gone up by three. And it looks like that’s the result of a couple of months in which the No side has campaigned hard and the Yes side has been on the back foot, with some of its erstwhile supporters either switching to No or putting off a firm decision and “parking” their vote, as Newspoll’s former boss Sol Lebovic used to say, under “Don’t know.”

Thus, Dennis Shanahan, in a comment for the Australian: “The latest Newspoll figures… suggest there is an across-the-board movement against the voice and a surge in uncertainty.”

Not so fast. There are two reasons for caution when comparing the June results with the April results: a change in Newspoll’s question and a change in what we might call, borrowing a phrase from Richard Thaler and Cass Sunstein’s Nudge, its “choice architecture.”

The question: The Australian notes that the question asked in its latest poll is not the same as the question asked in its previous polls. The obvious implication is that its figures need to be interpreted with care.

In April, Newspoll explained that “There is a proposal to alter the Australian constitution to establish an Aboriginal and Torres Strait Islander Voice to Parliament.” It then asked: “Are you personally in favour or against this proposal?”

In its latest poll, Newspoll used a slightly different preamble: “Later this year, Australians will decide at a referendum whether to alter the Australian Constitution to recognise the First Peoples of Australia by establishing an Aboriginal and Torres Strait Islander Voice” (with those italicised words underlined in the questionnaire). It then asked: “Do you approve this proposed alteration?” This made it “the first Newspoll survey to present voters with the precise question they will be asked at the ballot box when the referendum is held later this year.”

If the differences in the wording of the two questions explain, at least in part, the differences in the two sets of responses, it is not clear how. Did the reference to “recognition” deflate support? That seems unlikely: since “recognition” has wide public support, its inclusion is more likely to have boosted support than deflated it. Did the prospect of having to vote at a referendum boost opposition? Again, that seems unlikely, though at a time when voters may have more pressing things to worry about, it’s probably the better bet. Perhaps the heavy black underlining of the proposal caused concern.

According to a quote in the Daily Telegraph, another News Corp masthead, polling analyst Kevin Bonham believes Newspoll is “likely more accurate” than many other polls because it has been the first to use the exact wording of the referendum proposal. However commendable that might have been, we cannot assume that the wording necessarily makes a difference to respondents.

A polling purist might baulk at Newspoll’s switch from: (a) asking respondents whether they are “in favour or against” (balanced alternatives) a proposal to alter the Constitution to establish a Voice; to (b) asking respondents whether they “approve” this proposed alteration, with no balancing alternative (“disapprove”). It might also have been better practice to ask respondents how they intended to act (that is, vote) rather than how they felt (“in favour or against”; “approve”).

The choice architecture: What the Australian overlooks — and what Newspoll itself fails to note — may be something more important than the change in the question: the change in the poll’s choice architecture. In April, Newspoll not only posed a different question; it also offered a different array of response options: “Strongly in favour,” “Partly in favour,” “Partly against,” “Strongly against,” “Don’t know.” In its most recent poll, by contrast, the options offered to respondents were simply: “Yes,” “No,” “Don’t know” — a set of responses, it should be acknowledged, better suited to a referendum than the set Newspoll previously offered.

How might this change have affected the results? With a wider number of response options, the proportion that chose “Don’t know” was relatively small; in April’s Newspoll, it was 8 per cent, with the numbers in February (7 per cent) and in March (9 per cent) having been almost the same. Polls by other companies in February, March or April that offered the same sort of choices as Newspoll offered in its latest poll reported higher figures for “Don’t know,” just as Newspoll now does.

The assumption that we can compare polls that use different architectures (Yes/No/Don’t know as against Strongly in favour/Partly in favour/Partly against/Strongly against/Don’t know) simply by collapsing categories (Yes = Strongly in favour + Partly in favour) is mistaken.

It is difficult to say how much the change in the Yes and No responses can be explained as an effect of the change in the choice architecture. But this doesn’t leave us without any bearings. As we would expect, the “Don’t know” number in June (11 per cent) is higher than it was in April (8 per cent); the “surge in uncertainty” is therefore almost certainly an illusion — an effect of changes in the response categories.

If the “Don’t know” number is higher, then the Yes and/or No vote has to be lower. In this Newspoll, the Yes vote is lower but it is also lower than we might have expected on the basis of a switch in choice options alone. And the No vote, far from being lower, is higher.
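
One rough way to see this is a back-of-envelope calculation, on the assumption that the extra “Don’t knows” in June were drawn proportionally from April’s Yes and No columns:

```python
# Newspoll, April (non-standard options, collapsed) and June (standard).
april = {"yes": 53, "no": 39, "dk": 8}
june = {"yes": 46, "no": 43, "dk": 11}

# If the 3 extra don't-know points came proportionally from Yes and No,
# with no real movement of opinion, June would have looked like this:
extra_dk = june["dk"] - april["dk"]
decided = april["yes"] + april["no"]  # 92

expected_yes = april["yes"] * (1 - extra_dk / decided)  # about 51.3
expected_no = april["no"] * (1 - extra_dk / decided)    # about 37.7

# Expected roughly 51-38-11; the actual 46-43-11 implies a real loss of
# Yes support and a real gain for No, beyond the change in options.
print(f"{expected_yes:.0f}-{expected_no:.0f}-{june['dk']}")
```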

Allowing for changes in the choice architecture, this suggests that, over the two months since Newspoll’s last survey, the Yes side has lost support and the No side has gained support.

This is hardly news: a tightening of the contest is what almost all the polling has shown for some time. The intriguing question is how much of a tightening Newspoll would have shown — with or without its new question — had it not changed its response options.

Nor is it news that fewer than half of those polled intend to vote Yes. Since March, none of the polls that use the standard architecture (Yes/No/Don’t know) — Freshwater, Morgan, Resolve — have reported Yes majorities. The only way of conjuring Yes majorities from these polls has been by assuming either that the “Don’t knows” won’t vote or that enough of them will vote — and vote Yes — to get the proposal over the line.

According to Simon Benson, who wrote the Australian’s main story, the Newspoll results “suggest the debate is now shaping up as one being led by elites on one side and everybody else on the other.” What this means is unclear. There are “elites” in both camps. But even if the “elites” were only on the Yes side, the polls don’t show “everybody else” on the other. Benson has reprised a dichotomy, pushed by some on the No side, without thinking it through. The poll results, he says, “stand as a warning sign for advocate business leaders that their customer base and employees may not necessarily be signed up to the inevitability of the referendum’s assumed success.”

Is the Australian’s clearest contribution to the debate its headline? In February, the website run by Fair Australia, the name under which senator Jacinta Nampijinpa Price’s Advance is campaigning against the Voice, advertised its plans to “build an army of Aussies” to “defend our nation.” Now, told by the Australian that most “Aussies” don’t intend to vote Yes, the undecided may draw some reassurance that it’s okay to vote No. •

From Indigenous recognition to the Voice, and back again
Mon, 15 May 2023 • https://insidestory.org.au/from-indigenous-recognition-to-the-voice-and-back-again/

There are signs of a shift in strategy by the Yes forces, but are the polls keeping up?

With several months still to run before we get to vote, a new Yes23 advertisement suggests a remarkable shift in the Yes side’s framing of the referendum proposal. The advertisement advocates “recognition” without mentioning that the effect of that recognition would be to authorise parliament to legislate for the Voice.

If the Yes campaign continues to frame voters’ choice as one between recognising and not recognising Indigenous Australians in the Constitution, and if the attempt gains public traction, then the debate about how the proposed amendment refers to the Voice will become less significant.

But the words of the amendment — minutely examined and debated by Australia’s finest legal minds and endorsed on Friday by a joint select parliamentary committee — matter little to Yes23’s judgement about how the referendum should be presented. Its ad, running mainly on social media, attempts to persuade voters that the campaign is “really” about “recognition.”

Between February 2012 and early 2017 the Australian government funded Reconciliation Australia to promote “recognition.” What form it would take was not specified, and the campaign helped “recognition” gain wide acceptance — but only while it remained detached from some of the particular forms that recognition could take.

Meanwhile, the debate about alternative forms of constitutional recognition had failed to reach any agreement. Then, after the Referendum Council’s report to the Turnbull government in July 2017, “the Voice” entered the debate and quickly became the only form of constitutional recognition under consideration. For their part, Coalition governments under Malcolm Turnbull and Scott Morrison argued that the Voice was not the right form for constitutional recognition to take.

Five years later, in an address to the Garma Festival in July last year, prime minister Anthony Albanese committed his government to a referendum on an Indigenous Voice “in this term of parliament.” His speech began by recognising “all the elders, leaders and families” who had “made great contributions to our nation,” but “recognition” was not among the seventy words the prime minister wanted added to the Constitution.

Now that the campaign has stepped up a notch, however, “recognition” is back — in fact, for Yes23, it has moved to the centre. Pushed into the background is the fact that recognition will take the form of the Voice.

In the first of the Yes campaign’s online ads, rolled out on 26 April, the emphasis was on recognition. Its thirty seconds contrasted Indigenous occupation (65,000 years) with the period in which Australia has had a Constitution (122 years) and played with the notion of coming together and making the nation complete. Viewers were invited to “join us” — the “us” being Indigenous Australians, the viewers being overwhelmingly non-Indigenous Australians.

The ad’s theme of Indigenous exclusion implicitly recalled the 1967 referendum, when over 90 per cent of the formal vote endorsed the idea that “Aboriginals” should be “counted.” The closest the advertisement came to mentioning the Voice was in calling for Indigenous Australians to be able to have “a real say,” something that surely was “fair enough.” “#Voice” appeared in small type at the end.

Perhaps the emphasis on recognition reflected nothing more than the fact that the ad was sponsored by Australians for Indigenous Constitutional Recognition, or AICR, just one of several organisations that have come together under the banner of Yes23. But in the run-up to a referendum that has seen much more emphasis on the “practical” implications of the Voice than on the “symbolic” act of recognition, even the AICR might have been expected to argue, above all, for the Voice.

Then, a few days after the ad’s release, the prime minister issued a statement to say that the national cabinet had “reaffirmed” its “commitment to recognising Aboriginal and Torres Strait Islander peoples in our Constitution.” Not a word about national cabinet (re)affirming its commitment to the Voice — though the prime minister and all the premiers are committed to it — and not even a commitment to Indigenous Australians having “a real say.”

Has the Yes campaign just wrong-footed the No side? A letter to the Australian Electoral Commission from Advance Australia, one of the organisations campaigning for a No vote, suggests it has. The AICR ad, Advance Australia complained, omitted “any reference to the Aboriginal and Torres Strait Islander Voice to parliament” — an element “so integral that it is the title of the bill.” This meant that “Yes23 may be intentionally misleading the Australian public on the nature of the referendum.” Senator Jacinta Nampijinpa Price — the Coalition’s newly appointed shadow minister for Indigenous Australians, the Country Liberal Party’s senator for the Northern Territory, and the most prominent No campaigner in the National Party–CLP alliance — attacked the ad as “deceptive” shortly after it went to air.

Responding to the complaint, Yes23 reportedly said that it welcomed Advance Australia “drawing attention” to its campaign. That it feared an adverse finding from the Electoral Commission is to be doubted. As the AEC’s website shows, its remit appears not to stretch to the kind of complaint Advance Australia has made.

If that is the case, the AEC won’t feel bound to consider a complaint from Yes23 that an advertisement attacking the Voice — produced by Fair Australia for the No campaign and focused on Senator Price and her family — omitted any reference to “recognition” other than Price’s remark about her “recognising what we have in common.” But perhaps, in the name of publicity, the No side is as happy to welcome any comments on its campaign as the Yes side is to make them.

While the Voice is “integral” to the bill to amend the Constitution, so is “recognition.” Indeed, the heading of the Constitution’s proposed Chapter IX (within which falls section 129, “Aboriginal and Torres Strait Islander Voice”) reads “Recognition of Aboriginal and Torres Strait Islander Peoples.” Advance Australia is not contesting that; what worries it is the Yes campaign’s omission of one element in order to emphasise the other.


The No campaign has reason to be worried. “Recognition” offers Yes23 a stronger way of framing the referendum than does the Voice. It does this because the Indigenous demand for “recognition” is more widely known and a good deal more widely supported than the Indigenous demand for the Voice.

Polling conducted online last September by Resolve Strategic for the Melbourne Age and the Sydney Morning Herald estimated that 85 per cent of the electorate were “definitely aware or knew at least some detail” of a “campaign for Indigenous recognition in the Constitution.” Awareness of a referendum to “enshrine the Voice in the Constitution” was much lower, at 65 per cent.

Since then, the gap is likely to have narrowed but not necessarily closed. In a poll conducted by Resolve in January, no more than 77 per cent indicated that they had “heard of the ‘Indigenous Voice’” — and even fewer, presumably, had heard of the referendum on the Voice. In another online poll, conducted as recently as last month (9–12 April) by Freshwater Strategy, 75 per cent of those who responded — up from 63 per cent in December — indicated that they were “aware that there will be a referendum on whether Australia should change its constitution to allow for a body, called a Voice to parliament, to have the right to advise the Australian Government on matters of significance to Aboriginal and Torres Strait Islanders.”

Awareness of the push for recognition is unlikely to have declined in the past six months or so, though we can’t be sure how it has moved because questions in the public opinion polls about recognition (rather than the Voice) have come to a stop.

More important than levels of awareness are levels of support. The last time any of the polls gathered data on support for constitutional recognition, estimated support outran opposition by at least three to one. Asked whether they would vote “for or against” if a referendum “was held to include recognition of Aboriginal and Torres Strait Islander peoples in the Australian Constitution,” 57 per cent of those who were polled online in June–July 2021 by Essential Media said they would vote “for” and no more than 17 per cent said they would vote “against.”

In the Australian Election Study, meanwhile, conducted between 24 May and 30 September 2022, no fewer than 80 per cent of the respondents who expressed a view on the matter said that “If a referendum were held to recognise Indigenous Australians in the Constitution” they would “support… such a change”; only 20 per cent said they would “oppose” it.

Recognition is supported not only by Labor but also by some, if not all, of the parties that constitute the parliamentary opposition. A referendum on recognition (without the Voice) is something the opposition leader Peter Dutton (Liberal National Party) has said he would support. Nationals’ leader David Littleproud has said his party would “help print the ballots” for a referendum purely on constitutional recognition.

Senator Price took a slightly different line at the media conference the Nationals called to announce their opposition to the Voice. She was quoted as saying that “Indigenous Australians are recognised,” an indication that she regarded the matter as already settled and as relatively unimportant compared with taking “practical measures.” (Earle Page, leader of the Country Party from 1921 to 1939, believed that for a referendum proposal to pass it should do no more than enshrine a set of practices already in place and accepted.)

The ratio of support to opposition for the Voice — three to two — is no more than half the corresponding ratio in favour of “recognition.” In the polls conducted in April 2023, levels of support for inscribing a Voice in the Constitution outran levels of opposition by margins that were generally even smaller than that: 42–34 (Freshwater, online); roughly 46–31 (Resolve, online, numbers derived from its graph); and 46–39 (Morgan, SMS). The two polls that forced respondents to choose between Yes and No, both online, also produced a distribution in which Yes outran No by no more than three to two: 58–42 (Resolve) and 60–40 (Essential).

Since Labor came to office in May last year promising to “embrace the Uluru Statement from the Heart” and “answer its patient, gracious call for a Voice enshrined in our Constitution,” support for the Voice has not remained steady, as one polling analyst is reported to have said. Nor has it increased, as another has claimed. Support for the Voice has decreased.

On the polls’ standard approach — with respondents asked whether they favour or oppose putting the Voice into the Constitution but given the opportunity to say they “don’t know” or are “undecided” — the fall has been quite sharp; so, too, has the rise of opposition. In the three polls taken in the first four months after Labor’s victory (between June and September last year) support averaged 59 per cent, and opposition 16 per cent; in the two polls taken in December (the only such polls conducted in the next four months) the support average had declined to 51.5 per cent (opposition 28.5 per cent); and in the five polls taken since February 2023, the average in favour dropped to just 44.5 per cent (opposition 33 per cent). (These calculations are based on reported results before those without an opinion were asked — as they occasionally were — to which side they were “leaning.”)

Binary questions — with respondents restricted to answering Yes or No — produced a less dramatic decline. In the three questions asked from August to September, support was 65 per cent (35 per cent opposed); in the four from October to January, it was 61 per cent (39 per cent opposed); and in the six asked since February, it has been 59.5 per cent (40.5 per cent opposed). How many respondents baulked at this forced choice, none of the pollsters say.

Where polls have presented respondents with response options arranged in what survey researchers call a Likert scale — typically from “strongly support” and “somewhat support,” through “neither support nor oppose,” to “somewhat oppose” and “strongly oppose” — the decline in support for constitutional change was more modest and less even. In the four questions of this kind asked from May to September 2022, support (“strongly support” plus “somewhat support”) was 57 per cent (with 17.5 per cent either “somewhat” or “strongly” opposed); in the two between October 2022 and January 2023, 51 per cent (24.5 per cent either “somewhat” or “strongly” opposed); and in the five asked since, 53 per cent (32.5 per cent being either “somewhat” or “strongly” opposed).

With these different measures of public opinion showing that support for the Voice is slipping and opposition rising, the gap between support for “recognition” and support for the Voice is likely to have widened. If it has, Yes23’s framing of the referendum as a decision about recognising Indigenous Australians makes sense.

About the trend in support for “recognition” we can only speculate. Not only have standalone questions about awareness of recognition disappeared from the polls, but so too, until very recently, have questions that mention “recognition” in the context of the Voice.

Since May 2022, thirty-three national public polls have been conducted: twelve of the binary kind, eleven of the Likert kind and ten of the standard kind (including two polls our analysis has put to one side as flawed). Yet of all the questions polls have asked about the Voice, only the three most recently taken by Essential and Resolve have included a statement about the referendum as a proposal to “alter the Constitution to recognise the First Peoples of Australia by establishing an Aboriginal and Torres Strait Islander Voice” (emphasis added). In none of the others does the word “recognise” even appear. Clearly, the No campaigners are not the only ones to have let the question of Indigenous recognition disappear.

Most of the polls have been unhelpful in other ways, too. Considering how much debate there has been about whether to include the word “Executive” in the second sentence of Albanese’s draft, it is surprising that, when explaining to the respondents what the Voice would do, few polls have referred to either “the executive government” (the exceptions being Resolve’s polls and those taken by JWS in August and February) or the “government” (apart from the two Freshwater polls taken in December and April). Keeping questions reasonably short while hoping that respondents share a common understanding of the key terms is a difficult challenge to meet.


One strength of the No campaign ad featuring Senator Price is that it includes the names and faces of prominent Indigenous individuals. According to a YouGov study conducted in March for the Uluru Dialogue, only 40 per cent of voters believe the majority of Aboriginal and Torres Strait Islander people support the Voice.

Dee Madigan, who ran Labor’s 2022 election advertising campaign, saw the inclusion of Indigenous figures in the Yes23 ad as a “good strategic start by the Yes camp,” according to the Australian. The ad was “about inoculating against accusations that [the Voice is] Canberra-centric and foisted on Indigenous people and that Indigenous people aren’t supportive,” she was quoted as saying. But Madigan’s observations, almost certainly correct, may not capture what is most significant about the ad. For Toby Ralph, who worked on John Howard’s election campaigns, it was a “reasonable opening shot” that avoided “the contentious stuff.” Assuming “the contentious stuff” is a reference to the Voice, his observation seems closer to the mark.

Whether a focus on “recognition” is the opening shot or the shot that keeps being repeated remains to be seen. But this framing appears to have wide appeal among the key players attempting to mobilise a Yes vote. Lawyer Danny Gilbert, an adviser to From the Heart and co-chair of AICR, suggests that the campaign should avoid legal questions about the wording of the Voice and concerns about whether “it’s constitutionally unsafe.” He wants to focus instead on the idea that “it’s about time we recognise the First Peoples of this country,” that what has “happened to date has not worked” and that “it’s time to give them the opportunity to have a say in the future of their lives.”

If support for recognition is high, so too is support for allowing Indigenous Australians to have “a say.” Asked in July–August 2022 whether it was “important or not for First Nations people to have a voice/say in matters that affect them,” almost everyone interviewed for Reconciliation Australia by Polity Research considered it “fairly important” (33 per cent) if not “very important” (60 per cent).

If Yes23 can persuade voters that the referendum is about “recognition” and Indigenous Australians having “a say” rather than about an Indigenous Voice, the polls might be at risk of asking the wrong questions or of not asking enough questions.


What, then, are the sorts of questions pollsters could ask if they wanted to better understand voters? Perhaps something along these lines, with “Voice” and “say” offered to different respondents in questions two and three to test their relative impact:

1. At a referendum on whether to recognise Aboriginal and Torres Strait Islander people in the Constitution, would you vote in favour or against?

2. At a referendum on whether to have an Aboriginal and Torres Strait Islander peoples’ [Voice or say] in the Constitution to advise the national parliament and the Australian government on matters to do with Indigenous Australians, would you vote in favour or against?

3. At a referendum to recognise Aboriginal and Torres Strait Islander people, would you be more likely or less likely to vote in favour of recognition if recognition meant adding to the Constitution an Aboriginal and Torres Strait Islander peoples’ [Voice or say] to advise the national parliament and the Australian government on matters to do with Indigenous Australians?

Differences in the levels of support elicited by these questions would go some way to telling us how attractive “recognition” is compared with either the Voice or “a say”; hence, how much there is for the Yes campaign to leverage and the No campaign to fear.
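How might a pollster run that test? In survey research jargon it is a split sample: respondents are assigned at random to one wording or the other, and the difference between the two arms estimates the effect of the word itself. Here is a minimal sketch in Python, with every number invented for illustration (discovering the real numbers is the point of the exercise):

import random

random.seed(42)  # fixed seed so the illustration is reproducible

WORDINGS = ("Voice", "say")

# Hypothetical support rates for each wording, invented for illustration;
# in a real poll these are the unknowns the experiment is meant to estimate.
TRUE_SUPPORT = {"Voice": 0.46, "say": 0.52}

def interview(wording):
    """Simulate one respondent's answer to the assigned question variant."""
    return random.random() < TRUE_SUPPORT[wording]

# Assign each simulated respondent to a wording arm at random.
answers = {w: [] for w in WORDINGS}
for _ in range(2000):
    arm = random.choice(WORDINGS)
    answers[arm].append(interview(arm))

support = {w: sum(a) / len(a) for w, a in answers.items()}
for w in WORDINGS:
    print(f"{w} wording: {support[w]:.1%} in favour (n={len(answers[w])})")
print(f"Estimated wording effect: {support['say'] - support['Voice']:+.1%}")

Randomisation does the work here: because nothing but the wording differs between the two arms, any gap in support beyond sampling error can be attributed to the word itself.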

To understand what voters themselves think the referendum is about, pollsters could also ask respondents whether they think it is about (a) Indigenous recognition, (b) having an Aboriginal and Torres Strait Islander peoples’ [Voice or say] in the Constitution to advise the national parliament and government, or (c) both Indigenous recognition and having an Aboriginal and Torres Strait Islander peoples’ [Voice or say] in the Constitution to advise the national parliament and government.

Polls could also ask an open-ended question along the lines of the one Roy Morgan asked in 1967: “What would you say the chief effect will be if the referendum on Aboriginals receives a ‘Yes’ vote and is carried?”

If Yes23 thinks its best chance of persuading waverers is to keep the campaign as low-key and unthreatening as possible — a matter of being civil and accepting an “invitation” — then it might well present voters at polling places with a slogan like “Vote YES for Recognition” or “Vote YES for a Say.” Since it pitches itself as a campaign “talking to everyday Australians about the opportunity to be part of a successful referendum,” giving “everyday Australians” a sense that they are on to a winner — with luck, creating a bandwagon — could be very much part of its play.

The No side couldn’t try to mobilise last-minute deciders with a slogan remotely like “Vote NO to Recognition” or “Vote NO to a Say”; it would need to come up with something that didn’t refer to “recognition” or “a say” at all.

Many more ads are yet to come. But these opening shots might well have set the tone of both campaigns. •

The Resolve poll that resolves very little

How skilfully has the Age and the Sydney Morning Herald’s new pollster gauged opinion on quarantine, cutting emissions, and China?

In The Pulse of Democracy, his 1940 defence of the nascent polling industry, George Gallup insisted that polls were important for democracy because politicians needed to understand public opinion, even if they chose not to follow it. The primary purpose of the polls was not to predict an election outcome; it was to “test public sentiment on single issues… when public interest is at its height.”

Testing “public sentiment” in Australia has almost as long a history as in the United States; in September, it will be eighty years since the first Gallup poll, run by Roy Morgan, started gathering Australians’ opinions on a range of issues. Since 1971, when the Australian Sales Research Bureau (for the Sydney Morning Herald and the Age) and the Australian Nationwide Opinion Poll (for the Australian) broke the Morgan monopoly, Australian newspapers have commissioned various polling companies to test opinion when public interest in an issue has been “at its height” but also when public interest has barely been engaged.

What is new this year is the arrival of the Resolve Political Monitor. Until now, issue-based polling has been dominated by the Essential Report, whose findings appear fortnightly in the Guardian Australia. In April, to some fanfare, the company that produces the Monitor, Resolve Strategic, run by Jim Reed, began polling monthly for the Sydney Morning Herald and the Age. Newspoll remains dominant in what it, and most of the political class, sees as the main game — calculating two-party-preferred voting intentions.

Until April, neither the Herald nor the Age had commissioned regular polling since the May 2019 election, when both mastheads — and the Australian Financial Review — predicted a Labor win. All three relied on Ipsos, which estimated that Labor led 51–49 on the two-party-preferred vote, an error slightly less egregious than that recorded by other pollsters, but an error nonetheless.

Resolve, which assures potential clients that it does “the best work,” having been set up “to introduce the advanced research techniques practised by political parties to the communications industry,” wasn’t around for that debacle. Reed insists that survey research questions need to be “understood,” response categories need to be “appropriate,” and there could be “no proxy for proper testing.”

For its latest Monitor, conducted 8–12 June, Resolve was commissioned to “test public sentiment” on Australia’s quarantine capacity, carbon emissions and relations with China, and the uptake of the Covid vaccines. To work one’s way through the Herald’s coverage of the results is to find the odd question without tables or graphs, the odd graph that doesn’t report the response distribution for the sample as a whole, and accounts of the questions that differ between print and online versions, if you have sufficient ingenuity to find the two. It is also to become increasingly aware of the poll’s weaknesses (including its polling on individual behaviour around the vaccine, to which we’ll return); its capacity to mislead readers; and, to the policymakers Gallup privileged, its limited utility.

Some of the weaknesses of the poll should be clear to anyone who has even a passing awareness that polls shouldn’t ask questions many respondents will be in no position to answer. Some of the weaknesses might be evident only to a reader who knows something about how questions should be asked. And some of its weaknesses can be illustrated by reference to other polls — the most recent Essential Media, but also the annual Lowy Institute Poll, whose 2021 poll, conducted 15–29 March, was published in the same week as the Monitor.

QUARANTINE

With the federal government under pressure to allow more Australian citizens back into the country and provide alternatives to the hotel quarantine provided by the states, the Monitor saw an opening: “There has been some debate in the media recently about whether Australia should increase or decrease its quarantine capacity to allow more people to enter the country, and if so how this is best handled,” it told respondents. “On this, which of the following comes closest to your own view?” The responses? “I think the number of people entering Australia should be reduced (36 per cent); I think the number of people entering Australia is about right now (19 per cent); I think we should increase hotel quarantine capacity so more people can enter Australia (7 per cent); I think we should increase purpose-built quarantine camp places so more people can enter Australia (30 per cent); Undecided (9 per cent).” For David Crowe, the Herald’s chief political correspondent, those percentages showed that “there is only minority support for increasing arrivals, even if it is done with more purpose-built facilities.”

But piling the various preferences (fewer, the same, more) and possibilities (“purpose-built quarantine camp places,” “hotel quarantine”) into a single question may not have done justice to what respondents actually wanted — or might have been enticed to consider. Those who wanted fewer arrivals might have been happy to accept the present number if more quarantine places (of either kind) had been on offer. Those who wanted “purpose-built quarantine camp places” may have been equally happy with “hotel quarantine capacity,” and vice versa, had they been allowed to say so — and the response may have changed again if “purpose-built quarantine” had not been described as “camp places.” Some may have wanted to increase the numbers entering Australia but not wanted either more “purpose-built quarantine camp places” or an increase in “hotel quarantine” places.

In the latest Essential poll, also conducted online, in this case on 16–20 June, no fewer than 65 per cent favoured “purpose-built quarantine facilities” as “Australia’s long-term approach to safely quarantining international travellers,” compared with 16 per cent who favoured “home quarantine” (a possibility the Monitor did not entertain), and 9 per cent who favoured “hotel quarantine.” While the two questions are not the same, some of the differences — the much clearer preference for “purpose-built quarantine camp places” over “hotel quarantine capacity,” and the reference to “purpose-built quarantine camp places” rather than “purpose-built quarantine facilities” — are instructive.

CARBON EMISSIONS

According to David Crowe’s lead story accompanying the first results of the June Monitor, “A majority of Australians want the federal government to cut greenhouse gas emissions to net zero by 2050 but do not want a carbon price.” Crowe based his conclusion on a question that asked respondents whether their “preferred method for Australia to reduce its carbon emissions” was by putting “a cost on emissions” (preferred by 13 per cent) or by using new technologies (61 per cent). In the print version of the story, the question is prefaced by the words “while both these methods can be used”; but “both methods” was not an option the question went on to offer.

Given the choice, respondents chose “new technologies” over the alternative — the alternative that involved a “cost.” Who would have thought? Not for nothing does the prime minister promote the idea of “technology not taxes.” Surely, he didn’t need the Monitor, or Crowe, to tell him he was “tapping into community sentiment with his vow.” Not many people would choose to pay for a lunch — assuming they might be forced to pay — when they could get one free.

If no one else is paying for your lunch, however, it doesn’t follow that you would not be prepared to pay for it yourself. The possibility that Crowe — and Resolve — had overlooked was that someone who prefers a new technology, especially when their attention is not drawn to any associated costs, may still be willing to have a “cost” put on emissions if new technologies (alone) won’t solve the problem. While the conclusion that people “do not want a carbon price” may have been correct, the reasoning behind it was invalid.

What of the timeline for any “cut”? And how far should the “cut” go? Responding to a separate question — a response that would become the premise for Crowe’s conclusion — 55 per cent of respondents supported “the federal government adopting a 2050 ‘net zero’ emissions target,” a figure revealed in the text of Crowe’s article but not in the accompanying table. The proportion of respondents either opposed to this proposal or “neutral/undecided” (45 per cent) was almost as great as the proportion in favour. In short, there was nothing like the consensus implied by either the front-page headline “Net Zero: Public Is Ready for CO2 Cuts,” or the online headline “Voters Want Australia to Set a Net Zero 2050 Emissions Target, but No Carbon Tax.”

And what did the very large proportion of those who classified themselves as “neutral/undecided” — about a third of the respondents (the report provides no precise number) — understand by words like “adopting,” “emissions targets,” and “net zero” — especially when the prime minister, no less, chooses his words around “net zero by 2050” so carefully? Respondents may have been less clear about what the question meant than the Monitor assumed they would be or the Herald imagined they were.

Perhaps respondents who were reluctant to commit to net zero by 2050 wanted the government to commit to a more modest target but one that could be achieved more quickly. “Asked whether it was more important to concentrate on meeting Australia’s 2030 commitment or to adopt a new 2050 goal [zero emissions?], 42 per cent of voters preferred to concentrate on the earlier target while 29 per cent wanted more importance [sic] on 2050,” Crowe reported; the exact question was published neither in print nor online. The reporting tells us nothing about those who were “neutral/undecided” about net zero by 2050: the 55 per cent who supported net zero may have included those who would have preferred “to concentrate on the earlier target”; but the 42 per cent, for the most part, may have been a different group. As to what, if anything, respondents were told about “the 2030 commitment” — that remains a mystery.

The fact that so many respondents (26 per cent) were “undecided” when asked to choose between “new technologies” and putting “a cost on emissions” may have reflected another problem with this question: it didn’t allow for respondents who did not want Australia to reduce its emissions or didn’t believe that it needed to.

In the Lowy Institute Poll, the majority of respondents (55 per cent) said that the government’s “main priority” in relation to “energy policy” should be “reducing carbon emissions” rather than either “reducing household bills” (32 per cent) or “reducing the risk of power blackouts” (12 per cent). On this evidence, the majority of those who wanted net zero by 2050 may have been prepared to countenance a “cost.” In addition to his reasoning being invalid, Crowe’s conclusion — and Reed’s — that the majority of Australians do not want a carbon tax may have been misleading.

CHINA

Most of the issue questions in June’s Monitor were about China. First, respondents were told that “Australia has taken a number of actions in relation to China in recent years, including those listed below. For each, please tell us whether you support or oppose the action that was taken.” The order of the list, Reed tells me, was randomised or rotated. The options were: strongly support, support, neutral/undecided, opposed, strongly oppose.

Published in descending order of support, Australia’s actions were described in these ways: “Cancelling visas of Chinese citizens suspected of being covert agents” (supported by 71 per cent, opposed by 6 per cent); “Speaking out against human rights issues involving the Uighur” (69–5); “Calling for an investigation into the source of COVID” (66–8); “Launching trade restriction cases against China via the WTO” (63–7); “Reviewing the 99-year lease of Darwin Port” (60–12); “Criticising China on its approach to Hong Kong and Taiwan” (59–10); “Banning Huawei from Australia’s 5G network” (58–9); “Criticising China on its taking over of the disputed Spratly Islands” (56–9); “Cancelling Victoria’s ‘Belt and Road’ agreement” (54–8); and “Warning of the chances of armed conflict with China” (45–19).

The first thing to say about most of these actions is that, unless they were prepared to endorse whatever Australia had done simply because Australia had done it — a point to which we will return — large numbers of respondents would have had little or no basis on which to answer. How many respondents would have heard of or known much about: covert agents and the cancelling of visas; the Uighur; the WTO, even had the acronym been spelled out; the ninety-nine-year lease of Darwin Port; China’s approach to Hong Kong and Taiwan; Huawei or Australia’s 5G network; the Spratly Islands; or Victoria’s Belt and Road agreement, let alone the cancelling of it. And how many would have heard about Mike Pezzullo (secretary of the Department of Home Affairs) or Peter Dutton (defence minister) “warning of the chances of armed conflict”? Most Australians’ knowledge of, or interest in, the finer points of China’s or Australia’s foreign policy is unlikely to be particularly extensive — something that a poll putting words into people’s mouths should not be allowed to disguise.

Just how many respondents had little or no basis on which to answer these questions we cannot say, but the proportions that ticked the box marked “neutral/undecided” provide one clue. The proportions that declined to register a judgement ranged from 24 to 26 per cent (the three actions most widely supported) to 36 to 38 per cent (the three actions least widely supported). These are high numbers; a comparison with the single-digit figure for the “undecided” on the Monitor’s quarantine question is striking.

Those self-identified as “neutral/undecided” in the Monitor no doubt included respondents who knew something of the matter at hand and were genuinely undecided about its merits; but the greater number are likely to have been respondents who hadn’t heard of the matter or given it much thought. And since many respondents will have been unwilling to admit that they knew little if anything about what was being asked, and simply indicated their support for whatever the government had done, the real number of those not in a position to answer is likely to have been much greater than the “neutral/undecided” figures suggest — very likely, over half. Reed himself concluded, on the basis of a quite different survey, that “no matter how inane and ill-conceived your question, and regardless of the inappropriateness of your response categories, a large proportion — perhaps all — survey respondents will try to give you an answer if compelled to do so.”

Had a preliminary question been asked along the lines Gallup once suggested — “Have you read or heard anything about…” — readers (politicians included) would have been much better served; even better, had the substantive question included “Do you have an opinion on this?” or, better still, “Have you thought much about this issue?” Any of these questions may have shown that support for Australia’s actions, not just opposition to them, was the preserve of minorities not majorities; and that the gap between supporters and opponents was narrower than the Monitor figures suggest. Properly pre-tested, these questions may not have been asked at all.
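To see what a filter does to the numbers, consider a stylised simulation. In the sketch below (Python, with every rate purely hypothetical), respondents are screened for awareness before the substantive question is put:

import random

random.seed(1)  # reproducible illustration

def interview(aware_rate=0.45, support_if_aware=0.60, undecided_if_aware=0.10):
    """Simulate one interview with a Gallup-style awareness filter."""
    if random.random() > aware_rate:
        return "not aware"   # screened out before the substantive question
    if random.random() < undecided_if_aware:
        return "undecided"   # aware of the issue but of no fixed view
    return "support" if random.random() < support_if_aware else "oppose"

answers = [interview() for _ in range(2000)]
for category in ("support", "oppose", "undecided", "not aware"):
    print(f"{category:>10}: {answers.count(category) / len(answers):.1%}")

On these made-up rates, support runs at about a quarter of the whole sample even though it leads opposition comfortably among those aware of the issue; that is the difference a filter can make to a headline figure.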

The second thing to say is that the Monitor’s question format lends itself to acquiescence, also known as agreement tendency or yea-saying. Having been told that these were all actions that “Australia” had taken — “Australia” being a cue, for most respondents, likely to carry a high positive affect — and knowing little or nothing about the substance of the actions, a substantial number of respondents are likely to have gone down the list, ticking “strongly support” or “support,” one after the other. Note that in relation to the top nine actions, the range of both “strong support” (33 to 41 per cent) and total “support” (54 to 71 per cent) is quite narrow. Opposition to any of the actions fluctuates even more narrowly (5 to 12 per cent). Both are precisely what we would expect if acquiescence loomed large and cognitive engagement was low.

On what basis would respondents, with little knowledge of these things, dissent? As Reed told the Herald ahead of the Monitor’s launch in April, “What people are thinking about [right now] is how they’re travelling themselves, in their own families and households, as we adapt to life under a global pandemic, and they’re thinking about how our leaders are performing in their response to this extraordinary challenge.”

Had respondents been told that these were the actions of the “Liberal–National Party government,” rather than the actions of “Australia,” respondents would have been given a rather different cue. We might then have expected some respondents to have supported or opposed the ten actions according to whether they were Labor or Coalition voters — more, if Labor objections to any of the Coalition’s actions had been noted; less, if Labor’s support for any of the Coalition’s actions had been noted. Had such a cue polarised the response, it would have narrowed the gap between the proportion that supported and the proportion that opposed the action. While the desire to avoid such a cue is understandable, more thought might have been given to the potential skew introduced by the cue that was chosen in its stead.

Having been taken through this list of Australia’s actions, respondents were then asked: “Do you think Australia should compromise on any of these points if it meant better trade and diplomatic relations with China? Please either pick ‘no’ or choose as many of the options as you like.” Most (56 per cent) of the respondents picked “no.” But “no” wasn’t just one option among many; it was the easiest option — physically, cognitively and emotionally — and in each of these senses the set of options was biased, however unwittingly, in its favour.

Only 12 to 17 per cent went to the trouble of ticking one or more of the other ten boxes, each identifying a different action on which they would be prepared to compromise — the same determined respondents, possibly, ticking more or less all of them. Again, the lack of discrimination — roughly the same low proportion willing to compromise over “criticising China on its taking over the disputed Spratly Islands” and “criticising China on its approach to Hong Kong and Taiwan,” for example — suggests low cognitive engagement. How many ticked no box at all (as many as 27 per cent, potentially, but no doubt fewer) was not reported. The full table was made available online but not in print.

“Some people have said that Australia should not antagonise China as it is a major trading partner with a large military, while others say that Australia should stick to its values, speak up or act against neighbours like China when we feel they are doing the wrong thing. Which of these views comes closest to your own?” This question on China is notable for being the only one that presented Australia’s dispute with China, and what to do about it, in terms of argument and counterargument rather than support for or opposition to a particular government response.

Whether the argument and counterargument were “balanced” is another matter. On the one hand, respondents were presented with a statement about China as “a major trading partner” (an implied risk) with “a large military” (a threat); on the other, they were given a rather longer statement about “Australia sticking to its values” (principled behaviour) against those who are “doing the wrong thing” (unprincipled behaviour). The outcome, surely, cannot have been in doubt: less than a quarter (23 per cent) thought that Australia should “think twice before antagonising” while nearly two-thirds (62 per cent) thought Australia should “stick to values & speak up,” to quote the labels on the pie-chart. On this question, as one would also expect, relatively few (15 per cent) were “undecided.”

Is this a finding worthy of an online headline of this kind — “Australians Wants Nation to ‘Stick to Its Values’ in China Dealings” — in an upmarket daily? Perhaps. But one would have to search quite a way through the annals of polling to find a majority in any country that favoured surrendering to an immoral bully — let alone doing so in the absence of a serious threat of war. Even if respondents imagined that the chances of war were substantial — and the public may be wont to exaggerate such threats — most respondents (75 per cent according to the latest Lowy poll) would have drawn comfort from their belief that “the United States would come to Australia’s defence if Australia were under threat.” Before crafting the question, it might have been a good idea to have considered the pattern of response it was likely to generate, and what — if anything — the pattern would mean.

Having asked about past actions, the Monitor moved on to test the water about future actions. “The Chinese government has said that it may block or place tariffs on other Australian imports in the future. If such trade sanctions were to occur, would you support or oppose the following potential courses of action?” In descending order of support, Australia’s “potential courses of action” were described as: “Focus on finding new export markets outside China” (supported by 79 per cent, opposed by 4 per cent); “Continue to seek a quiet diplomatic solution with China” (63–8); “Take each case to the WTO to try and reverse China’s actions” (56–7); “Restrict or place tariffs on the import of Chinese goods in retaliation” (53–12); “Add export [sic] tariffs on exports to China to compensate affected industries” (53–10); “Push for compensation from China for starting the COVID pandemic” (41–21); “Boycott the 2022 Winter Olympics, to be held in China” (33–30); “Break off diplomatic relations with China, including expelling diplomats” (29–30); “Do nothing so as not to antagonise China and make the situation worse” (15–48).

Here, clearly, respondents discriminated — at least at the top (63 to 79 per cent) and the bottom of the range (15 to 29 per cent). It helped that the two suggestions that were most widely supported referred to actions that no one in Australian public life had opposed: “focus[ing] on finding new export markets outside China”; and “continu[ing] to seek a quiet diplomatic solution with China,” a question that told respondents what Australia was (ostensibly) doing already and essentially invited them to endorse it. It also helped that the two least popular suggestions were actions that no one of any consequence had proposed: “Break off diplomatic relations with China, including expelling diplomats,” and “Do nothing so as not to antagonise China and make the situation worse,” a proposal that might have been understood as rejecting all of Australia’s past actions as well as precluding the search for “new export markets,” and the pursuit of “a quiet diplomatic solution.”

Every proposal that won majority support had to do with trade. Designed to respond to a Chinese tariff wall, proposals that ventured beyond trade — demands for compensation for Covid-19, boycotting the 2022 Winter Olympics, or breaking off diplomatic relations — failed to win majority support.

Noteworthy, too, is that without the comfort of knowing what “Australia” had already done, the levels of support for various future actions were lower at both the top of the range (56 per cent) and the bottom (33 per cent) — ignoring the two most popular and the two least popular suggestions — than they were for past actions (69 per cent and 45 per cent, respectively).

One of the most remarkable features of the Herald’s coverage of the poll is that the high “neutral/undecided” responses — and the exceptions — formed no part of its narrative. Overall, the proportion of respondents who classified themselves as “neutral/undecided” was lower in relation to past actions (an average of 31 per cent) than in relation to future actions (34 per cent); but it was still extraordinarily high. The only question that most respondents could relate to in the list of “potential courses of action” was the one that engaged with the wholly familiar and widely accepted idea of finding new markets; here, the “neutral/undecided” dropped to 17 per cent.

“I think the prejudice,” said Reed, commenting on the results of the questions on future actions in relation to China, “is ‘if this gets resolved and China starts buying our beef and barley again, that’s excellent.’ People see value in the trade relationship and they realise there’s an issue here.” This mercantilist framing of public opinion may be correct, but it is not one that sits particularly well with what the other questions on China purport to show. Nor does it fit well with the findings of the Lowy poll, where “Chinese investment in Australia” was also seen as a negative — and a big one (by 79 per cent) — as were “China’s military activities in our region” (93 per cent). Both were seen as “negative” by substantially larger proportions than in 2016, the last time the Lowy poll checked. And if there were still any doubt, one could look at how China’s favourability ratings have tanked across much of the First World.

DOING THINGS BETTER

All of these questions in the Monitor — the one on quarantine, at a time when the government was pondering whether to move beyond hotels; certainly, the ones on emissions, built around the prime minister’s slogan; but also, those on China — could have been written in the prime minister’s office. The fact that they weren’t tells us that those involved in constructing the poll held strong views of their own; the Herald’s Peter Hartcher, in particular, has just written a book on China. On seeing the results of all the questions, Hartcher wrote of his hope that they would “encourage the federal government in standing against Beijing’s list of 14 demands, and Labor to continue to stand with the government.”

Perhaps that was the point of the polling: to show that public opinion backed the prime minister. In this sense, polling that found majority support for “cancelling visas of Chinese citizens suspected of being covert agents,” for “speaking out against human rights issues involving the Uighur,” for the very public calling-out of China on Covid (though the question wasn’t exactly phrased this way), and so on, while at the same time finding majority support for what the poll, without a hint of irony, described as “continu[ing] to seek a quiet diplomatic solution,” could hardly have been bettered.

If providing cover for government policy wasn’t the point — if the Age and Herald would shudder to think of their polling as a form of propaganda — then the two papers need to reconsider how polls should be done. Crafting questions on matters that are keenly contested — questions that are worth asking in an appropriate manner — means having to take account of more than one view.

An important limitation of the Gallup model, which conceives of polling on an issue as a kind of referendum on that issue — a “sampling referendum” Gallup called it — is that referendums typically involve a single proposition with voters limited to either supporting or opposing it. A question in the Lowy poll, which didn’t follow the referendum model, found that while the majority (56 per cent) supported the proposition that “China is more to blame for the tensions in the Australia-China relationship,” and hardly anyone (4 per cent) agreed that “Australia is more to blame,” more than a third (38 per cent) supported the proposition that “they are equally to blame.” No doubt, had the Monitor asked this question, it would have found something similar.

One way of having polling that acknowledges alternative ways of framing issues is to involve those who hold alternative perspectives in the process of constructing the questions. In the case of China, what the Herald might have done was to have Hartcher sit down with someone like Geoff Raby, whose views on Australia’s relations with China are rather different. The fact that Hartcher and Raby barely reference each other in their respective books might make an exchange between them all the more refreshing. Raby is not necessarily better on China than Hartcher; that question isn’t relevant when it comes to constructing a poll. But Raby is at least as well credentialled. On China, as there are on Covid or on climate policy, there are any number of people who could have helped.

The job of the pollster is to work out how to ask the questions, to advise on the use of argument and counterargument as against approve/oppose, to think about the various assumptions the question makes about respondents or the demands it puts upon them, to pre-test or to build in filters, to contemplate the use of split samples, to organise the sequencing/rotation of questions, and so on.

According to its website, Resolve sees its work as “Always quality,” “Always insightful,” “Always practical” — this last, a dig at “academics, researching for the sake of knowledge or debating theory.” But it’s not just academics who might beg to differ. It would be difficult for anyone concerned with standards in the industry to say that the Monitor’s questions on quarantine, climate or China exemplified “quality” or “insight”; and if they fell well short on either, that the results offered something that was particularly “practical.”

The questions in the Monitor on the Covid vaccine — asking respondents whether they had been vaccinated, whether they were “likely” or “unlikely” to get vaccinated, and so on; and seeking reasons why they may have hesitated — were somewhat better. But these were questions of a different order. First, because asking respondents to report on their own actions, past or planned, is quite different from asking them about issues of public policy, even if people are not particularly good at predicting their own behaviour, especially in unusual circumstances, and response categories can still make a big difference. Second, because following up with a list of fourteen possible reasons, which allows for multiple responses, seems to cover almost all the possibilities, even if respondents are not necessarily very good at explaining their own motivations for doing — or not doing — things; a notable absence from the list is “don’t know.”

The Australian Polling Council, set up in the wake of the 2019 debacle to lift standards in the polling industry, and pollsters’ accountability, is not a body that Resolve wants to join, Reed tells me; apart from not wanting to join a club whose members include some he sees as beyond the pale, he doesn’t want to have to divulge “trade secrets.” If Resolve were to join the APC it might be obliged to lift its standards — if the APC can be persuaded to match the demands of the British Polling Council — and to raise its level of transparency not just by making available its computer tables with the questions, answers and question order but also by revealing some of its other “trade secrets.” The Herald and the Age, endlessly concerned with holding others to account, should insist on nothing less.

“We can’t put all of it in the data centre because of the scale of the results,” Tory Maguire, the Herald’s national editor, explained when announcing the launch of the Monitor, “but we will report on as much of it as our readers find interesting.” As it happens, none of the answers to any of the issue questions (or vaccine questions) polled in the last three months have found their way into “the permanent data centre.” How the Herald judges what its readers find “interesting,” only it would know. But as anyone may judge, there is nothing about “the scale of the results” that would prevent the “data centre” functioning as a repository for every one of the Monitor’s questions and the top-line results. What had sounded promising when it was announced pales by comparison with the repository established by the Lowy Institute. It’s not just the Monitor that needs to reconsider what it does; fifty years after breaking the Gallup monopoly in Australia, and showing that there are other ways of conducting polls, it’s also the Age and Herald that need to reconsider. •

Did late deciders confound the polls?

Predictions of the 2019 election result were way off the mark. But we still don’t know why

Everyone who believes the polls failed — individually and collectively — at the last election has a theory about why. Perhaps the pollsters had changed the way they found respondents and interviewed them? (Yet every mode — face-to-face interviewing, computer-assisted telephone interviewing via landlines and mobiles, robopolling, and interviewing online — produced more or less the same misleading result.) Perhaps the pollsters weighted their data inadequately? Did they over-sample the better-educated and under-sample people with little interest in politics? Perhaps, lemming-like, they all charged off in the same direction, following one or two wonky polls over the cliff? The list goes on…

But the theory that has got most traction in the post-election polling is one that has teased poll-watchers for longer than almost any of these, and has done so since the advent of pre-election polling in Australia in the 1940s. This is the theory that large discrepancies between what polls “predict” and what voters do can be explained by the existence of a large number of late deciders — voters who don’t really make up their minds until sometime after the last of the opinion polls are taken.

In 2019, if that theory is right, the late deciders need to have either switched their support to the Coalition after telling the pollsters they intended to vote for another party, or shifted to the Coalition after telling the pollsters that they didn’t know which party to support. It was, after all, the Coalition that the polls underestimated, and Labor that they overestimated. On a weighted average of all the final polls — Essential, Ipsos, Newspoll, Roy Morgan and YouGov Galaxy — the Coalition’s support was 38.7 per cent (though it went on to win 41.4 per cent of the vote) and Labor’s 35.8 per cent (though it secured just 33.3 per cent of the vote). Variation around these figures, poll by poll, wasn’t very marked. Nor was there much to separate the polls on the two-party-preferred vote: every poll underestimated the difference between the Coalition’s and Labor’s two-party-preferred vote by between 2.5 and 3.5 percentage points.
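For readers who want the mechanics of that weighted average: each poll’s estimate is weighted by its sample size before averaging. A minimal sketch in Python; Newspoll’s first preferences (38–37) and its sample of 3038 are reported later in this piece, but the other two entries are placeholders, not any pollster’s actual figures:

def weighted_average(polls, parties=("coalition", "labor")):
    """Sample-size-weighted average of first-preference estimates (per cent)."""
    total_n = sum(p["n"] for p in polls)
    return {
        party: sum(p[party] * p["n"] for p in polls) / total_n
        for party in parties
    }

final_polls = [
    {"coalition": 38.0, "labor": 37.0, "n": 3038},  # Newspoll, as reported below
    {"coalition": 39.0, "labor": 35.0, "n": 1600},  # hypothetical poll B
    {"coalition": 39.5, "labor": 34.5, "n": 1200},  # hypothetical poll C
]

actual = {"coalition": 41.4, "labor": 33.3}  # 2019 first-preference results
for party, polled in weighted_average(final_polls).items():
    print(f"{party}: polled {polled:.1f}, actual {actual[party]:.1f}, "
          f"error {polled - actual[party]:+.1f} points")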

The most recent, and most widely reported, research to have concluded that late deciders made the difference is a study of voters interviewed a month before the election and reinterviewed a month after it, published recently by the ANU Centre for Social Research and Methods. According to Nicholas Biddle, the author of the study, the group that determined the result comprised those who were “undecided in the lead-up to the election and those who said they were going to vote for the non-major parties but swung to the Coalition.” At the beginning of July, the Australian’s national affairs editor, Simon Benson, also argued that those who were only “leaning” towards Labor ahead of the election had “moved violently away from Labor” once they entered the polling booths and had a pencil in their hands; the “hard” undecided, those who registered in the opinion polls as “don’t knows,” also decided not to vote Labor. At the beginning of June, an Essential poll, conducted shortly after the election, had presented evidence for much the same point.

Over the years, the idea that polls may fail to pick the winner because they stop polling too early had become part of the industry’s stock-in-trade. Especially in the period before 1972, when there was only one pollster, Roy Morgan, the argument had been difficult to refute. By 2019, it was an oldie — but was it also a goodie?

The ANU study: For the Biddle report, two sets of data were collected from the Life in Australia panel, an online poll conducted by the Centre for Social Research and Methods. The first was collected between 8 and 26 April, with half the responses gathered by 11 April, five weeks ahead of the 18 May election; the second between 3 and 17 June, with half gathered by 6 June, three weeks after the election. The analysis was based on the 1844 people who participated in both.

In the first survey, respondents were asked how they intended to vote. Among those who would go on to participate in the June survey, the Coalition led Labor by 3.8 percentage points. In the second survey, respondents were asked how they had voted; among those who had participated in the April survey, the Coalition led Labor by 6.4 percentage points. These figures, based on first preferences, included those who said they didn’t know how they were going to vote (April) and those who didn’t vote (June).

Although Biddle says that the data “on actual voting behaviour and voting intentions” were collected “without recourse to recall,” this is misleading. While the data on voting intentions were collected “without recourse to recall” — this is axiomatic — the same cannot be said for the data on voting behaviour. The validity of the data on voting behaviour, collected well after the election, is wholly dependent on the accuracy of respondents’ recall and their willingness to be open about how they remember voting. It can’t be taken for granted.

Among those who participated in both waves and reported either intending to vote (April) or having voted (June), support shifted. The Coalition’s support increased from 38.0 to 42.2 per cent, Labor’s increased from 34.1 to 35.4 per cent, while support for “Other” fell from 14.4 to 8.7 per cent. Only the Greens (13.6 per cent in April and 13.7 per cent recalled in June) recorded no shift.

The panel slightly overshot the Coalition’s primary vote at the election (41.4 per cent) and, as the polls had done, also overshot Labor’s (35.4 per cent). More importantly, it overshot the Greens (10.4 per cent), and undershot the vote for Other (14.9 per cent), and did so by sizeable margins. It overestimated the Greens by 3.3 percentage points, or about one-third, and underestimated Other by 6.2 percentage points, or more than a third. These are errors the polls did not make. A problem with “Australia’s first and only probability-based panel,” as the ANU study is billed, or a problem with its respondents’ recall of how they really voted? None of these figures — or the comparisons with the polls — are included in Biddle’s report; I’ve derived them from the report’s Table 3. Of course, the total shift in support across the parties was much greater than these numbers might indicate.

From his data, Biddle draws three conclusions: that “voter volatility… appears to have been a key determinant of why the election result was different from that predicted by the polls”; that part of the “swing towards the Coalition during the election campaign” came “from those who had intended to vote for minor parties,” a group from which he excludes the Greens; and that the swing also came from those “who did not know who they would vote for.”

None of these inferences necessarily follows from the data. Indeed, some are plainly wrong. First, voter volatility only comes into the picture on the assumption that the polls were accurate at the time they were taken. And before settling on “volatility” to explain why they didn’t work as predictions, one needs to judge that against competing explanations. Nothing in the study’s findings discounts the possibility that the public polls — which varied remarkably little during the campaign, hence the suspicions of “herding” — were plagued by problems of the sort he notes in relation to the 2015 polls in Britain (too many Labour voters in the pollsters’ samples) and the 2016 polls in the United States (inadequate weighting for education, in particular), alternative explanations he never seriously considers.

Second, while positing a last-minute switch to the Coalition among those who had intended to vote for the minor parties might work with the data from Biddle’s panel, it cannot explain the problem with the polls. Had its vote swung to the Coalition, the minor-party vote would have finished up being a good deal smaller than that estimated by the polls. But at 25.3 per cent, minor-party support turned out almost exactly as the polls expected (25.7 per cent, on a weighted average). In its estimate of the minor-party vote — the Greens vote, the Other vote, or both — the ANU panel, as we have seen, turned out to be less accurate (21.4 per cent).


Third, in the absence of a swing from minor-party voters to the Coalition, a last-minute swing by those in the panel “who did not know who they would vote for” can’t explain the result. That’s the case even if the swing among panel members, reported by Biddle, occurred entirely on the day of the election and not at some earlier time between April and 18 May, the only timeline the data allow. In the final pre-election polls, those classified as “don’t know” — 5.7 per cent on the weighted average — would have had to split about 4–1 in favour of the Coalition over Labor on election day in order to boost the Coalition’s vote share to something close to 41.4 per cent and reduce Labor’s vote share to something close to 33.3 per cent (unavoidably rendering the polls’ estimate of the minor-party and independent vote slightly less accurate). In the ANU panel, those who had registered as “don’t know” in April recalled dividing 42 (Coalition), 21 (Labor) and 36 (Other) in May. That is certainly a lopsided result (2–1 in favour of the Coalition over Labor) but nowhere near as lopsided as would be required to increase the gap between the Coalition and Labor in the polls (roughly three percentage points) to eight percentage points, the gap between the Coalition and Labor at the election.

The C|T research: Biddle wasn’t the first to argue that there was a late swing — a swing that the polls couldn’t help but miss — and to produce new data that purported to show it. Already, the Australian’s Simon Benson had publicised another piece of research — “the most comprehensive and intelligent analysis so far” — said to show the effect on the election of “[hard] undecided and ‘soft’” voters who had swung late.

This research was conducted by “a private research firm” (the C|T Group, in fact — the political consultancy that polled for the Liberal Party) and “provided to senior Liberals and shown to the Weekend Australian.” Its findings — released without the knowledge or endorsement of any senior member in the Group — were said to show that: (a) ahead of the election, “many Labor voters” had been “only leaning towards Labor” — having been classified initially as “don’t knows,” nominating Labor, presumably, only after being pressed about the party for which they were likely to vote (in the jargon of the trade, after being asked “a leaner”); (b) “on the day of the election,” these Labor “leaners” — plus the “‘hard’ undecided” who remained “don’t knows” after the “leaner” (“about 5 per cent”) — “couldn’t bring themselves to back Labor” and “largely went with a minor party”; and (c) via the minor parties, the preferences of both “came over to the Coalition.” Benson quotes the “research briefing” as saying, “Rather than Newspoll results suggesting Newspoll ‘got it wrong,’ a more informed interpretation is that the ‘“hard” undecided’ voters (those still undecided on May 17) did not support Labor on election day.”

But the story doesn’t survive the most cursory of checks. If the “soft” Labor voters — the “leaners” — and the “don’t knows” (the “hard” undecided) moved to the minor parties on election day, Newspoll’s estimate of the vote for the minor parties must have been an underestimate. In fact, Newspoll’s estimate of the vote for “others” was an overestimate: “others” in the final Newspoll were 16 per cent; at the election, they accounted for 14.9 per cent of the vote. (I exclude the Greens from what the analysis calls “others,” almost all of them supporters of Pauline Hanson’s One Nation or the United Australia Party, simply because it makes little sense to assume that many “softly” committed Labor supporters switched to the Greens and then preferenced the Coalition.) Newspoll didn’t underestimate the vote for “others,” and neither did the final polls from Essential, Galaxy, Ipsos and Morgan.

“Labor and Shorten,” Benson says, may “have made it very difficult for their soft supporter base to stick with them” — and, he might have added, difficult for the “hard” undecided to swing to them. But not all the polls overestimated Labor’s vote, as Newspoll did. Ipsos, which provided a better estimate than Newspoll of the Coalition’s first-preference lead over Labor, estimated Labor’s support at 33 per cent — almost exactly the proportion that voted Labor. Roy Morgan was also closer than Newspoll in estimating Labor’s first-preference vote; so, too, was Essential.

Newspoll estimated the “don’t knows” (the “hard” undecided) at 4 per cent — not 5 per cent, the estimate in the post-election private polling. By ignoring the “don’t knows,” as all the polls did, it was effectively assuming that they would split in much the same way as those who had nominated a party: 38 (Coalition), 37 (Labor), 9 (Greens) and 16 (Other). If we assume, for the sake of the argument, that three times as many of the “don’t knows” voted Coalition as voted Labor, then a more accurate Newspoll would have been one in which the “don’t knows” were split 60–20–10–10. Had that happened, Newspoll would have estimated the Coalition’s first-preference vote at 39 per cent, Labor’s at 36 per cent (allowing for rounding) — a Coalition lead of three percentage points compared with the lead of one percentage point it actually reported. Since the Coalition’s winning margin was 8.1 percentage points, an estimate of three percentage points would have been better than an estimate of one percentage point, but not much better. On the other hand, Newspoll’s estimate of the Coalition’s share of the two-party-preferred vote would have been 50.4 per cent (50.5 per cent if it stuck to 0.5 per cent as its smallest unit) not 48.5 per cent. Compared with the actual tally of 51.5 per cent, this estimate of the two-party-preferred would have been considerably better.
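The arithmetic is easy to check. The Python sketch below reproduces it, with one caveat: the preference flows used for the two-party-preferred figure are assumptions made here for illustration, not flows Newspoll published:

# Newspoll's decided split, and the assumed 60-20-10-10 split of the
# 4 per cent of "don't knows" (Coalition-Labor-Greens-Other).
DECIDED = {"coalition": 38.0, "labor": 37.0, "greens": 9.0, "other": 16.0}
DK_SHARE = 0.04
DK_SPLIT = {"coalition": 60.0, "labor": 20.0, "greens": 10.0, "other": 10.0}

def redistribute(decided, dk_share, dk_split):
    """Blend the decided vote with an assumed split of the undecided."""
    return {
        party: (1 - dk_share) * decided[party] + dk_share * dk_split[party]
        for party in decided
    }

rejigged = redistribute(DECIDED, DK_SHARE, DK_SPLIT)
print({party: round(share, 1) for party, share in rejigged.items()})
# coalition 38.9, labor 36.3: the 39-36 lead described above, after rounding

# Hypothetical preference flows to the Coalition, chosen for illustration only.
FLOW_TO_COALITION = {"coalition": 1.0, "labor": 0.0, "greens": 0.20, "other": 0.62}
tpp = sum(rejigged[party] * FLOW_TO_COALITION[party] for party in rejigged)
print(f"Coalition two-party-preferred: {tpp:.1f} per cent")  # about 50.5

Run with different flow assumptions, the same sketch shows how sensitive the two-party-preferred figure is to preference behaviour the polls cannot observe directly.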

Newspoll was at liberty to adjust its figures along these lines. It didn’t, presumably because it wasn’t persuaded that there were good reasons to do so. But if Newspoll’s figures are to be rejigged, why not those of the polls that don’t appear in the Australian? Most of them had a higher proportion of “don’t knows” to redistribute (5 per cent, Morgan; 7 per cent, Ipsos and YouGov Galaxy), and all (except Morgan on the two-party-preferred) had results as close, if not closer, to the mark than Newspoll’s. Rejigged, their figures would benefit even more substantially than Newspoll’s. For some reason, the research Benson cites appears not to have noticed this; and if Benson noticed it, he didn’t draw it to the attention of his readers.

Like all the polls, Newspoll got the election wrong. But Newspoll’s performance, on some measures, was worse than other polls: both its overestimate of the Labor vote and its underestimate of the Coalition vote were greater than any other poll’s. Having stayed in the field later than all the others — it ran its final poll from Tuesday through to Friday — and having boosted its sample size to 3038, nearly twice the number used by anyone else, Newspoll had given itself the best possible chance of picking up the late swing to the Coalition for which, no doubt, both the Australian and its readers were hoping. But a late swing to the Coalition is something it did not pick up. Its final poll detected what Benson described, literally though ludicrously, as a “half-point break” not to the Coalition but “towards Labor.”

The fact that C|T’s findings surfaced in the Weekend Australian is no great surprise. Where better for the C|T researcher to drop the findings than into the hands of the Australian, the newspaper that commissioned Newspoll, hence the newspaper where the research was most likely to get a run and least likely to be critically examined? The findings, as Benson wrote, offered “another explanation” for why Labor hadn’t done as well as the polls expected. And they seemed to get Newspoll off the hook: “Even polling on Friday night would not have picked up what was going to happen.”

The Essential poll: The C|T Group wasn’t the first to produce research purporting to show a late swing either. That honour belongs to Essential Media — a firm that conducted polls on its own account and then placed them with the Guardian.

Immediately after the election, Essential’s Peter Lewis wondered whether the polls had erred by simply “removing” the “don’t knows” — what the C|T research would call the “hard” undecided — from the poll; “removing” them, as all the pollsters had done, meant, in effect, assuming that they would split along the same lines as the respondents who said that they did know how they were going to vote. Essential had not only “removed” that 8 per cent of the sample categorised as “undecided” — a figure Lewis revealed for the first time — which was “nearly double the number from previous elections,” it had also given insufficient thought to another 18 per cent of respondents who “told us they hadn’t been paying a lot of attention to the campaign.” As a result, Lewis conceded, the company may have missed the “possibility” that “the most disengaged 10 per cent” — why 10 per cent? — had “turned up on election day and voted overwhelmingly for the Coalition.”

To test this theory, Essential conducted another poll. According to this poll (or at least the Guardian’s reporting of it — Essential has not responded to a request for a copy), the result “underscore[d] the fact that undecided voters broke the Coalition’s way in the final weeks of the campaign, with 40 per cent of people who made up their minds in the closing week backing the Coalition, compared to 31 per cent for Labor.” Of those who had been “undecided” on election day — 11 per cent of the post-election sample — “38 per cent broke Morrison’s way and 27 per cent Bill Shorten’s way.” From this, the Guardian inferred that the Coalition did especially well from late deciders. Though the story didn’t say it, Lewis’s theory, it seemed, had been confirmed.

But had it? One problem with the analysis is that those who made up their mind either during the last week or on election day weren’t necessarily those categorised as “don’t know” in Essential’s final poll; that group may have included respondents who indicated a party preference but hadn’t made up their minds about whether that was the way they would actually vote. Another problem is that the report doesn’t say what proportion of respondents made up their minds in the final week. And we are not told what proportion was changing from one party (which?) to another party (which?) rather than simply confirming an earlier decision to vote for one of the parties and not another. Without knowing any of this, there is no way of estimating the impact a 40–31 split among the “undecided” in the final week would have had on the overall voting-intention figures.

The figures for election day itself give us more to work with; but they don’t do much to confirm the thesis. To see what difference a 38–27 split would have made (38–27–35, allowing for “others”), we need to compare it with the 40–36–24 split implied by the assumption that the “undecided” would divide in much the same way as the rest of the sample. Since the proportion of “don’t knows” in Essential’s final pre-election poll was 8 per cent (not 11 per cent), the new ratios imply that 3 per cent (unchanged) would have voted for the Coalition, 2 per cent (rather than 3 per cent) would have voted Labor, and 3 per cent (instead of 2 per cent) would have voted for some other party.

On these figures, had the final Essential poll been conducted at the same time as the final Newspoll, the Coalition’s share of the distribution would have remained unchanged (about 40 per cent) and Labor’s would have come down from 36 to 35 per cent. In terms of the two-party-preferred vote, the Coalition’s share would have risen from 48.5 to 48.8 per cent (49 per cent, if we round to the nearest 0.5 per cent) — the two-party estimate produced by YouGov Galaxy and Ipsos. For Essential, this would have been a better set of figures — but no cigar.
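A short sketch, using only the figures quoted above (and not Essential’s own workings), shows where those percentage points come from:

```python
# How an 8 per cent undecided bloc translates into points of the
# whole sample under each assumed split of the "undecided."
dk = 8  # per cent of Essential's final pre-election sample

def dk_points(split):
    """Points of the whole sample implied by a given split."""
    return {party: round(pct * dk / 100) for party, pct in split.items()}

print(dk_points({"Coalition": 40, "Labor": 36, "Other": 24}))
# {'Coalition': 3, 'Labor': 3, 'Other': 2} -- the proportional assumption
print(dk_points({"Coalition": 38, "Labor": 27, "Other": 35}))
# {'Coalition': 3, 'Labor': 2, 'Other': 3} -- the post-election split
```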

Evidence, post-election, that the “don’t knows” favoured the Coalition — let alone did so by a wide margin — is not unequivocal. A poll conducted by JWS, in the two days after the election, shows virtually no difference among those who said they had decided on their vote in the last week, including election day, between the size of the Coalition vote (39 per cent) and the size of the Labor vote (37 per cent). Moreover, among those who had voted Greens, 49 per cent also said they had decided late.

The Guardian’s account of Essential’s analysis, like all the arguments for a “late swing,” fails to mention the exit poll conducted by YouGov Galaxy for Channel 9, which purported to show that the government was headed for defeat — and by a similar margin to that predicted by the pre-election polls.


Oldies can be goodies. Given that the polls, using different techniques, missed the mark by roughly the same amount, all pointing in the wrong direction, the idea of a late swing might be especially tempting. But if the decisions of late switchers are to explain why the polls performed poorly, these voters would have to have switched from Labor to the Coalition — not from minor parties to the Coalition. For respondents who said they “didn’t know” how they were going to vote to have made the difference, they would have to have voted for the Coalition — and to have done so overwhelmingly.

Responding to pollsters, respondents can be strategic or they can be sincere. If they are strategic, “late deciders” may not be late deciders at all; for the most part, they will simply be respondents who dissemble. Mark Textor, co-founder of the C|T Group, and Australia’s most internationally experienced pollster, insists that respondents now “game” the polls. “Knowing the results will be published,” he observed after the 2015 British election (referring, presumably, to Britain’s public rather than its private polls), “leads many respondents to give the most dramatic choice that sends the message of the day… they are using their answers to ‘tickle up’ one party or another.”

Misleading pollsters, though not necessarily in this way, has a long history. As early as 1940, the American Institute of Public Opinion used a “secret ballot” to encourage respondents to be honest about how they intended to vote. The Australian Gallup Poll, worried about its under-reporting of the DLP vote, would later introduce a similar device. More recently, it has become fashionable to talk knowingly about the “shy Tory” — respondents who may be perfectly certain that they are going to vote for a party on the right but don’t feel comfortable admitting it to a pollster, not only in a face-to-face interview, apparently, but also in a robopoll or in response to an online questionnaire. If the polls were “gamed” in 2019, it won’t have shown up, and it won’t have mattered, provided it affected the Labor and Coalition vote in equal measure. That voters didn’t drift from the minor parties to the Coalition at the last minute is clear from a comparison of the final polls and the election results. The final destination of the “don’t knows,” however, cannot be established in this way.

A “late swing,” including claims about the “don’t knows” dividing disproportionately in favour of one party or another, has long been invoked by pollsters who believe their polls were fine at the time they were taken — only to be overtaken by events. This line of defence can take a pollster only so far. Always, the challenge has been to find a narrative that fills the gap between what the last poll showed and how the electorate actually voted — the last-minute intervention of Archbishop Mannix in the 1958 campaign, for example, or the death of President Kennedy shortly before the 1963 election — and to do so plausibly, if not always persuasively.

Reviewing the performance of the polls this time, Graham Young, a pollster who claims to have “pioneer[ed] the use of the Internet for qualitative and quantitative polling in Australia,” also concluded that “undecided voters, and a late swing to the government, rather than problems with methodologies” explained the polls’ “failure.” His narrative? That Bill Shorten had “pulled up stumps” too early and gone “for a beer on Friday, while Morrison was still working hard, just as people were making their final decision.” Benson’s narrative turned out to be very much the same. The idea that over 350,000 voters responded to the last-minute campaigning — or absence of it — by switching their vote from Labor to the Coalition stretches belief.

Strikingly, none of those responsible for actually producing the polls sought refuge in Benson’s or Biddle’s or Young’s line of argument — certainly, not on its own. For Lewis, “the quality of poll sampling” also merited examination — not only in the online polling of the kind Essential used but also in the modes used by other pollsters. He thought that the problems pollsters encountered around the weighting of their data, especially data gathered from groups reluctant to respond, warranted investigation as well.

John Utting, another pollster, though not one involved in this election, wasn’t buying the last-minute-change argument at all. He thought more structural factors were at work. Had the kinds of problems that had brought the polls undone, he wondered, existed undetected for a long time? “Did polling create a parallel universe where all the activity of the past few years, especially the leadership coups and prime ministerial changes, were based on illusions, phantoms of public opinion that did not exist?”

Not, apparently, in some of the private polling. The last of the Liberal Party’s “track polling” — polling conducted nightly during the campaign using rolling samples in twenty seats — put the Liberals ahead of Labor, on the final Thursday, 43–33, according to a report that quotes Andrew Hirst, the party director, as its source. Exactly which seats — five of them Labor, fifteen Liberal — were polled is not disclosed, and Hirst has declined to name them. Nor are we told how the polling was conducted. But if the polling was as accurate as the story implies, it follows that: we don’t have to posit a last-minute swing; we don’t have to worry about the need to track down “shy Tories” or similar respondents (and non-respondents) who may have gamed the polls; and we can accept that whatever mode C|T chose, its polling worked. Not only did it work during the campaign; the polling showed that the government’s fortunes had “turned around immediately after the [early April] budget.” This suggests that the problems encountered by those pollsters that used the same mode — and there must have been some that did, given the range of modes deployed during the campaign — could have been overcome had they (or their impecunious paymasters) been both willing and able to invest in them properly.

The fact that none of the post-election surveys has succeeded in identifying any last-minute swing suggests that a swing of any great significance simply didn’t happen. While it’s conceivable that evidence of a swing will still emerge, this line of inquiry seems to have reached a dead end for now. It’s one thing to go back to the past in search of explanations; it’s another thing to be trapped in it. •

Murray Goot is a member of the Association of Market and Social Research Organisations panel inquiring into the 2019 federal election polls. The views expressed here are his own. For comments on an earlier draft of this article, he is indebted to Ian Watson and Peter Browne.

Who controls opinion polling in Australia, what else we need to know about the polls, and why it matters https://insidestory.org.au/who-controls-opinion-polling-in-australia/ Wed, 15 May 2019 04:58:17 +0000 http://staging.insidestory.org.au/?p=55118

The decision by former Fairfax papers to sack one of their market researchers raised thorny questions about pollsters and their polls

Most polling stories during the campaign have focused on the horse race — or horse races, given that polling is being done in particular seats as well as nationally, and in some cases for the Senate. But one story, at the very beginning of the contest, focused on what publishers ought to know about pollsters before commissioning their work, what the public has a right to know, and what respondents should know if they are to give informed consent.

According to the story, James Chessell, group executive editor of the Sydney Morning Herald and the Age, had said that his papers would “no longer commission uComms” to carry out polling now that he had become aware of who owned it. The story, which was published by ABC Investigations on the day parliament was prorogued, also reported that “some other uComms clients now intend to stop using the company after being made aware of its ownership.” Clients mentioned in the report included GetUp!, Greenpeace, the Australian Youth Climate Coalition, the Australia Institute, the Animal Justice Party, and a number of political candidates — none of them conservative — in the federal electorates of Higgins and Wentworth, and in the state electorate of Sydney.

Who owns uComms (or rather UComms, as it appears in the company registry)? Effectively, three shareholders: Sally McManus, ACTU secretary; Michael O’Connor, national secretary of the CFMMEU, an affiliate of the ACTU; and James Stewart, a former executive of another polling operation, ReachTEL. Both McManus and O’Connor are “listed as shareholders on behalf of their organisations,” said the ABC. It also noted that “[b]efore being contacted by the ABC, uComms’s business address was listed in company documents and on published polls as being the same Melbourne CBD office building as the ACTU.” Subsequently, its listed address had changed “to a nearby Melbourne CBD address.”

The ABC had discovered much of this by searching UComms’s records, lodged with the corporate regulator ASIC. Even then, it had been forced to dig “deep,” since UComms’s records made “no explicit reference to the ACTU or the CFMMEU.” An initial search revealed only one shareholder, a company called uPoint Pty Ltd. McManus, O’Connor and Stewart were the (non-beneficial) shareholders of uPoint, making them owners of UComms only indirectly.

UComms styles its polls — or it did until shortly after the story broke — as “UComms/ReachTEL” or “UComms Powered by ReachTEL.” This is not because UComms is ReachTEL renamed but because it uses ReachTEL’s original robo-polling and SMS technology. ReachTEL, founded by Stewart and Nick Adams in 2008, was acquired in September 2015 by Veda, a data analytics company that is also Australia’s largest credit reference agency. The sale of ReachTEL to Veda, said Stewart, would allow the two “to grow the business with Veda… [and] enhance our research offering, and the union of our collections and marketing platforms [would] expand our market leading solutions.” (Veda itself was acquired by Equifax, a member of the S&P 500, in February 2016.)

Whether Chessell held himself or his newspapers responsible for not checking on UComms’s ownership, or blamed UComms for not presenting him or his editors with a statement of ownership, is not entirely clear. The former seems unlikely. According to the report, “none of the eleven uComms clients contacted by the ABC said they had thought to make a paid search of the company’s structure” — a search that would have set them back all of $17. “We do not routinely do ASIC searches of all companies with which we do business,” said the NSW Nature Conservation Council, one of UComms’s (now former?) clients.

If clients aren’t responsible for checking these things, should the company have told them? UComms thinks not. But rather than argue that it is up to clients to protect themselves from reputational damage, UComms says that any concern on that score would have been without foundation. “The notion there would be a conflict of interest is ludicrous — the whole point of establishing the company in the first place was to provide a quality service at lowest possible cost for both unions and the broader movement,” a representative was quoted as saying. “We’re growing the union movement to fight for fairer workplace rules and that means we need to make use of the latest technology. uComms is a part of that effort.” The trouble with this defence, of course, is that clients like the SMH and the Age are not part of any union or “broader movement”; if anything, just the opposite.

The ABC’s story noted that UComms had “received widespread media coverage in the past twelve months for its polls, including a recent front-page splash commissioned by the Sydney Morning Herald predicting a Labor win in the New South Wales state election” — an election Labor lost. Was the SMH concerned that UComms had predetermined the result in Labor’s favour — on the assumption, perhaps, that a good set of figures for Labor would discourage the Coalition’s volunteers or deflate its vote? Apparently not. “There is no suggestion,” the report was careful to say, that “the outcome of uComms polling is influenced by its ownership structure.”

So, why the fuss? Because in the same way that justice not only needs to be done but must be seen to be done, polling that purports to give an objective measure of public opinion to any reader needs to be neutral and be seen as neutral — meaning, among other things, that it is conducted by those who don’t have a conflict of interest or agenda, however unconscious. (One reason why the uranium lobby dropped its own polling, in the late 1970s, and had some of its questions incorporated into other polls, was that Max Walsh of the Australian Financial Review, noting the provenance of the polls, discounted them.) While UComms’s state election poll may have been conducted in a thoroughly professional manner, if the company was “controlled by two of the most powerful forces on the left-side of politics,” as the report put it, there was a very real risk — if the ownership of UComms became known — that it would fail to satisfy the requirement that it be seen to be conducted in a thoroughly professional manner.

“Polling experts,” the ABC’s report insisted, “say uComms should have made clear to its clients, survey respondents and anybody reading their results that the Labor-aligned groups co-own the company.” But in the long history of polling in Australia, just when exactly have polling companies made it clear to their clients, to respondents, or to “anybody reading their results” who it is that owns them?

The rise of in-house polling. Where media outlets that publish poll results also own the company that produces them, the need to have the company clarify its ownership to its client hardly arises. From the early 1940s to the early 1970s, the only national poll — the Gallup Poll, formally known as Australian Public Opinion Polls (The Gallup Method), or APOP — was owned by a consortium of newspapers whose members, and associated mastheads, had exclusive rights to publish its results. The consortium consisted of the Melbourne Herald, whose managing director Keith Murdoch was responsible for bringing the Gallup Poll to Australia and for organising the group; the Sydney Sun; the Brisbane Courier-Mail; the Adelaide Advertiser; the West Australian in Perth; and the Hobart Mercury.

In 1971, when the Australian started publishing the results of APOP’s first rival as a national poll, Australian Nationwide Opinion Poll, or ANOP, it was publishing the results of a poll that News Ltd, majority owned by Keith’s son Rupert, had created, this time as a joint venture with Associated Newspapers in Britain and its British subsidiary, National Opinion Polls. After the 1974 election, ANOP’s managing director, Terry Beed, bought out Associated Newspapers’ half, which was sold because ANOP wasn’t making money; Murdoch lost interest, though the company’s losses were attractive for tax purposes; and the company was sold to two of ANOP’s other employees, Rod Cameron and Les Winton. By year’s end, the relationship between ANOP and the Australian had come to an end.

In 1985, News — this time with a local market research firm, Yann Campbell Hoare Wheeler, or YCHW — created a new poll, Newspoll, via Cudex, a joint venture company in which News and YCHW were equal partners. Newspoll’s findings were published by the Australian. In May 2015, after Cudex was dissolved, News was left without a direct stake in any polling organisation. Newspoll now became a brand within the stable of Galaxy, a company founded by David Briggs, a former Newspoll executive, who struck out on his own (with his wife as co-owner) in 2004. Since December 2017, Galaxy has been owned by the British polling organisation YouGov.

While in-house polling of the kind associated with two generations of the Murdochs may now be largely a thing of the past, in-house polling has not disappeared. The SMH publishes the results of its weekly “readers’ panel,” in which 2000 or so readers are asked to give their “feedback” on questions that touch on issues of public policy, politicians in the news, and so on. For the election campaign, the AFR, its stablemate, has also established a “reader panel,” though a much smaller one. But the most prominent of the in-house polls is the ABC’s Vote Compass. First run in 2013, it attracts more than a million participants. While the ABC’s reach is undoubtedly bigger and more diverse than the SMH’s, not to mention the AFR’s, respondents to Vote Compass self-select — not only in the sense of deciding whether to participate (a feature, not sufficiently recognised, of all polls), but also in the sense that respondents (as in all viewers’ or readers’ polls) are not brought into the poll through a process of sampling.

External pollsters come to the fore. The first newspapers to commission national polls from companies they didn’t have a stake in were the Age and the SMH. When Age Poll (known in Sydney as the Herald Survey) was created in 1970, the polling was done by Australian Sales Research Bureau (subsequently Irving Saulwick & Associates) with samples drawn initially from voters in Sydney and Melbourne. From 1972 until the arrangement came to an end in 1994, polling was conducted nationally. Between 1996 and mid 2014, Fairfax — the Age, the SMH, and the AFR — used AGB McNair, subsequently ACNielsen McNair, ACNielsen and finally Nielsen, for its national polls.

The ownership of these companies was not something to which the newspapers drew their readers’ attention; Fairfax was satisfied that no conflicts of interest were involved. Following Nielsen’s decision to withdraw from the field, Fairfax turned to another foreign-owned provider. Since October 2014 the old Fairfax mastheads (now owned by the Nine Entertainment Co.) have depended on the French-owned Ipsos — the third-largest market and public opinion company in the world — for their national polling, and UComms and ReachTEL (with occasional exceptions) for their state and single-seat polling.

In 1973, after losing the APOP contract to McNair — a consequence of an ill-advised National Press Club speech by Roy Morgan shortly before the 1972 election, in which he claimed not to have “read a textbook on statistics, nor on sampling… nor on public opinion polls,” and boasted of his very special ability to interpret the figures from his computer — Morgan Research, through Gary Morgan, began a long association with Sir Frank Packer’s (later, Kerry Packer’s) Bulletin. Again, the magazine saw no reason to say who owned the poll. In 1992, Morgan switched to Time magazine — he was replaced at the Bulletin by AGB McNair — before switching back to the Bulletin in 1995. But after the Morgan Poll badly misread opinion ahead of the 2001 election, its contract came to an end. The Morgan Poll has not been signed up by any media company since.

After the axing of APOP in 1987, when the Herald & Weekly Times — and hence APOP — was acquired by News Ltd, the various mastheads involved in the APOP consortium made new arrangements. Some polled in-house, others engaged outside suppliers. On occasion, they sang from the same song sheet; for the 1998 election, Quadrant, run by Ian McNair, the last custodian of APOP, ran their polls. Again, there were no declarations of interests — or the absence of any conflicts — as is now the norm for contributors to some academic journals and online sites like the Conversation.

Since 2013, all News Ltd’s metropolitan mastheads — the widest-circulating newspapers in every state — have used (YouGov) Galaxy. Again, none makes any mention of YouGov’s interests. As with other outlets that don’t disclose such details, declarations of ownership are deemed irrelevant, and disclosing irrelevant information would simply waste valuable space. Ultimately, however, it is the mastheads — not the suppliers — that have to take responsibility for what questions are asked, when they are asked, and by whom they are asked.

Surveys for free. Other media outlets have established arrangements through which they get first access to polls they have neither purchased from an outside provider nor conducted in-house. The two most prominent pollsters to have come to arrangements of this kind are JWS Research, which produces a series called “True Issues,” and Essential or Essential Media (formerly Essential Media Communications), which publishes the Essential Report — originally weekly, now fortnightly, though more frequently during the campaign. JWS has a relationship with the AFR, Essential with Guardian Australia; previously, Essential had an arrangement with another online publication, Crikey.

Presumably, the AFR knows that JWS numbers the Minerals Council of Australia, the Australian Coal Association, and the Property Council of Australia among its clients; and the Guardian knows that Essential describes itself — a bit like UComms — as “a public affairs and research company specialising in campaigning for progressive social and political organisations.” If they don’t know about any of this, it’s not because either JWS or Essential keeps it a secret: the information is on their websites. The pollsters’ backgrounds and connections, far from discouraging the arrangements with their respective publishers, may serve to recommend them.

What’s in this kind of arrangement for the pollsters is publicity, their results being published on the front page of an important newspaper or its e-equivalent. What’s in it for the publishers is editorial material that is “exclusive” and free. The Roy Morgan Research Centre also gives away its poll findings, not via an intermediary but by posting them on its website and sending them to its clients.

OWNERS AND PLAYERS

When interviewers first ventured into the field in 1941 to conduct a poll for APOP, they didn’t tell respondents that the company was owned by a group of newspapers, much less tell them who owned the papers or managed them; typically, market research is conducted on the basis that respondents are not to be told for whom the research is being conducted lest it influence the results. (Telling respondents where they could read the results would have been a different matter.) Nor, when they published APOP’s results, did newspapers tell their readers who owned the poll or that newspaper executives had helped determine the questions.

Yet the fact that APOP was owned by a group of newspapers led by Keith Murdoch, an important political player on the conservative side of Australian politics, occasioned controversy; in particular, it caused concern to those who didn’t share Murdoch’s politics or trust him to conduct a proper poll. In the Worker, readers were warned that “the ‘polls’” were “financed by newspapers whose interests were opposed to the interests of the Labor Movement.” The stridently anti-Murdoch Smith’s Weekly, noting that APOP required its interviewers to “not be known as ardent supporters of a particular political party,” asked whether “the same qualifications” had been laid down “for its newspaper proprietor subscribers?” There were even demands that the government should set up an organisation — perhaps as “a branch of the Statisticians Department,” suggested one Labor MP — to conduct polls devoid of “political gerrymandering,” rather than leave polling to private enterprise.

None of the newspapers that had come together to create APOP (a not-for-profit company) and publish its findings were sympathetic to Labor. As Sally Young shows in her recently published history of “Australia’s newspaper empires,” Paper Emperors, between 1922 and 1943 none of these newspapers had editorialised in favour of Labor at a federal election; none, as she also shows, would do so until Fairfax broke ranks in 1961. In 1946, a member of the Tasmanian parliament alleged that Gallup interviewers had been conducting polls for the Liberal Party. Did the Mercury, a stakeholder in APOP, ask Roy Morgan whether this was true? Whether true or not, Morgan appears to have said nothing about it.

In 1959, while employed as APOP’s managing director, Morgan stood as a “Progressive Independent” for election to the Melbourne City Council; once elected, it was a position he would hold until after his contract with APOP came to an end. Councillors representing business interests formed a non-official party, the Civic Group, which largely controlled the council. By the time he was defeated, in 1974, Morgan had become its leader. The only official party on the council, Labor, had seen its influence decline. By contrast, Morgan’s first mentor as a public opinion researcher, George Gallup, far from seeking public office of any kind, made a point of not even voting. The Melbourne Herald covered Morgan’s 1959 campaign, including the fact that he conducted a survey of electors in his ward. But it went on publishing APOP findings on party support and political issues without mentioning Morgan’s political involvement.

By the late 1960s, suspicions within Labor’s ranks that APOP was under-reporting Labor’s vote encouraged Rupert Murdoch to establish ANOP. In those days when being an “underdog” was not considered an advantage, Murdoch was keen to do what it took to see Labor win. How he would have reacted if ANOP had done work for the Labor Party while being published by the Australian is difficult to say; while ANOP did some work for the Whitlam government, possibly brokered by the party’s secretary, Mick Young, it did not work for the Labor Party. ANOP’s work for the party would come after its connection with News was severed.

Failures to disclose how polls are conducted. What should polling companies — or, more to the point, those who publish their findings — disclose to those trying to make sense of the polls? During the current campaign, with its focus on the vote, the Australian (Newspoll), and the SMH and the AFR (Ipsos) have published the date(s) on which their polling was conducted, the size of the sample, and the sampling variance, or “margin of error,” due to sampling. But sampling variance, sometimes misrepresented as “the maximum sampling error,” rarely becomes part of any discussion of what the figures produced by the poll mean, and helps drive out any mention of non-sampling error. Under Morgan, APOP was not required to disclose the date(s) on which the polling was conducted, the size of the sample, or the sampling variance; under McNair, APOP at least disclosed the size of its sample. Saulwick disclosed the date of the fieldwork and the size of the sample (usually 1000), but said nothing about sampling variance. The same is true currently of YouGov Galaxy, and of the Morgan Poll. ReachTEL, which rejoices in publishing its results to the first decimal point, also says nothing about sampling variance.
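For readers unfamiliar with the calculation, the “margin of error” usually quoted is the sampling variance at its maximum, which occurs when a party’s share is 50 per cent. Here is a hedged sketch; it assumes a simple random sample, which no real poll achieves, and it says nothing about the non-sampling error mentioned above.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95 per cent confidence interval, assuming a
    simple random sample of size n and a true share p."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical sample of 1000: the "maximum" margin, at p = 0.5,
# is about 3.1 percentage points
print(round(100 * margin_of_error(0.5, 1000), 1))   # 3.1
# Newspoll's boosted sample of 3038 narrows it to about 1.8 points
print(round(100 * margin_of_error(0.5, 3038), 1))   # 1.8
```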

Not every polling company or its client publishes the actual questions the poll has asked. Even with the question(s) on voting intention, there is a lack of disclosure. While Newspoll (and, in turn, the Australian) publishes the question it asks all respondents about how they intend to vote, as does YouGov Galaxy, no one following the Morgan Poll online, or Essential, or reading the Ipsos results in the AFR or SMH would know the question respondents had been asked. In particular, they wouldn’t know whether respondents had been presented with a list of parties from which to choose.

Presenting respondents with a list of parties may well prompt certain responses and repress others; not presenting respondents with a list may have different consequences. While the use of both approaches during the current campaign hasn’t attracted much attention from poll-watchers, Newspoll’s decision to add the United Australia Party to its list generated a discussion about how to compare polls that list a particular party with polls that do not. The AFR and SMH (and presumably the Age) publish the Ipsos figures for the Coalition, Labor and the Greens only; support for the other parties, which Ipsos also gathers, is swept out of sight by the papers and hidden under “other.”

Overlooked by most newspapers — the Australian, reporting Newspoll, is a notable exception — is the pollsters’ practice of posing a follow-up question to respondents who say they “don’t know” or are “unsure” how they will vote. This question is designed to get respondents to say to which party they are currently “leaning”; hence, the term “leaner.” Only after these respondents have been pushed — Essential pushes them twice — and the “don’t knows” reduced to a minimum, are the final voting-intention figures calculated and made public.

What do pollsters do with the remaining “don’t knows” — a figure that neither Ipsos nor YouGov Galaxy publishes? Newspoll makes it clear, as does Essential: “don’t knows” are excluded. In the past, however, not all pollsters have excluded them. Some pollsters have distributed them to one or other of the parties on the basis of which leader these respondents prefer or how they recall having voted at the last election.

There is also the not-so-small matter of the two-party-preferred figures. At the beginning of the campaign, Newspoll calculated these on the basis of preference flows at the 2016 election; so did Essential. How they distributed the first preferences that went to parties that didn’t exist in 2016 (the UAP, above all), they didn’t say. More recently, Newspoll has distributed preferences “based on recent federal and state elections,” an approach that has problems as well. Whether YouGov Galaxy, its stablemate, adopted this method for its national poll, conducted during the second week of the campaign, is hard to say from newspaper reports. Ipsos uses two methods: it looks at preference flows at the 2016 election, and it asks respondents who support minor parties to indicate whether they “will give a higher preference to the Labor Party candidate or the Liberal/National Party candidate?” In its last poll, happily, the two methods produced the same result. In its latest release, Morgan says it uses “respondent’s [sic] stated preferences.”
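To illustrate the “previous election” method described above, here is a hedged sketch; the preference-flow rates are placeholders of my own, not any pollster’s actual figures.

```python
# Deriving a two-party-preferred estimate from first preferences by
# applying assumed preference flows. All flow rates are illustrative.
first_prefs = {"Coalition": 38, "Labor": 37, "Greens": 9, "Other": 16}

# Assumed share of each minor party's preferences reaching the Coalition
flow_to_coalition = {"Greens": 0.18, "Other": 0.55}

coalition_2pp = first_prefs["Coalition"] + sum(
    first_prefs[party] * rate for party, rate in flow_to_coalition.items()
)
print(round(coalition_2pp, 1), round(100 - coalition_2pp, 1))
# 48.4 51.6 with these placeholder flows
```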

Where on the interview schedule the voting-intention questions are asked is something else few polls disclose. If the results of these questions are the most important results the poll generates — and no results are more closely scrutinised during a campaign — best practice suggests that the questions should be asked early in the interview; this ensures that the answers aren’t affected by the questions raised or answers given later. Ipsos asks its voting questions up-front. Under Morgan, perhaps to keep things low-key, APOP put them towards the end. Whatever it is that other pollsters do, they don’t advertise.

Even something as basic as the medium through which the interviews were conducted is not always clear; indeed, with some of the new technologies, it is not obvious that “interview” is still the appropriate word. Once upon a time, almost all interviewing was conducted face-to-face; in America, and beyond, face-to-face interviewing appeared to be part of “the Gallup method.” By the late 1970s, when more than 80 per cent of Australian adults had access to landlines, the industry shifted, largely, to telephones — interviewers dialling numbers at random, asking for someone in the household who met a set of demographic specifications (age and gender, typically), reading out the questions, and recording the answers. Answers were either recorded manually, to be punched into cards as code and processed by a computer, as Newspoll originally did; or, with Computer Assisted Telephone Interviewing, or CATI, recorded on screens and fed directly to a computer — an approach that soon became the industry standard.

The main hold-out was the Gallup Poll, which maintained its commitment to face-to-face interviewing; under McNair it continued to interview face-to-face until the end. In an industry that has largely moved on, Morgan still uses face-to-face interviewing for much of its work; during this election, all of Morgan’s national polls have been conducted face-to-face. Valuable in its own right, face-to-face interviewing helps Morgan — one of the country’s biggest market research firms — build a database of respondents that can be reached for other purposes, and by other means.

With the tweaking of telephone technologies and the rise of the internet — both of which have reduced costs massively — the pollster’s toolkit has become increasingly diverse. Ipsos, polling for the old Fairfax mastheads, continues to use what it only describes as “random digit dialling.” In its first poll of the campaign (though neither the AFR nor the SMH noted the fact), it managed to combine landlines with mobile phones — whether via CATI or by some means it didn’t say. The Australian says nothing at all about how Newspoll conducts its interviews; last time, respondents either answered online or were reached by robo-polling — questions asked on the telephone, but not by a live interviewer, and answered by someone in the household, though not necessarily the person from whom the pollster wants to hear — the data from the two methods somehow being combined. YouGov Galaxy appears to have moved its national polling online; at the last election, Galaxy (in line with its other brand, Newspoll) combined online polling with robo-polling — a mode of polling that YouGov doesn’t use in Britain. Essential has always polled online.

Whatever the mode, raw responses are never wholly representative of the population from which they are drawn. This is partly because some demographics are easier to reach than others, with those of non-English-speaking background and young men traditionally posing the biggest challenge, and because not everyone who is reached agrees to be interviewed. It is also because response rates, falling for years, are now typically in single digits — a change that may be more marked among some groups than others. And it is because, within a particular demographic, those who do respond may not be representative of those who do not; with weighted data one has to hope that this isn’t true — even when, as with Vote Compass (which we are told weights by gender, age, education, language, religion and even respondents’ unreliable recall of their past vote), it almost certainly is true.

If the actual distribution of a population’s relevant characteristics is known — location, age and gender are the parameters pollsters usually look at — weighting the data so that it better matches the distribution of these characteristics in the population at large addresses only the first two reasons. If other or additional demographics matter — characteristics that are overlooked (ethnicity or education, for example) or for which there are no population data that can be used (income, possibly) — the ability of weighting to fix even these problems can be severely limited.
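A minimal sketch of the simplest form of weighting, using cells defined by a single characteristic; the population and sample shares below are invented for illustration, not official figures.

```python
# Cell weighting on one known population margin (age bands).
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # assumed shares
sample     = {"18-34": 0.18, "35-54": 0.34, "55+": 0.48}  # achieved shares

weights = {cell: population[cell] / sample[cell] for cell in population}
for cell, w in weights.items():
    print(cell, round(w, 2))
# 18-34 1.67, 35-54 1.03, 55+ 0.73
# Weighting corrects the demographic mix; it cannot correct for
# respondents within a cell who differ from the non-respondents
# they stand in for.
```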

A longstanding mystery is what pollsters actually do to weight their numbers. Newspoll acknowledges its numbers are weighted, but doesn’t say what variables it has used or what weights it has applied. Ipsos applies weights, but its first poll of the campaign didn’t adjust for all the variables the AFR says it did — age, sex, location. YouGov Galaxy weights its data, but the report of its most recent national poll, carried by News Ltd’s Weekly Times, doesn’t actually say that it does. Morgan, too, doesn’t say whether it weights its data, though it surely does.

In their failure to disclose almost anything about their polls, the political parties are in a class of their own. On Anzac Day, when the Coalition and Labor had agreed to a truce on advertising, the UAP declared in the News Ltd press — via one of its full-page ads, repeated several times since — that its polling showed that “15 per cent of Australians” had “decided to vote for the United Australia Party” and that “the majority” of those “undecided” (“over 28 per cent of Australians”) would also “vote for the United Australia Party and bring real change to Australia.” If these were the answers, any reader might have asked, what were the questions?

But in reporting poll findings that are unsourced — and, in this case, also completely implausible — the UAP is hardly alone. Where would newspapers be, especially in this campaign, without stories sourced to one party or another claiming to reveal what “internal polling” is showing in this electorate or that? Whether journalists ever see the original reports, or even summaries, is doubtful. No polling company is ever mentioned, no account of the methods is ventured, no data… no nothing. Reports of polls conducted by interest groups are almost never so bare.

Since November 2004, Britain has had an umbrella organisation to which virtually every polling organisation of any importance has belonged; members include both Ipsos and YouGov. Companies that join the British Polling Council agree to “fully disclose all relevant data about their polls.” Indeed, they agree to “publish details of the questions that they have asked, describe fully the way the data have been analysed and give full access to all relevant computer tables.” The council’s “objects and rules” require members to post both the unweighted data and a description of the weighting procedures on their websites within two working days of the original public release of the findings. This doesn’t offer pollsters many places to hide. The defence of proprietorial privilege and claims to intellectual property get short shrift.

An attempt to establish something much more modest for Australia was made more than thirty years ago, ahead of the 1987 election, by Jim Alexander (then at AGB McNair) and Sol Lebovic (at the time, running Newspoll). Their initiative was inspired, in part, by the formation of two groups said to have operated during the 1987 British election — the British Market Research Society Advisory Group on Opinion Polls and the Association of Professional Polling Organisations. But because they didn’t want to go it alone, and not everyone was up for it — Morgan Research, in particular, would not have supported it — nothing came of the proposal.

An initiative of this kind need not rest with the pollsters. There is nothing to stop media outlets or other Australian clients requiring polling companies to fully disclose their practices along the lines mandated in Britain. Some companies, no doubt, would rather forfeit the business than enter into a voluntary arrangement of this kind. But why would companies like Ipsos or YouGov, which have signed up to this sort of arrangement in Britain, decline to comply with such a request here?

Conflicts of interest. Ownership of polling companies, and of the companies that pay for their polls, routinely involves conflicts of interest that go beyond having a mission, like UComms, or a political position, like the consortium once built by the Herald & Weekly Times. Companies that conduct polls and companies that publish them employ labour — or, in the case of pollsters, as Gary Morgan is wont to insist, hire contractors. As a result, they stand to be affected by wage rates, payroll taxes, industrial disputes, leave entitlements, and so on. Does the polling they commission or conduct, however unwittingly, reflect this?

In the 1940s, Arthur Kornhauser, a researcher at Columbia University, set out to explore one aspect of American polling — was it “fair to organised labor?” After looking at the choice of topics on which they polled and the wording of their questions, he concluded that across the period he examined — the war years, 1940 to 1945 — pollsters had shown “a consistent anti-labor bias.”

There were no technical impediments to overcoming this bias, Kornhauser argued; “necessary safeguards” could be put in place to ensure that the job was done “objectively.” There were, however, “more formidable hurdles.” Polling organisations were “sizable business organisations” in their own right, he noted. In addition, they had business clients to satisfy — newspaper and magazine publishers, among them. “How far these influences will persistently stand in the way of balanced inquiry and the reporting of opinions about labor must be left for the future to answer.” But he wasn’t optimistic. One solution was for organised labour to do its own public opinion research; seventy years later, the mission UComms set itself might be seen as part of this. Another solution, “urgently” needed, was “research centres devoted to thoroughgoing, continuing attitude studies in the labor relations field.”

Opinions sympathetic to organised labour may not be the only views that a newspaper might be less than keen to publish. Companies responsible for commissioning polls sometimes have other interests to protect. For a newspaper to suppress the results of a question asked in a poll, after its executives have been involved in deciding whether to ask it, is unusual. But it has happened. In October 1958, after Roy Morgan had written up the results of an APOP question on newspaper readership, the Herald suddenly took fright and refused to publish it. The instructions to Morgan were unambiguous: “completely kill, destroy and otherwise wipe.”

Polling organisations may also have interests that may threaten their integrity — or appear to do so. During the debate about Indigenous land rights in the early 1990s, Gary Morgan agreed that the following words should accompany the Morgan Poll published in Time magazine: “Statement of Interest: The executive chairman of the Roy Morgan Research Centre, Gary Morgan, is also chairman of the WA mining company Haoma North West NL.” Until this statement appeared in small print, on 14 February 1994, Time had been publishing Morgan’s polls on land rights for over a year without any acknowledgement that the company had a potential conflict of interest.

Morgan’s interest in goldmining was hardly news; until February 2018, Haoma was a publicly listed company, and Morgan has never made a secret of his mining interests. But since Time appears to have had no idea that Morgan was invested in mining — like UComms’s clients, presumably, it didn’t “routinely do ASIC searches of all companies with which we do business” — few of its readers, in the absence of his statement, would have had any idea either.

Identity matters. If material interests matter, at least potentially, so might identity; typically, of course, the two are connected. Since the emergence of polling, no Indigenous Australian, so far as I know, has been in charge of a poll or worked as part of a media team commissioning a poll in the mainstream media. Since APOP asked not a single question on Indigenous (or “Aboriginal”) issues until 1947 and no further questions until 1954, and after thirty years of polling and the asking of over 3600 questions had included only twenty-two questions on Indigenous issues, it is difficult not to conclude that some Indigenous involvement in the process of determining what questions to ask might have made a difference. And not just under Morgan’s stewardship; from 1973, when APOP turned to McNair, it asked just six questions out of nearly 2000 on Indigenous issues. What was true of APOP was true of the polls more generally. Over the same years, the Morgan Gallup Poll asked at least 600 questions in total, no more than two on Indigenous issues. ANOP, polling for the Australian from 1971 to 1974, asked just five out of nearly 600; Saulwick, from 1970 to 1979, just six out of nearly 1000.

With Indigenous involvement, not just the number of questions but also the nature of the issues — or the terms in which they were asked — might have been different. A question, for example, about whether “Aborigines should have the right to vote,” included for the first time by APOP in 1954, might have been included earlier; it might have been repeated sometime before the Commonwealth extended voting rights to Indigenous people in 1962; and the question of whether “Aborigines… should or should not be given the right to vote at federal elections” might not have been asked in November 1963, since their right to vote had already been “given” more than a year earlier.

Overwhelmingly, polling organisations in Australia — like the media companies to which they have usually had to answer — have been run by men. In recent years, this has changed, but not dramatically. At Ipsos, Jessica Elgood is in charge of what used to be called the Fairfax–Ipsos or Ipsos–Fairfax poll; at Morgan, Michele Levine, chief executive since 1992, once managed the Morgan Gallup Poll; and at ANOP, Margaret Gibbs built a formidable reputation, though as a qualitative researcher rather than as a pollster.

Having few, if any, women involved in constructing the polls can make a difference. For a few years during the war, two organisations sampled opinion for the press. One, of course, was APOP. The other was Ashby Research Service, run by Sylvia Ashby, the first woman to own a market research firm — not only in Australia but very likely the British Empire. Ashby sampled opinion in New South Wales for Packer’s two Sydney newspapers, the Daily Telegraph and the Sunday Telegraph. Polling in early 1942, she asked: “Should the Government form a People’s Army to fight in co-operation with the AIF and Militia if the Japanese invade Australia?” Respondents thought the government should. The men Ashby interviewed said that if a “people’s army” was formed, they wanted to join it; so, once she decided to ask them, did the women. Later that year, APOP asked its first question about a “merger” of the Australian Imperial Force and the Australian Military Forces. But it didn’t ask about the possibility of “a people’s army.” Even if it had, what are the odds that APOP would have asked whether women wanted to join?

Like the people they survey, pollsters — and those who pay them to ask some questions, not others, and to ask them in certain ways — range in their social attitudes from liberal to conservative, and in their political views from left to right. Whether these predispositions are conscious or unconscious is a separate matter. Among pollsters, diversity of outlook is much greater than diversity of ethnicity or gender. A similarly diverse media may well hire pollsters that make a good fit.

Polling in the 1970s, on issues the women’s movement was raising — the pill, abortion, prostitution, rape, divorce, child care, women in the workforce, how women should be addressed, and so on — and that other movements were raising — homosexual relations, the age of consent — provides one window into these predispositions at work. McNair, especially, but also Roy Morgan Research, commissioned by the Herald & Weekly Times (McNair), and by the Bulletin and the Women’s Weekly (Morgan), were inclined to ask about a more limited range of issues or to frame their questions in a more conservative way than Irving Saulwick & Associates or ANOP, commissioned by the SMH and the Age (Saulwick) and by the Australian (ANOP). The pattern wasn’t wholly consistent; many of the questions asked by each of the pollsters were relatively neutral. And a number of topics — rape crisis centres, and the gender pay gap, for example — were ignored by all the men. But there was a pattern nonetheless.

AN OBLIGATION TO DIVULGE?

Knowing who owns a polling organisation can raise doubts about the bona fides of the polls it produces. In the case of UComms, Nine raised concerns about an independent operator, but when polling first began in Australia, much wider concerns were expressed about the in-house poll that Keith Murdoch had organised. While the UComms connection lasted no time at all, and its most controversial polling was conducted for the SMH only in New South Wales, APOP’s connection with the Herald & Weekly Times lasted for forty-six years — and for more than half of this time it was the only organisation conducting polls for the press nationwide.

Any list of the things that require fuller disclosure by the polls and by those who commission polls — if not to respondents then to readers — should not stop at naming who owns what or identifying who controls what they do. Pollsters and their paymasters are in the business of gathering information, publishing it, and using it to shape public deliberation and political debate. As a consequence, they should be under some obligation to reveal: anything that might pose, or appear to pose, a conflict of interest; the questions they ask and how they gather their data; and what they do to the data before they publish the results. •

Polls and the pendulum https://insidestory.org.au/polls-and-the-pendulum/ Fri, 17 Jun 2016 06:42:00 +0000 http://staging.insidestory.org.au/polls-and-the-pendulum/

It’s wise to take care in interpreting the two-party-preferred poll figures and the 2016 electoral pendulum, writes Murray Goot

Much analysis of the national opinion polls, which have Labor and the Coalition running “neck and neck,” is misconceived. It takes the figures at face value and then assumes they map on to a corresponding proportion of seats. If the polls move from a two-party-preferred 50–50 to 49–51 or 51–49, the conversation about what they mean will change dramatically, despite the fact that no statistically significant shift in support has occurred, and despite the fact that the pollsters are reporting not just something they have measured (respondents’ first preferences) but something they have had to guesstimate: the distribution of the “don’t knows” and the direction of the minor parties’ and independents’ second preferences.

Elections are not determined by the parties’ vote shares; they are determined by the number of seats they win. A party can attract more than half the two-party-preferred vote but not win the majority of seats. This is what happened to Labor under Bert Evatt in 1954, Arthur Calwell in 1961, and Gough Whitlam in 1969; it happened to the Liberals’ Andrew Peacock in 1990; and it happened to Labor under Kim Beazley in 1998 and Julia Gillard in 2010. In each case, they won a two-party-preferred majority of votes but failed to win a majority of seats.

It’s wrong to assume that if the two sides are level-pegging in their shares of the two-party vote then they must be level-pegging in their shares of the seats. Differences in the size of enrolments across the seats and in the shape of electoral boundaries, and the continuing phenomenon of parties piling up votes in their own safe seats, ensure that vote shares don’t necessarily translate to seat shares.

A number of other assumptions are equally mistaken or misleading. One has to do with the number of seats that will determine the outcome, with many commentators imagining the figure to be as low as the nineteen that Labor needs to add to its present tally if it is to win in its own right. Another is the nature of the seats Labor needs to focus on, with some believing that if the overall swing is towards Labor then the party will win seats while not being at risk of losing any of its own. A third assumption is about the characteristics of the voters in the seats that the parties believe to be in play, with almost everyone thinking that what will be decisive in these seats are forces that affect swinging voters in these seats rather than swinging voters overall.

Some of these misunderstandings reflect a simplistic reading of the electoral pendulum, which was devised by Malcolm Mackerras for the 1972 election and has featured in campaigns ever since. The pendulum ranks the seats held by the Coalition and by Labor in order of their two-party-preferred margin, from those requiring the lowest swings to those requiring the greatest (see, for example, Antony Green’s version). Note that the pendulum doesn’t include the five seats held by other parties or by independents – Melbourne (Victoria), held by the Greens; Fairfax (Queensland), held by the Palmer United Party; Kennedy (Queensland), held by Katter’s Australian Party; and Indi (Victoria) plus Denison (Tasmania), held by the two independents, Cathy McGowan and Andrew Wilkie.

The Australian Electoral Commission, or AEC, defines marginal seats, somewhat arbitrarily, as those requiring a swing of less than six percentage points to change hands; others, equally arbitrarily, fix on a bigger or smaller figure – typically five percentage points or less. Importantly, the fact that one seat requires a smaller two-party-preferred swing than another to change hands – whether or not it counts as a “marginal” – is no guarantee that it will fall first: a seat requiring the bigger swing can change sides even when a seat requiring a smaller swing does not.

If the two-party-preferred vote on 2 July is 50–50, the pendulum would lead us to expect not a line-ball result but a Coalition win. This is because it shows that Labor needs to win 50.5 per cent of the two-party-preferred vote to gain the required number of seats (leaving aside, as any pendulum should, the seats not held by either side). At the last election, the Coalition won 53.5 per cent of the two-party-preferred vote and Labor 46.5 per cent. On a swing of 3.5 percentage points to Labor, which would leave the two sides level-pegging in two-party preferred terms, the pendulum predicts that Labor would fall short by five seats, assuming the seats that notionally switched to Labor after the last redistribution start in the Labor column; or by six seats, assuming the Coalition picks up Fairfax from the Palmer United Party, as everyone expects.
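To make the arithmetic concrete, here is a minimal sketch of how a pendulum converts a uniform swing into a seat count. The seats and margins are invented placeholders, not the actual 2016 pendulum:

```python
# A pendulum is just a ranked list of seats and their two-party margins.
# Under a uniform swing, a seat changes hands exactly when its margin
# is smaller than the swing. Margins below are illustrative only.

coalition_seats = {
    "Seat A": 0.4, "Seat B": 1.1, "Seat C": 2.9,   # margins, percentage points
    "Seat D": 3.4, "Seat E": 4.2, "Seat F": 6.0,
}

def seats_flipped(margins, swing):
    """Count seats whose margin is smaller than a uniform swing."""
    return sum(1 for m in margins.values() if m < swing)

for swing in (2.0, 3.5, 4.3):
    flips = seats_flipped(coalition_seats, swing)
    print(f"uniform swing of {swing} points -> {flips} seats change hands")
```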

If the government and opposition are tied 50–50 in the polls but the government (according to both parties) is ahead in the seat count, there is no need to conclude, as much of the analysis has, that Labor must be experiencing a bigger swing in its own safe seats and/or the safe seats held by the Coalition, while suffering a smaller swing in the seats it doesn’t hold and needs to win. On the contrary, a 50–50 split with Labor falling short by five or six seats is exactly what would happen under a uniform swing. Of course, Labor might be piling up “wasted” votes in safe seats while still benefiting from swings where it matters most, in the additional seats it’s after; in fact, it would be truly remarkable were that not the case.

What matters most is the total number of seats each side needs to win to form government, whether on its own or with the support of others. If the seats Labor picks up and those it doesn’t pick up turn out to be exactly the seats you would expect it to have won or not won, assuming there’s a swing to Labor, then the validity of the pendulum won’t be affected; statements, endlessly repeated, about the pendulum depending on a “perfectly uniform swing,” as Antony Green puts it in his blog, are simply untrue. What the pendulum requires is that the number of seats that don’t swing, even though their margin is less than the overall swing, and the number of seats that do swing, even though their margin is greater than the overall swing, are equal so that the anomalies on each side cancel out.

If we want to use the pendulum to work out which side ought to form government, what matters is the net number of seats that is required. Labor needs seventy-six seats to govern in its own right – the fifty-five it currently holds (or the fifty-seven that are notionally Labor after the redistribution, which the published pendulums acknowledge) plus twenty-one (or nineteen). If Fairfax falls to the Coalition, Labor would require an overall swing of 4.3 percentage points on the pendulum − or a two-party-preferred result of 50.8–49.2 – a substantially bigger challenge than 50–50. If the Coalition wins Fairfax but loses a seat to Nick Xenophon in South Australia, Labor would need 50.5 per cent of the two-party-preferred vote. It would also need 50.5 per cent if the Coalition wins Fairfax, doesn’t lose a seat to Xenophon, but loses New England to Tony Windsor – or loses a seat to Xenophon but doesn’t lose New England. Of course, if the Liberals lose two seats to Xenophon, Labor’s target on the pendulum might drop a touch to 50.4 per cent. And so on. As the number of such possibilities increases – possibilities that involve the minor parties or independents − so the utility of the pendulum declines.


How useful has the pendulum proved at past elections? Over the last eighteen federal elections, it has missed the mark by no more than two seats on six occasions, and by three or four seats on another ten – including the 1969 election, against which the British psephologist David Butler tested the pendulum. Only twice has the pendulum been less reliable than this in predicting the net shift in seats between Labor and the Coalition: in 1987, when Labor won six more seats than predicted but would have won office anyway; and in 1998, when the Coalition won twelve more seats than the pendulum would have led one to expect, and the country elected a Howard government with a majority of twelve instead of a Beazley government with a majority of ten.

The closeness of an election is no guide to how good a bet the pendulum is likely to prove. Since 1969 the net difference between the number of seats the pendulum said would change sides given the size of the two-party-preferred swing and the number that did change sides has been not much greater in the eight elections where the swings were relatively large (from 3.6 to 7.4 percentage points) than in the ten elections where the swings were relatively small (from 0.9 to 2.6 percentage points).

Can the pendulum’s various errors – ranging from the 1998 disaster to those elections when it erred by just a seat or two − be explained by the superior campaigning skills of either Labor or the Coalition? It appears not. If Labor’s skill in targeting vulnerable seats were responsible for the party’s winning more seats than it should have in 1987, according to the pendulum, why was that skill not evident in 1984 when the Coalition beat the pendulum, and why was it not sustained in 1990 when the Coalition, again, did better than the pendulum suggested it would, given the size of its two-party-preferred vote? If the Coalition’s campaign superiority were responsible for the party’s winning so many more seats than it should have in 1998, why did it not win more than it should have in 1996, when it underperformed the pendulum by three seats, or in 2001, when it again underperformed the pendulum? It is to factors other than the skills of the campaigners that we should look for explanations – the (un)popularity of state governments, the impact of members’ retiring, the advantage enjoyed by candidates who are first-term incumbents, and so on, including luck.

The voters that parties target in the seats they think they can win but might lose – regardless of whether they hold these seats at the time – are sometimes described as “softly committed” voters, and more often as “swingers,” labels that cover voters of diverse kinds. While exact definitions differ – some of the parties’ market researchers, certainly in the past, have refused to disclose their definition on the grounds that it’s their intellectual property – what these voters have in common is their willingness, if not propensity, to change or to consider changing from their current preference or from their past vote.

How different from swingers in every other seat are the swingers in the seats that count? While the efforts of the parties are focused on a relatively small number of seats that appear vulnerable, it doesn’t automatically follow that the swingers in these seats are very different from the swingers in other seats. According to the Australian’s Phillip Hudson, Bill Shorten needs “to convince just 30,000 voters” in “exactly the right twenty-one marginal seats… to switch from the Coalition to Labor.” Statements of this kind, as much a feature of elections now as pendulums are, are quite misleading. Apart from overlooking Labor’s need to defend its own seats and not just win others, they rest on three shaky foundations: the concept of the “right” seats (a sufficient number of any of the seats Labor doesn’t hold will do); the number (twenty-one includes two seats that boundary changes have made notionally Labor); and, above all, the assumption that 30,000 voters in the “right” seats could shift without several times their number also shifting in seats Labor already holds or doesn’t need to win – something virtually inconceivable. If this weren’t the case, Australia would be a very different place. Election campaigns would be quite different as well.

It is true that the election will be won or lost, by and large, in the marginals, however defined; this is a characteristic of the single-member electoral system we have in the lower house. But the outcome will not necessarily be determined by factors that are relevant only in those seats or even by factors that are more relevant in those than in seats of other kinds. If that were not the case, the pendulum, contingent as it is on the national swing, would almost never work. Pork-barrelling, intense campaigning and a range of other factors may be peculiar to marginal seats and other seats that the parties think of as being in play. (Among the latter are a number of Liberal seats in Western Australia, which would normally be considered fairly safe; Mayo and Sturt in South Australia, where Xenophon poses a threat to Liberals in seats that in other circumstances would be safe; and some Labor seats.) But the parties’ images, policy promises, leaders’ appeal, perceived competence and records shift votes in every seat, as do state and local factors. The electoral significance of marginal seat politics, including claims of other voters being “disenfranchised,” is easily exaggerated.

The more accurate the pendulum in predicting the net shift in seats nationally, the less important the special characteristics of the marginal seats to any explanation of the result. Historically, as we have seen, the pendulum has predicted overall outcomes fairly well; one disaster in eighteen isn’t a bad record. Former Labor senator John Black’s claim that pendulums “are useful only as retrospective devices to rank votes” is mistaken. But to maximise its utility the pendulum requires a reliable estimate of the overall swing. And for that, election watchers follow the national polls.


How good are the polls? Not as good as one might imagine – certainly not as good as the close analysis and breathless interpretations of their every report on the state of play might suggest. Over the seven elections held between 1993 and 2010 – when the polls with the longest continuing records were conducted by interviewers using telephones connected to landlines – the mean difference between the pollsters’ final estimates of the two-party-preferred vote and the election-day figure recorded by the AEC was 1.4 percentage points for Newspoll, 1.8 percentage points for Roy Morgan Research and 2.0 percentage points for Nielsen. In other words, if Morgan’s estimate of the two-party-preferred vote had been 50–50 at some election, the true two-party-preferred, on average, would likely have been around 49.1–50.9, either in Labor’s favour or in the Coalition’s.
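The statistic here is nothing more exotic than the average absolute gap between each pollster’s final estimate and the official result. A minimal sketch, with made-up figures rather than the actual 1993–2010 numbers:

```python
# Mean absolute error of final-poll two-party-preferred estimates.
# The figures below are hypothetical, for illustration only.
polls   = [50.0, 52.5, 48.0, 53.0, 47.5, 52.0, 50.5]  # final poll, ALP 2PP
results = [51.4, 51.0, 46.4, 53.4, 49.0, 52.7, 50.1]  # election-day result

mae = sum(abs(p - r) for p, r in zip(polls, results)) / len(polls)
print(f"mean absolute error: {mae:.1f} points")
```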

This is not to criticise how particular polls are conducted; rather, it is to alert us to the limits not just of one poll but of all polls. Since the technologies most pollsters (including Morgan and Newspoll) now use – the internet, mobile phones, automated calls to landlines – have been around for only a short time, it’s impossible to say whether the polls are likely to be any more accurate now than they were under the older technologies. Given the nature of surveys, however, a markedly better record seems unlikely.

Polls that report their findings in terms of a two-party-preferred vote – as all polls do – allocate respondents’ second preferences on the basis of assumptions that may or may not be justified, whether those preferences are nominated by respondents themselves or imputed by the pollster from the preference flows at the last election. Pollster John Utting notes that the method by which individual pollsters allocate preferences can change the two-party-preferred result by as much as 2.5 percentage points. With up to a quarter of respondents saying they intend voting for minor parties or independents – some of which will not even be on the ballot paper – there are a lot of second preferences to guesstimate and a lot of scope for non-sampling error.
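As a rough illustration of the guesswork involved, here is how a published two-party figure might be imputed from first preferences. The primary votes and flow rates are assumptions for the sake of the example, not any pollster’s actual method:

```python
# Imputing a two-party-preferred split from first preferences.
# Primary votes and preference-flow rates below are assumptions,
# not any pollster's published methodology.

primaries = {"ALP": 35.0, "Coalition": 41.0, "Greens": 12.0, "Others": 12.0}

# Share of each minor-party vote assumed to flow to the ALP
# (e.g. based on the preference flow at the previous election).
flow_to_alp = {"Greens": 0.80, "Others": 0.50}

alp_2pp = primaries["ALP"] + sum(primaries[p] * f for p, f in flow_to_alp.items())
print(f"ALP 2PP: {alp_2pp:.1f}  Coalition 2PP: {100 - alp_2pp:.1f}")

# Shifting the assumed 'Others' flow from 50:50 to 40:60 moves the
# headline figure by more than a point -- the guesstimate Utting describes.
flow_to_alp["Others"] = 0.40
alp_2pp = primaries["ALP"] + sum(primaries[p] * f for p, f in flow_to_alp.items())
print(f"ALP 2PP: {alp_2pp:.1f}  Coalition 2PP: {100 - alp_2pp:.1f}")
```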

Preferences apart, the sampling variance associated with a poll means that no one can be confident about the polls’ published figures beyond their “margin of error” – the uncertainty inherent in sample surveys. The differences, from 1993 to 2010, between the two-party-preferred vote predicted by the final pre-election polls and the final two-party-preferred figures, already noted, speak to this if to nothing else. When pollsters report a 50–50 split on the basis of 2000 interviews – some sample sizes are considerably smaller – what they are saying with great (95 per cent) confidence is no more than that Labor’s two-party vote is somewhere in the range of 48 to 52 per cent, and, correspondingly, that the Coalition’s is somewhere in the range of 52 to 48 per cent. And since a published 50–50 may itself be rounded from anything between 49.6 and 50.4, the underlying range stretches from roughly 47.6 to 52.4. While pollsters sometimes note the “error” associated with the size of the sample, it is easy to understand why they – and, more particularly, the newspapers, radio stations and television channels that carry their reports – are reluctant to spell out what this really means. A statement that support for Labor (or the Coalition) could be anywhere between 48 per cent, a clear loss, and 52 per cent, a comfortable win, is not what the media that pay for this intelligence usually want to hear.
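That two-point band is simply the textbook margin of error for a simple random sample; a minimal sketch of the arithmetic (real polls, with weighting and design effects, usually do a little worse):

```python
import math

# 95 per cent margin of error for a simple random sample,
# evaluated at the worst case, p = 0.5.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n) * 100  # in percentage points

for n in (500, 1000, 2000):
    print(f"n = {n}: +/- {margin_of_error(n):.1f} points")
# n = 2000 gives roughly +/- 2.2 points: a reported 50-50 is
# consistent with anything from about 48-52 to 52-48.
```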

And while much is made of the parties’ being “neck and neck,” a conclusion one can reach with greater confidence by pooling the polls, as some analysts have done, the chances that any polling organisation will come up with the same two-party figure – 50–50, for example – on two successive occasions, as a number of pollsters have done, are small. The chances of producing the same 50–50 two-party figure three times in a row, as the Essential poll did a couple of months ago, are even smaller. This is true, it should be stressed, even if the electorate’s support for the parties hasn’t changed. What is widely accepted in the media as evidence of no net shift might be taken, at best, as evidence of the pollsters enjoying extraordinary luck.
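The intuition can be checked with a back-of-envelope calculation. Assuming independent simple random samples of 2000 and true support of exactly 50–50 – both generous assumptions – the chance that a single poll’s rounded headline lands exactly on 50 is only about one in three, so identical figures three times running are genuinely improbable:

```python
from scipy.stats import norm

n = 2000
se = (0.25 / n) ** 0.5 * 100   # standard error in points, at p = 0.5

# Probability that one poll's rounded figure lands exactly on 50
# (i.e. the sample estimate falls between 49.5 and 50.5), assuming
# true support really is 50-50 and samples are independent.
p_once = norm.cdf(0.5 / se) - norm.cdf(-0.5 / se)
print(f"once: {p_once:.2f}, twice: {p_once**2:.2f}, three times: {p_once**3:.2f}")
```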

Rather than relying exclusively on the national polls, it might be useful to listen out for the whispers from the party bunkers. If the word spreads that Labor’s gains in the marginal seats, where the parties poll relentlessly, are not sufficient for Labor to win, it is unlikely that Labor has lifted to a two-party-preferred figure of 50 per cent nationally; on the pendulum, falling short by ten seats translates to a two-party preferred of 49.6 per cent for Labor and 50.4 for the Coalition. Some of the national polls, keen to boost their claims for accuracy, may be adjusted to reflect this, too. If Labor’s two-party-preferred figure doesn’t exceed 50 per cent but it goes on to win, it will be at least as big a blow to the credibility of the pendulum as the result in 1987. If Labor’s two-party vote gets to 51 per cent but the party falls well short of winning, it will be the biggest setback the pendulum has suffered since 1998.

Neither a Labor two-party result of 51 per cent nor a Labor victory seems likely. The Coalition may well finish up winning more seats than the pendulum predicts. And a number of seats may change hands − to or from the minor parties and independents – for which the pendulum doesn’t allow. Whatever the outcome, the polls need to be read with their sampling and non-sampling errors in mind, the nature of the pendulum properly understood, and the difference between the parties’ share of the vote and the parties’ share of the seats carefully noted. •

Israel and the Palestinians: public opinion and public policy • https://insidestory.org.au/israel-and-the-palestinians-public-opinion-and-public-policy/ • Wed, 22 Feb 2012

The evidence on Australian attitudes is much less clear than protagonists argue, writes Murray Goot, and the implications for public policy are far from straightforward


THAT Australians are less well disposed to Israel than they once were is almost certainly true. Israel’s friends are no less likely to deny the truth of this than are Israel’s enemies. Identifying the turning points in public opinion is much more difficult. So, too, given the dearth of decent data, is the task of estimating the breadth or depth of the change. Even if that could be done, there remains the question of how governments should respond.

Recently, Peter Manning, the respected journalist and author of Us and Them: A Journalist’s Investigation of Media, Muslims and the Middle East, argued that the “overwhelming trend” in data compiled from various opinion polls “shows a sharp swing since the 1980s against Israel's image and actions among ordinary Australians.” This account of the polls is misleading. Manning went on to argue that since public opinion had shifted, government policy should also change. But is a swing in public opinion sufficient reason for public policy to swing too?

The trend Manning claims to have identified is in respondents’ attitudes towards the conflict between Israel and the Palestinians. In a 1981 McNair poll, he notes, 28 per cent said their sympathies in the Middle East were “mainly with the Jewish people” while 4 per cent said they were “mainly with the Arabs.” The poll was taken in the first two weeks of July, before Israel bombed the PLO’s headquarters in Beirut causing a large number of civilian casualties. When the question was repeated in August 2006, by McNair Ingenuity, 13 per cent said their sympathies were “mainly with the Jewish people,” while 10 per cent said they were “mainly with the Arabic people.” This poll was taken shortly after the 2006 Israel–Lebanon conflict had begun. These two polls are the only clear evidence of a shift: from a twenty-four-point margin in favour of “the Jewish people” to a margin of just three.

But it is hardly evidence of a “trend.” We need more than two data points to show when opinion shifted. Over a quarter of a century there may have been a number of movements, the earliest perhaps quite soon after the 1981 poll. And that poll may not have been the point at which opinion was most polarised.

Since 2006, the figures have continued to show roughly equal numbers of sympathisers on each side. In 2006, a UMR survey found 24 per cent feeling “more sympathy” for the “Israelis” (compared with McNair’s 13 per cent for “the Jewish people”) and 23 per cent feeling “more sympathy” for the “Palestinians” (compared with McNair’s 10 per cent for “the Arabic people”). Similar results to UMR’s were reported in 2009 and 2011 by Morgan.

The differences in the figures produced, in 2006, by the McNair Ingenuity and UMR polls are puzzling. Perhaps the answer lies in the different ways the two sides are described. The “Jewish people” (McNair Ingenuity) may not have been as well regarded as “the Israelis” (UMR), the “Arabs” (McNair Ingenuity) not as well thought of as the “Palestinians” (UMR). Ethnicities might be less appealing than nationalities. In 1981, had the question been about “the Israelis” and “the Palestinians” rather than about the “Jewish people” and the “Arabs,” McNair might have produced a closer result.

Feelings of “sympathy” are very general and rather vague. Especially where they are not strongly held, sympathies may not be a very reliable guide to how respondents judge specific circumstances. When the question of “Israel’s recent military action in the Gaza strip” was raised in the 2009 Morgan poll, fewer (28 per cent) said it was “justified” than said it was “not justified” (42 per cent). But in other polls, conducted in 2006, more respondents sympathised with Israel (33 per cent in July, 27 per cent in August) than with Hezbollah (15 per cent and 12 per cent respectively) in the fighting then taking place; the majority response was “neither,” “both” or “can’t say.”

Polls that touch on Israel and the Palestinians – and few have – often fail to produce majorities in favour of one or the other. In 1946, a Morgan poll found opinion evenly divided (44:44) not over “whether Palestine should be partitioned,” as Manning claims, but on “limiting the number of Jews who enter Palestine.” In 1967, another Morgan poll didn’t find a “large majority” supporting Israel, as Manning reports, but a plurality (44 per cent) in favour of Israel keeping “the old city of Jerusalem” rather than giving it to the United Nations to be “internationalised” (36 per cent) or giving it “back to the Arabs” (6 per cent). And in 1974, after the 1973 war, the polls didn’t report “a large majority” that was “pro-Israel,” as Manning believes; they reported pluralities whose sympathies were with “the Israelis” rather than with “the Arabs” – 44:5 in a Morgan poll, 37:5 in a Saulwick poll.


WHAT should we make of polls where opinion is not so much evenly divided as widely spread – as it has been on the general question of “sympathy” in recent times – between pro/anti, neither/both, and unsure? In the recent items to which Manning refers, the proportion typically recorded as “can’t say,” “don’t know” or “unsure” is around 20 per cent or more. Figures this high suggest relatively low levels of knowledge about the issues or engagement with them. In an online poll conducted by Research Now in May 2010, only a quarter of the respondents rated their “own understanding of the Israel–Palestine conflict” as “very good” or “good.”

Even if the results are meaningful, where public opinion treads governments may not feel compelled to follow. Manning’s depiction of a plurality or minority as “a large majority” in 1967 and 1974, and his reference to an “overwhelming trend” since the 1980s, as if earlier majorities had now become minorities, are important. His point, after all, is that rather than “snubbing” public opinion the government should change its policy. And the democratic argument for policy change in response to public opinion is much stronger when the opinion is that of the majority rather than that of a minority.

Manning frames the consequences for government of not following any shift in public opinion in terms of votes. In particular, he warns of a loss of votes to the Greens – a relatively new threat to the government, since previously “progressive voters had nowhere else to go.”

Although the balance of opinion is now less lopsided, a loss of votes doesn’t necessarily follow. None of the polls show – in fact, top-line results cannot show – whether the issue is a vote-changer. In any event, governments that judge an issue to be a vote-changer need to consider, if they are prudent, whether changing their policies might lose votes not just gain them. A shift by the government away from its present stance might please supporters of the Palestinians’ claims but anger those committed to Israel’s.

On an issue like this, where the votes of very particular minorities may be at stake, governments are less likely to consider national poll data than the distribution of sympathisers in particular seats. In marginal seats, which are going to be won either by the government or by the Coalition, “progressive” voters who shift to the Greens and want to cast a valid vote will have nowhere else to go but back to Labor; a second preference for the Coalition won’t make much sense to these voters if the opposition hasn’t shifted on this issue either. Only in the two or three seats where the winner is likely to be either the government or the Greens does the issue stand even the remotest chance of deciding the result.

If Labor were to follow Manning’s advice the government might strengthen its hold on a marginal seat like Reid, in Sydney’s western suburbs, with the fourth highest proportion of Islamic residents. But it would weaken its hold on Melbourne Ports, which has the highest proportion of Jewish voters, and where the Liberals might pre-select a Jewish candidate strongly committed to Israel, as they have done in the past.

Changing its stance could help the government retain the marginal seat of Grayndler in Sydney’s inner west, where the threat is more likely to come from the Greens than from the Liberals. But given what happened to the Greens’ candidate for Marrickville, which takes in part of Grayndler, at the 2011 New South Wales state election, we shouldn’t count on it: the candidate embraced the BDS (Boycott, Divestment and Sanctions) movement, suffered a precipitous drop in support (according to a Galaxy poll) after doing so, and failed to take a seat from Labor that the Greens should have won.

Whether or not governments judge their position to be a net electoral liability, they need to weigh up things other than votes. These, in relation to the Middle East, would have to include Australia’s alliance with the United States, on the one side, and perhaps its quest for a seat on the Security Council, on the other. The merits of the issue might also warrant attention.

If a government feels the public is hostile to its position, and sees the issue as a vote-changer, it can fall into line, ignore the polls (especially if the opposition isn’t biting) or adopt some other strategy – arguing against the prevailing view, trying to reframe the issue, or displacing it on the list of public concerns by talking up some other issue.

On the question of Israeli settlements, the government might well want to criticise Israel if not publicly then in private. If it did it would be in line with the Research Now poll, noted by Manning, which reports that most respondents (77 per cent) agreed that “Israel should withdraw from the settlements it has constructed on Palestinian land.” But on the question of whether Palestine should be admitted to the United Nations as a full member, the government might disagree – notwithstanding the Morgan poll of September 2011 in which the majority (62 per cent) agreed that the United Nations, in the teeth of opposition from “Israel and the USA,” should “recognise Palestine as one of its member states.”

Faced by a series of polls that appeared to show the Palestinians – not the Israelis – losing the PR battle, it is unlikely that commentators sympathetic to the Palestinian position would warn the government to heed such polls or face an electoral backlash. •

Howard’s victories: which voters switched, which issues mattered, and why • https://insidestory.org.au/howards-victories-which-voters-switched-which-issues-mattered-and-why/ • Fri, 23 Jul 2010

The reasons for the Howard government’s electoral success are widely misunderstood


DESPITE winning at four successive polls, the Howard government’s electoral performance has attracted less attention from political scientists, and fewer satisfactory explanations, than we might expect. The consequence – as we can see in the current election campaign – is that speculative accounts of why the Coalition won those elections have continued to dominate discussion of Australian electoral politics. “Howard’s battlers” are still at war, “big pictures” are out of fashion again, and both main parties are chasing “aspirational voters.”

In 2007 we set out to fill what we saw as a gap in the political science literature by looking in detail at the issues and forces that influenced the results of the elections held between 1993 (Labor’s last victory before John Howard became prime minister) and 2004 (the Howard government’s last victory). Our source for data was the Australian Election Study, or AES, a detailed questionnaire completed by at least 1700 voters after each of these elections. Our detailed findings were published in the Australian Journal of Political Science.

The AES is an indispensable source for the analysis of Australian electoral behaviour, but it is also a frustrating one. Accounts of the Coalition’s success, in which Howard’s appeal to the “battlers” looms large or in which his attack on “political correctness” is central, can be interrogated only in part through the AES surveys, and then only with difficulty. To assess what might be called the foundation myth of the Howard years, that Labor was thrown out of office because Paul Keating’s “big picture” ignored “the battlers” and their concerns, we need to consider policy issues – a republic, reconciliation with Indigenous Australians and stronger relations with Asia – that the AES ignored almost entirely. And the more recent wisdom that the government’s hold on power hinged on Howard’s appeal to “aspirational” voters also raises questions that the AES is ill-designed to answer. (More details about the AES are given at the end of this article.)

Within these constraints, our article used the AES to address three questions either not previously asked or not persuasively answered. First, to what extent did the Coalition’s victories depend if not on Howard’s appeal to “the battlers” then on his appeal to the “blue-collar” vote? Second, what demographic variables other than occupation helped drive voting behaviour after Howard came to office and what were the most marked changes in the importance of these variables from the Hawke and Keating years? Third, to what extent did the political issues that the AES touches on – relations with Asia, taxation, interest rates, privatisation, health, education, terrorism and the war in Iraq – help explain changes in major party support, and to what extent did economic circumstances account for the Coalition’s success?

We found that although “blue-collar” respondents did shift to the Coalition in 1996, they were hardly a loyal band; at least in terms of their first preferences, they deserted in large numbers in 1998 and didn’t return to the Coalition in anything like the same numbers until 2004. Other demographic factors made a difference as well. Comparing the Labor years for which there are AES data (1987–93) with the Howard years, we see a shift towards the Coalition among respondents aged over 60 years, among respondents aged 30–39 years (in comparison with those under 30 years of age), and among respondents who were Catholic (compared with respondents who were non-believers). Over the same period there was a shift towards Labor among respondents from non-English speaking backgrounds (compared with the Australian-born).

We also identified the issues that mattered: defence, terrorism, taxation (as distinct from the goods and services tax) and interest rates – all of which at one time or another worked for the Coalition – and health, education, the environment, and privatisation – all of which at one time or another worked for Labor. In addition, the government of the day benefited from the support of respondents who sensed either that the economy had improved over the previous twelve months or that it would improve over the next twelve months; whereas the opposition of the day benefited from respondents who regarded unemployment as an extremely important issue, thought the economy likely to go backwards over the next twelve months or thought their household’s finances had gone backwards over the previous twelve months.

At the same time, our analysis casts doubt on a number of the factors often considered important to Howard’s success. In 1996, it was not Keating’s arrogance, his emphasis on Asia or the importance of immigration that helped the Coalition across the line. Nor were those respondents who voted for the Coalition especially worried about interest rates. In 2001, it was immigration rather than refugees that mattered, and terrorism rather than defence. And in 2004, respondents who attached great weight to education were not especially likely to have voted for the Coalition, despite Labor’s caning over its promise to redistribute private school funds. Nor was Labor’s promise to introduce Medicare Gold as damaging to its electoral appeal on health as is commonly supposed. And the Iraq war, far from being neutralised as an issue by Latham’s pledge to bring the troops home by Christmas, appears to have cost the Coalition votes.

Howard’s battlers?

One of the most remarkable things about changes in party support between the Hawke–Keating years and the 2004 election was the change at both ends of the occupational spectrum: the Coalition lost support among managers (down from 67 per cent in 1987–93 to 61 per cent in 1996–2004) and gained it among blue-collar workers (up from 34 per cent to 39 per cent), who deserted Labor not only for the Coalition but also for minor parties. But Labor gained virtually none of the support the Coalition lost among professional–managerial respondents (its support rose among managers but declined among professionals); its white-collar support declined (from 41 per cent to 38 per cent); and its support among respondents in blue-collar jobs fell from 55 per cent to 45 per cent. Most of the Coalition’s losses showed up not as Labor gains but as gains for the minor parties – the Australian Democrats, One Nation and the Greens. Most of Labor’s losses, too, showed up as gains for other parties; but the Coalition also got a boost, especially from Labor’s blue-collar base.

Although both sides lost support in their demographic heartlands, the Coalition’s losses were much smaller than Labor’s. And although the Coalition registered important gains that more than compensated for its losses, Labor made no compensating gains – not in its area of traditional strength, blue-collar workers; not among white-collar voters, wooed so effectively in the Whitlam years; not even among the professional middle class, to which Labor is said to have pitched its policies with disregard to its blue-collar base. Among all these groups Labor’s support went backwards over the period.

The defections from both heartlands decreased the distinctiveness of each side’s electoral support. In the period 1987–1993, the gap between the level of support for the Coalition parties among managerial respondents (67 per cent) and blue-collar respondents (34 per cent) was 33 percentage points; in 1996–2004, this gap (22 points) had dropped by one-third. In 1987–1993, the gap between the level of support for Labor among managerial respondents (25 per cent) and blue-collar respondents (55 per cent) was 30 points; in 1996–2004, this gap (19 points) had narrowed by more than a third. For both Labor and the Coalition the gradient in support, from one end of the occupational scale to the other, became less steep under Howard than it had been under Keating or Hawke.

When exactly did these changes occur? To answer this question we need to look at the data election by election. In the case of blue-collar workers, we need to look at trades people in particular. And we also need to look at the difference between blue-collar workers who were self-employed and blue-collar workers who were not.

Managers: Although support for the Coalition among managers declined in the Howard years, this decline dated not from 1996 but from 1990. In 1987, 71 per cent of respondents who were managers said they had voted for either the Liberal or National Party; from 1990 through to 1996, the Coalition’s vote among managers hovered at around two-thirds (64–67 per cent); subsequently it dropped to 60 or 61 per cent. Labor, unlike the minor parties, gained little from this slide.

Professionals: Support for the Coalition among professionals did not decline – but it did fluctuate. It dipped in 1990 (41 per cent) when Labor, too, lost votes to the Democrats. The Coalition lost further ground in 1993 (39 per cent) when Labor recovered. But it gained – as did Labor – in 1996 (42 per cent). In 1998, when support for the Coalition (43 per cent) held firm, Labor’s support fell by 8 points, largely owing to the Greens (up 3 points) and One Nation (6 points). In 2001, the election that followed not only the attack on the United States but also the turning back of the Tampa, support for the Coalition among professionals declined by 4 percentage points – notwithstanding that the Coalition actually increased its share of the nationwide vote by 3.5 per cent – whereas support for the Greens jumped from 4 per cent to 11 per cent. In 2004, when support for One Nation and the Democrats both collapsed, the Coalition’s vote among professionals rose by 3 points and Labor’s by 5 points.

White-collar workers: If there was a swing to the Coalition in 1996 among white-collar respondents, it was barely noticeable when almost half (48 per cent compared to 47 per cent in 1993) said they had voted either Liberal or National. But among white-collar respondents the 1998 election and the rise of One Nation saw a big swing away from the Coalition. The only swing towards the Coalition of any size – a swing that lifted its share of the vote from 40 per cent (1998) to 49 per cent – came in 2001. This returned it to roughly where it had been in 1996.

For Labor, on the other hand, 2001 marked a new low with white-collar respondents shifting to the Democrats and the Greens; in 1998, 41 per cent of white-collar respondents said they had voted Labor, but in 2001 no more than 36 per cent said they had done so. Worse, in 2004, when the combined support for the Democrats and Greens returned to its 1998 level, support for Labor (36 per cent) remained unchanged.

Blue-collar workers: The jump in the Coalition’s support among blue-collar respondents dates precisely from 1996. In the Liberal Party’s 1996 exit poll, conducted across the “52 most marginal seats,” 47.5 per cent of blue-collar respondents voted for the Coalition; compared with a vote of 43 per cent in 1993, this represented a gain of almost 5 percentage points.

The AES data, derived from respondents in safe seats as well as those in the marginal seats, tell a story that is less dramatic in terms of the size of the Coalition’s blue-collar vote but more dramatic in terms of the size of the blue-collar swing. In 1996, 44 per cent of blue-collar respondents said they had voted for the Coalition; this compares with only 33 per cent in 1993. For the first time, more blue-collar workers said they had voted for the Coalition than said they had voted for Labor.

By 2004, the days when Labor could command the majority of the blue-collar vote, or outpoll the Coalition by nearly two votes to one, had receded into an increasingly distant past. Not since 1993, after John Hewson threatened to introduce a GST and dismantle Medicare, had Labor won an absolute majority of blue-collar respondents. In 1990, as it chased the environmental vote, its support among blue-collar respondents dropped to less than half (48 per cent). In 1996, its support fell even lower (41 per cent).

In 1998, contrary to expectations that Hanson would split the Labor vote, more blue-collar respondents switched to Labor than moved away; Labor’s share of blue-collar respondents rose to 46 per cent. It was the Coalition’s share, not Labor’s, that Hanson hit; Liberal–National Party support fell to just over one-third (35 per cent) and remained at that level in 2001, notwithstanding that One Nation’s electoral support had passed its peak. Not until 2004 did the Coalition’s vote recover (up by 6 percentage points). Conversely, Labor’s share of blue-collar respondents declined – its loss of support among blue-collar respondents apparently bigger than its loss of support in the electorate as a whole. For the first time since 1996, blue-collar respondents were divided almost evenly between the Coalition (41 per cent) and Labor (44 per cent).

What can we say about those blue-collar respondents who worked in a trade? Tradespeople did swing to the Coalition in 2001, and in 2004 were evenly split (42:42). Compared to 1996, when it turned a 36:52 deficit into a 46:40 lead, the Coalition’s grip appears to have weakened. This is largely because of the damage inflicted in 1998 by Hanson, when the Coalition’s support, and Labor’s, split 34:45. And although 1996 is an election from which Labor found it difficult to recover, its losses were not without precedent; in 1990, tradespeople were also evenly split, 40:38.

Increasingly, those who work in blue-collar jobs are self-employed. Among blue-collar respondents to the AES, the self-employed averaged 19 per cent between 1987 and 1993; by 2004 that proportion had grown to 25 per cent.

Did self-employment make a difference? In 1996 the swing to Howard appears to have been more marked among blue-collar respondents who were self-employed (a gain of 11 percentage points) than among those who were not self-employed (a gain of 7 points); nonetheless, the shift from Labor was no more marked among the self-employed (a fall of 17 points) than it was among those who were not self-employed (a loss of 15 points). In 1998, the shift away from the Coalition among blue-collar respondents – partly to Labor, partly to Hanson – was almost entirely due to the massive desertion of those who were self-employed (down by 26 points compared to a drop of just 2 points among those who were not self-employed).

In 2004, again, it was blue-collar respondents who were self-employed who swelled the Coalition’s ranks; support for the Coalition among these respondents jumped by 19 points whereas among those not self-employed it rose by just 2 points. Nonetheless, in 2004 support for the Coalition among blue-collar respondents who were self-employed (59 per cent) was substantially lower than it had been in 1996 (69 per cent); support for Labor was much higher (32 per cent compared with 19 per cent). Among blue-collar workers not self-employed, the Coalition’s share of the vote in 2004 (36 per cent) was about the same as it had been in 1996 (35 per cent).

In short, if Keating lost a large swag of the blue-collar self-employed to Howard in 1996, Howard had considerable difficulty holding them; in 1998 and 2001, he lost all of them – and a lot more besides. Over 1996–2004 Howard was much more successful holding on to blue-collar workers who were not self-employed.

Victories compared: 1987–93 to 1996–2004

To understand which of the many characteristics associated with employment actually shaped the choices of our respondents, we need to move from bivariate to multivariate analysis. This allows us to plug in a range of other variables: characteristics of the respondents, like their age, gender, place of birth and religion; and their position on a whole raft of issues.

We started by aggregating the AES data from 1987 to 1993, Labor victories, and from 1996 to 2004, Coalition wins. We then ran two multinomial logit models to see whether there were any notable trends in the socio-structural basis of support for the parties. In the first set of results the independent variables were gender, age, occupation, private-sector employment, trade union membership, place of birth, religion and marital status. (Full tables are available in the AJPS article.)
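For readers curious about the mechanics, here is a minimal sketch of a multinomial logit of vote choice on a handful of dummy variables. It uses simulated stand-in data rather than the AES file, and the variable names are hypothetical; the published analysis used many more variables:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for the pooled AES file; the real analysis used
# many more variables, survey weights and two pooled periods.
rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "vote": rng.choice(["Coalition", "Labor", "Other"], size=n),
    "male": rng.integers(0, 2, size=n),
    "age_60_plus": rng.integers(0, 2, size=n),
    "blue_collar": rng.integers(0, 2, size=n),
    "health_important": rng.integers(0, 2, size=n),
})

y = pd.Categorical(df["vote"]).codes          # 0 = Coalition is the baseline
X = sm.add_constant(df.drop(columns="vote").astype(float))

res = sm.MNLogit(y, X).fit(disp=False)
# exp(coefficient) is the odds ratio for Labor (or Other) versus the
# Coalition baseline, per one-unit change in the regressor.
print(np.exp(res.params))
```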

In the pre-Howard period the odds of a male (compared with a female) voting for the Coalition rather than for Labor were 0.86, whereas in the Howard years the odds were 0.93. If the odds had shifted to 1.00 this would have meant that men were just as likely as women, other things being equal, to have voted for the Howard government. The modelling suggests that although the Coalition under Howard may have improved its performance among men relative to its performance among women in 1996, as Andrew Robb observed, the improvement is not statistically significant net of other factors.

To find the categories in which the Coalition clearly improved its position or to see where it had slipped we had to look elsewhere. First, to the category of respondents aged 30–39 years, in which the Coalition reduced its disadvantage (compared with those aged 18–29 years), and to the category of respondents aged 60-plus, in which it increased its advantage substantially. If the gains among the 30 to 39-year-old group are surprising, the gains among older voters are not; a study of the Howard decade, undertaken by the National Centre for Social and Economic Modelling, showed “the nation’s most favoured voters” were “part-pensioners with private incomes of $250 to $500 a week.” Second, we looked to Catholics (compared to non-believers), for whom the Coalition reduced its disadvantage as well. Third, we looked at the category comprising those voters with lower levels of formal education, in which the Coalition also reduced its earlier disadvantage.

Although the prime minister also targeted “low- and middle-income families,” the odds of low- or middle-income earners (compared to high-income earners) voting for the Coalition – never high – were no higher in 2004 than they had been in 1993. As for the blue-collar vote, there was no statistically significant change between the two periods. But if the Coalition improved its position, it lost ground as well: its slight disadvantage in the pre-Howard years among migrants from non-English-speaking backgrounds (compared to the Australian-born) grew with Howard in office. Multiculturalism was something the Howard government regarded with suspicion, if not hostility.

To see what factors drove the vote when Howard came to office in 1996, what factors drove the vote in 1993, and what factors have driven the vote thereafter, we modelled a wider range of variables and looked at each election in turn. (Appendix Table A2 in the AJPS article provides more details.)

Our analysis drew on four broad categories of independent variables: variables that touch on campaign issues; variables that measure respondents’ household finances – looking back twelve months and forward twelve months – and the way they thought the broader economy had changed over the past twelve months and might be expected to change in the coming twelve months; a variable designed to assess whether respondents changed their vote from the previous election; and a variable to measure whether they cared a great deal about the outcome of the election. For ease of exposition, we present our key findings as changes in the predicted probabilities of voting for each of the parties. For example, for an “average” respondent in 1993 who rated health issues “extremely important,” the predicted probability of voting for the Coalition was 12 percentage points lower than for an “average” respondent who did not regard health issues as “extremely important.” Note that these percentages are changes in the predicted probabilities for the “average” respondent, and not changes in the party’s share of the total vote. Although a particular attitude might have a considerable impact on the predicted probability, the attitude itself might not be widely shared. To help the reader keep this in mind, we also show the proportion of respondents who shared that particular attitude.
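Continuing the sketch above, a change in predicted probability of this kind can be read off a fitted model by scoring it twice – once with an attitude flag switched on for everyone, once off – and averaging the difference. The `health_important` flag is the hypothetical stand-in from the earlier sketch:

```python
# Reusing `res` and `X` from the sketch above: average change in each
# predicted vote probability when the hypothetical attitude flag is
# switched on versus off, with everything else held at observed values.
X_on, X_off = X.copy(), X.copy()
X_on["health_important"] = 1.0
X_off["health_important"] = 0.0

delta = res.predict(X_on).mean(axis=0) - res.predict(X_off).mean(axis=0)
for party, d in zip(["Coalition", "Labor", "Other"], delta):
    print(f"{party}: {d:+.3f}")
```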

1993: More than just the GST

It is often assumed that the GST rescued Labor in 1993. But there is a good deal more to the story than this. For the average respondent for whom the GST was “extremely important” – and more than half the respondents fell into this category – the predicted probability of voting Labor rose by 14 percentage points compared to the average respondent for whom the GST was not extremely important. But this was not the only issue that mattered. For the average respondent for whom health was “extremely important” – and two-thirds of the respondents fell into this category – the predicted probability of voting Labor rose by 11 percentage points; for the average respondent for whom education was “extremely important” – almost half the respondents – it rose by 11 percentage points; and for the average respondent for whom the environment was “extremely important” – more than a third of the respondents – it rose by 9 percentage points (although minor parties benefited equally at the Coalition’s expense).

As one might expect, adverse economic impacts worked in the opposition’s favour. Thus, the average respondent for whom unemployment was “extremely important” – and two-thirds of the sample fitted this category – was 11 percentage points more likely to have voted for the Coalition. And the average respondent who thought “the general economic situation in Australia as a whole” was likely to be worse “in twelve months’ time” – two respondents out of five – was more likely, by 44 percentage points, to have voted for the Coalition. There were other, smaller, impacts as well.

If the Coalition picked up some of the economic losers in 1993, Labor benefited from some of the economic winners. For the average respondent who thought things would be better in twelve months’ time (and they constituted nearly a third of the sample), the probability of voting Labor rose by 56 percentage points. And the average respondent who thought the “general economic situation” was better than it had been twelve months ago was also more likely (by 16 points) to have voted Labor; no fewer than 63 per cent shared this view. But interest rates, mentioned by half the respondents as “extremely important,” did not shift votes either way.

1996: Taxes, defence and whether the election mattered

Even in defeat, Labor still held the advantage on health and, more narrowly, on the environment (with most of the Coalition losses benefiting the minor parties rather than going directly to Labor). And, despite the best efforts of the Coalition to neutralise the issue, Labor also enjoyed a modest advantage on the issue of privatisation. But Labor no longer held a statistically significant advantage on education. The Coalition held on to its advantage on unemployment, was preferred by those (two-in-five) for whom taxation was “extremely important” (the GST item was dropped for this election so we can’t say whether its impact lingered), and enjoyed a clear advantage on defence (rated “extremely important” by one-quarter of the sample). Again, interest rates counted for nought in influencing the vote, as did industrial relations, although both were rated “extremely important” by nearly half of those interviewed. Immigration and links with Asia, each rated “extremely important” by about a quarter of the sample, also left no mark.

On the health of the economy as a whole, and on the health of household budgets in particular, the pattern of advantage and disadvantage was much as it had been in 1993. But although Labor increased its advantage (38 compared to 16 points) among those who thought the “general economic situation” better than it had been twelve months ago, the proportion who felt the economic situation was better than it had been twelve months ago was lower (46 per cent, down from 63 per cent); and the proportion who felt the “financial situation” of their household was better than it had been twelve months ago was also lower (35 per cent, down from 44 per cent). Although the Coalition benefited from those who thought “the general economic situation in Australia” would be better in twelve months’ time (giving it an advantage over Labor of 44 percentage points among this group), its advantage was not as great as Labor’s in 1993 and the proportion (23 per cent) who shared this view of the economic outlook was not as great as it was in 1993 (30 per cent).

What does stand out (although not shown as predicted probabilities) is the impact of “concern” about the outcome of the poll. For those who “cared a good deal which party won the federal election,” or who voted one way in 1993 and another in 1996, the impacts were large. Among these respondents, the odds of voting for the Coalition over Labor were about 3 to 1. And notwithstanding the view among commentators that Keating’s arrogance was a key factor in the Coalition’s win, this factor shows no statistically significant relationship with the vote.

1998: A referendum on the GST?

If the 1998 election was a “referendum” on the GST, it appears to have ended in a tie; as a vote-shifter it was statistically insignificant. Nor were Labor’s advantage on health and the Coalition’s disadvantage on the environment statistically significant. But education (now “extremely important” to two-thirds of the respondents) remained a Coalition weakness; unemployment, a negative for Labor in 1996, was now a problem for the Coalition; and privatisation remained a Coalition negative as well (although only one-third rated it as “extremely important”). The two issues raised in the AES on which the Coalition enjoyed an advantage were interest rates (for the first time) and, more importantly (because two-thirds rated it “extremely important”), taxation – an issue, unaffected apparently by the GST, on which there had been little change since 1996. Again, industrial relations, immigration and links with Asia left no mark.

The economy was now an advantage to the Coalition in a way that it had not been in 1996 when many more respondents (46 per cent) thought economic conditions had improved in the past twelve months than thought they had gone backwards (18 per cent). In 1998, on the Coalition’s watch, the difference in the proportions who thought the economy had improved and who thought it had deteriorated was small; but the Coalition enjoyed a clear advantage among those who thought things had improved and suffered no statistically significant disadvantage among those who thought things had gone backwards. Again, whereas more respondents after the 1996 election thought the economy would do worse (37 per cent) in the next twelve months rather than better (23 per cent), after the 1998 election more respondents expected it to do better (41 per cent) rather than worse (24 per cent).

Those (20 per cent) who felt their household finances had gone backwards, like those who felt the economy had gone backwards, moved to Labor and the minor parties. More respondents (30 per cent) felt that their household’s finances had improved over the past twelve months, but this did not affect the probability of their voting for the Coalition.

2001: Terror (and immigration), not Tampa

The 2001 election might have been characterised as the “Tampa election” but, on the evidence of the AES, it was not; although half the sample (49 per cent) thought refugees an “extremely important” issue, the probability that those who thought this way had voted for the Coalition was not significantly greater than the probability that they had voted for Labor. What did work for the Coalition was the related issue of immigration; having not worked for the Coalition in 1996 or 1998, the issue of immigration – “We will decide who comes to this country and the circumstances in which they come,” as John Howard put it – increased the predicted probability of voting for the Coalition by 11 percentage points among those respondents (48 per cent) for whom it was “extremely important.” Again, although defence did not work for the Coalition (despite being nominated as “extremely important” by 50 per cent), the related issue of terrorism did; identified as “extremely important” by half (52 per cent) of the respondents, terrorism increased the predicted probability of voting for the Coalition by 18 percentage points, taking support not only from Labor but also from the minor parties. In short, the main issues were not refugees and terrorism but immigration and terrorism. Nonetheless, on neither issue was the Coalition’s advantage as great as many commentators imagined.

On domestic issues, the news for Labor was generally good. It regained the edge on health, retained an even stronger edge on education and on unemployment, and on the GST – despite the limited nature of its proposed “rollback” – it enjoyed a remarkable advantage (26 percentage points) among those (45 per cent of the sample) who rated the issue “extremely important.” On the more general issue of taxation, the Coalition also made no headway. One area in which Labor lacked strength was the environment. And for the third time in succession, industrial relations left no mark.

On the economy and on household finances, the news for the Coalition was much better as, for the most part, rosy assessments outnumbered gloomy ones. Those who thought things had improved in the past twelve months (41 per cent) or would improve in the next twelve months (37 per cent) were more likely to have voted for the Coalition. Those who thought things had got worse in the past twelve months (only 25 per cent) or would get worse in the next twelve months (42 per cent) were more likely to have voted Labor. But the former was a relatively small group and Labor’s overall advantage on these two measures (7 percentage points) was relatively narrow. Similarly, those who thought their household finances had improved (41 per cent) were more likely to have voted for the Coalition than for Labor. And although those who thought their household finances had gone backwards (21 per cent) were more likely to have voted Labor, they were only half as numerous as those who thought their household circumstances had improved.

Nor should we overlook the advantage the Coalition enjoyed among respondents for whom it really mattered which party won. Although the advantage was not as great as it was in 1996, it was significant nonetheless.

2004: Who can you trust?

There seems little doubt that interest rates won the 2004 election for the Coalition. Among respondents who rated interest rates as extremely important (46 per cent of the sample), the probability of voting for the Coalition was 24 percentage points higher than the probability of voting for Labor. Those who thought the economy was better than it had been twelve months earlier favoured the Coalition by a similar margin; but they accounted for only 16 per cent of the sample. Those who thought the country would be better off in twelve months’ time favoured the Coalition by a greater margin, but those who thought the country would be worse off in twelve months counterbalanced them. And, whereas the Coalition gained no advantage from those who thought their household’s position had improved over the past twelve months, it lost votes among those who thought their households had gone backwards.

Interest rates apart, and allowing for the fact that the AES did not ask about forest policy, domestic issues appear not to have served the Coalition well. On health, education and the environment – issues rated “extremely important” by upwards of half of those interviewed – Labor enjoyed a clear advantage. Labor’s strong showing on health, which was rated “extremely important” by an exceptionally high 75 per cent, was not negated by the controversy generated by its Medicare Gold policy; nor was its lead on education overtaken by its schools funding policy. On unemployment, Labor was also ahead. Industrial relations and taxation made no difference either way.

For the Coalition the Iraq war proved as big a liability as any; among those who thought the issue “extremely important” (36 per cent of the sample) the chances of voting for the Coalition declined by 26 percentage points, with the probability of voting for Labor increasing by almost as much. But the Coalition’s losses over the war in Iraq were more than offset by its gains from defence and the issue of terrorism. On defence, which was rated “extremely important” by half (51 per cent), the probability of voting for the Coalition increased by 19 points; on terrorism, which was rated “extremely important” by 49 per cent, the probability of voting for the Coalition also rose by 19 points. And, although no advantage accrued to it through refugees, the issue of immigration again gave the Coalition a boost.

Battlers, Catholics and trade unionists

Contrary to the view that Howard won a new constituency in 1996 and held on to it, more or less, until some time after the 2004 election, our analysis highlights the volatile nature of the Coalition’s gains over this period. In 1998 the Coalition’s vote among blue-collar respondents dropped close to where it had been in 1993 and remained there. Not until 2004 did Howard win most of this constituency back.

If we compare the AES data for the pre-Howard elections (1987–93) with those that cover the elections from 1996 to 2004, we find no trend in blue-collar support for the Coalition net of other demographic factors. What our analysis suggests is that the changes in the Coalition’s fortunes during those years had more to do with education than with occupation. Howard built his support not so much among blue-collar workers as among voters with relatively low levels of education. In addition, he extended the Coalition’s advantage among older voters (aged 60-plus years). So the swing to the Coalition may have been less the story of a shift in the labour market than a story about populist right-wing politics mediated by talkback radio – a medium that Howard made his own – and pitched at voters with limited education and at older voters with a less critical insight into social and political affairs.

What of the change in the “Catholic vote”? One possibility is a re-run of the old story: Catholic “aspiration.” But why aspiration should be particularly marked among Catholics is less obvious now than it was in the heyday of the Democratic Labor Party – and even then it was far from clear that aspirations of a material kind were the key to the shift in the Catholic vote. Another possibility is that the Catholic connection with Labor is partly the product of a church whose teachings on issues like asylum seekers are more liberal than conservative; with falls in church attendance, however, increasing numbers of Catholics have become available to parties of the right. But since the 1996 data suggest that Catholics who attended church most often were more likely to support the Coalition, this seems unlikely. It is more likely that Howard, who prided himself on the number of Catholics in his cabinet, shifted these voters by emphasising conservative values that Catholics endorse.

And what of the trade union vote? Of all the demographic variables, trade union membership is possibly the strongest, and certainly the most consistent, predictor of the Labor vote.

Party convergence?

One of the things our research helps revive is the notion that issues matter. In 1996, unemployment, privatisation, the environment and defence made a difference; in 1998, unemployment, privatisation, education and interest rates made a difference; in 2001, unemployment, health and Medicare, and “the war on terror” made a difference; and in 2004, unemployment, interest rates, defence, “the war on terror” and the Iraq war made a difference. This is a formidable list. That some issues (defence, terrorism and interest rates) have worked in the Coalition’s favour whereas others (health, education, privatisation and the environment) have benefited Labor lends weight to scepticism about claims that we are living through an era of party “convergence.”

Judgements that the state of the economy had improved in the past twelve months were invariably more powerful influences on the vote than judgements that household finances had improved in the past twelve months. But judgements that household finances had gone backwards in the past twelve months were more powerful in 1998, 2001 and 2004, than judgements that the country had gone backwards in the past twelve months; only in 1993 and 1996 were these relative weights reversed.

Although our conclusion does not necessarily confound the notion that voters are overwhelmingly egocentric (sociotropic judgements may be self-centred), it does confound the notion that it is simply their “pocketbooks” that govern how people vote.

Finally, we note that in 1996 those respondents who cared “a good deal” which party won helped bring to an end Labor’s thirteen years in office and, in 2004, they helped repel Labor’s third attempt to win it back. What is interesting here is that the basis for caring which party won apparently lay not in any commitment to the notion that a change of government was a good thing in itself; rather, it seemed to spring from the notion that which party was in office actually mattered. If this is so, it is another blow to the fashionable notion that the parties are becoming increasingly indistinct. It is also a blow to the notion that contemporary elections are decided simply by those voters who, but for compulsory voting, wouldn’t bother to vote. •

A note on the Australian Election Study: All of the surveys are carried out after the respective elections. Liberal voters are slightly over-represented (although we correct for this by weighting the data by electoral returns). Some of the items are less than satisfactory measures of key social issues; respondents, for example, are asked about “education” rather than “educational standards,” “educational costs” or “educational choice.” There are inconsistencies: questions were asked on defence, but not in 1998; on interest rates, but not in 2001; and on mortgage repayments, but only in 1996. And for none of the elections are there data on the importance respondents attached to Aboriginal issues, a republic or any aspect of family policy.
