In the end, it wasn’t even close. The United Conservative Party (UCP) won a massive majority in Alberta’s recent election. The new party – formed in 2017 following the merger of the old Progressive Conservative Party and the Wildrose Party – captured 55 percent of the vote, ousting Rachel Notley’s NDP after a single term. The UCP, led by former federal cabinet minister Jason Kenney, netted 63 of the 87 seats in the Legislative Assembly.
And yet nine of the public opinion polls released during the campaign – nearly half of them – suggested the distance between the NDP and UCP was in the single digits. Towards the end of the campaign, some media outlets cited polls to back up the idea that the race was tightening.
The UCP won the election by 22 percentage points – taking 55 percent of the ballots cast to the NDP’s 33 percent. That’s a wide gap. Clearly, the race wasn’t tightening.
How did some of the polls get it so wrong? Historically, public opinion polls in Alberta have underestimated conservative support. Any reading of an Alberta provincial poll should therefore assume that it understates the true extent of conservative support, and both the news media and the public opinion research industry should acknowledge polls’ limitations.
A history of getting it wrong
In 2012, many polls showed the Wildrose Party leading the Progressive Conservatives (PCs). The PCs won a comfortable majority on election night. In Calgary’s 2017 municipal election, three polls suggested the city’s popular mayor was going down to defeat to a relatively unknown challenger.
In the most recent provincial election campaign, the total absolute error for polls was almost 15 points. (Total absolute error sums, across parties, the absolute differences between each party’s polled share and its actual vote share; here it is calculated using all four of the mainline parties running in the recent election – the UCP, the NDP, the Liberal Party and the Alberta Party.)
By comparison, the total absolute error in 2012 was 21 points, and for the post-debate polls in 2015 – the election that brought the NDP to power – it was 14.5 points. Keep in mind, the total absolute error for the 2013 British Columbia campaign – an election that had many pollsters hanging their heads in shame – was 17 points. In other words, Alberta’s recent campaign polling was almost as bad as BC’s 2013 failure.
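To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The actual shares are rounded from the results above (roughly 9 percent for the Alberta Party and 1 percent for the Liberals); the poll numbers are hypothetical, invented only to show how the measure works.

```python
# A minimal sketch of "total absolute error": sum the absolute gaps between
# each party's polled share and its actual vote share.
# The poll numbers are hypothetical, purely for illustration.

actual = {"UCP": 55, "NDP": 33, "Alberta Party": 9, "Liberal": 1}
poll = {"UCP": 48, "NDP": 40, "Alberta Party": 9, "Liberal": 3}  # hypothetical

total_abs_error = sum(abs(poll[party] - actual[party]) for party in actual)
print(total_abs_error)  # 7 + 7 + 0 + 2 = 16 points
```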
The recent public opinion polls consistently overestimated NDP support and underestimated UCP support. And this underestimation of conservative support isn’t a new phenomenon.
Trying to capture the elusive Tory supporter
Social scientists point to a few possible reasons for underestimating conservative voters: (1) they refuse to participate in surveys, (2) they lie about who they intend to vote for, and (3) pollsters are not reaching them.
In the United Kingdom in 2015, pollsters famously failed to account for the “shy Tories” who helped propel David Cameron to a surprising win. Australian polls likewise underestimated conservative support in that country’s federal election in May 2019.
These reticent — or even ashamed, it seems, in the UK — voters don’t feel comfortable admitting to pollsters that they plan to vote for conservative parties. Amid the rise of the alt-right and far right, there is, arguably, a stigma attached to conservative politics. Elisabeth Noelle-Neumann coined the term “spiral of silence” in 1974 to explain how people’s perceptions of which views are socially desirable can, in fact, shape public opinion. The German political scientist argued that people dislike expressing views that run counter to mainstream thought, so they stay silent rather than challenge the perceived consensus.
On top of not wanting to share how they vote, shy Tories also appear less likely to participate in public opinion research. The thinking is that these voters hang up on pollsters, including on robo-polls (also known as IVR, or interactive voice response), when conservatives hold power, because they are happy with the status quo.
At the end of the day, pollsters just aren’t reaching these people, and using different methods doesn’t appear to make a difference.
Different methods, similar results
Neither the interview mode (how people were asked who they intended to vote for) nor the method of assembling the sample (random digit dialing [RDD], online panels) guarantees that a poll will accurately capture voters’ intentions.
Let’s consider the three surveys that were closest to predicting the actual vote, within each survey’s margin of error.
Forum, which had the UCP at 55 percent, used random digit dialing to recruit its respondents. The polling firm then fielded its survey using IVR to ask people who they intended to vote for on April 16.
Janet Brown Opinion Research was also close, with the UCP at 53 percent. Its sample was also collected using RDD. Respondents could choose either to share their vote intention with a live telephone interviewer or to complete the survey online. A vast majority (90 percent) chose to talk with a human being. (Full disclosure: I have worked with Janet Brown in the past as a journalist on a political research project conducted by CBC News. I think Janet does good work.)
Angus Reid’s survey, which came out four days before the vote, had the UCP at 52 percent. The data came from an online survey panel. Respondents were randomly selected from the company’s opt-in panel to answer questions online.
These three different polls achieved similar results despite using different methods. Still, two out of the three slightly underestimated conservative support.
If the problem is shy Tories, what can pollsters do?
Looking harder for the shy Tory
The United Kingdom’s polling failure in 2015, which systematically under-represented conservative supporters, offers some potentially helpful insight for Canadian pollsters. In that general election, not a single public poll overestimated Tory support – a pattern that points to systematic error, not random chance.
Patrick Sturgis, the University of Southampton professor who led the panel of experts that reviewed what went wrong with the UK’s polls, called on the survey research industry to shift its “emphasis away from quantity and towards quality,” and to be “more imaginative and proactive” in their efforts to find those elusive shy conservatives.
In their final report for the British Polling Council and the Market Research Society, Sturgis and the other experts urged pollsters to work harder at recruiting respondents who reflect what is known about the makeup of the population. Pollsters can compare known census demographics — age, sex, education level, region — with their samples. For example, some polling methods produce samples that underestimate the proportion of older people in the population, so pollsters need to work harder to recruit them. The academics also asked the industry to consider new quota and weighting variables, to ensure that pollsters actually reach the people who often get missed in survey research.
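To illustrate what that kind of adjustment looks like in practice, here is a minimal sketch of demographic weighting in Python. All of the shares are invented for the example; real pollsters would draw their targets from census data.

```python
# A minimal sketch of demographic weighting: compare the sample's makeup to
# known population shares and weight each respondent's answers accordingly.
# All shares below are invented for illustration, not real census data.

census_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # population targets
sample_share = {"18-34": 0.20, "35-54": 0.35, "55+": 0.45}  # what the poll got

# Under-represented groups count for more; over-represented groups for less.
weights = {group: census_share[group] / sample_share[group]
           for group in census_share}
print(weights)  # 18-34 weighted up (1.5), 55+ weighted down (~0.78)
```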
For some Canadian pollsters, that advice may translate into being more uncompromising about setting – and sticking to – quotas for variables such as age, sex, education and where people live.
This won’t be easy for the polling firms. They will need to stay in the field longer to fill those quotas, they will likely spend more money, and they might face breaking news stories or dynamic campaign moments that suddenly alter the course of public opinion right in the middle of collecting data.
Some humility about polling results
Beyond the polling method used, pollsters – and the news media – need to be much more transparent about the limitations of survey research. Polls are snapshots in time, and the quoted margin of error captures only one source of uncertainty: sampling. Transparency about these limits is required now more than ever, from both the pollsters and the news media that report their data.
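For readers curious where that margin of error comes from, here is a rough sketch of the standard formula. It assumes a simple random sample, an assumption that opt-in panels and IVR polls only approximate, which is one reason the quoted margin understates the real uncertainty.

```python
import math

# Back-of-envelope margin of error for a polled proportion p with sample
# size n, at 95 percent confidence (z = 1.96). Assumes a simple random
# sample, which real-world polls only approximate.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll: about +/- 3.1 points, 19 times out of 20.
print(round(margin_of_error(1000) * 100, 1))
```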
Polls matter. They can shape public discourse and sometimes even influence campaigns. Voters – especially those looking to vote against an incumbent – sometimes turn to polls to see who has the best chance of winning.
In the wake of the British polling failure in 2015, Sturgis urged the public – and the news media – to recognize that polls are not perfect.
“Even if we move to the most expensive random survey that you can possibly imagine,” he told The Guardian, “there would still be a chance that you would get it wrong.”
No one wants to get it wrong. But the probability theory on which polling rests all but guarantees that, from time to time, someone will.