In the aftermath of the January 23 election, Canada’s pollsters once again found themselves on the defensive. The industry was spared the embarrassment of a repeat of what happened during the 2004 election, since this time virtually all the major firms had predicted a Conservative minority, but when it came to the seat breakdown, the final results bore little resemblance to what pollsters had confidently projected.

Two days before the 2006 election, polling firm Ipsos Reid predicted a Conservative government with 143-147 seats and pegged the Liberals at only 59-63 seats. Ipsos also predicted that the NDP would make great gains and elect 39-43 MPs, while the Bloc would increase its seats in Quebec to 59-63. Meanwhile, Wilfrid Laurier University professor Barry Kay, a veteran number cruncher with years of seat projections under his belt, predicted similar numbers: Conservatives 142, Liberals 80, NDP 29, BQ 56 and 1 independent. And for the first time, political junkies could play along with professional pollsters, thanks to the Hill and Knowlton seat projection model, a free online tool that gave instant seat projections based on whatever numbers the user fed into it.

The site also included ready-made seat projections based on recently released polls.

Using the latest numbers from SES, Léger, the Strategic Counsel, and EKOS, H&K’s engine projected that the Conservatives would win anywhere between 141 and 154 seats, and the Liberals anywhere from 57 to 82, depending on which poll’s numbers were fed in. But after the ballots were counted, the Conservatives wound up with just 124 seats, against 103 for the Liberals, 51 for the Bloc, 29 for the NDP, and 1 Independent. Of the pollsters that made specific seat projections, not one had given the Liberals more than 90 seats, and virtually all had the Conservatives on the verge of a majority.

Except, that is, for SES Research, the polling firm behind the trend-setting CPAC daily tracking polls, which had also stood out from the pack during the previous election for the surprising accuracy with which it predicted the changing public opinion landscape.

Parliamentary channel CPAC had commissioned the then little-known polling company SES Research to produce daily tracking polls throughout the 2004 election, which were posted every afternoon on the channel’s Web site. SES president Nik Nanos also became a regular guest on CPAC’s nightly election wrap-up, where he would discuss the latest numbers. Although the SES polls proved almost instantly addictive for media and politicos, who welcomed the daily dose of data, few pollsters followed suit, instead continuing to put out weekly numbers based on larger sample populations with a far smaller margin of error.

But a funny thing happened over the course of the 2004 campaign: despite the comparatively small sample size and sizeable margin of error, the SES daily polls proved to be a remarkably accurate barometer of shifting trends. It was Nanos who first revealed the steady growth of Conservative support until the final week of the race, and whose numbers pinpointed the moment of the 11th-hour Tory plummet following several high-profile gaffes on the campaign trail.

Given the sleeper success of the CPAC-SES experiment in 2004, it was hardly a surprise, then, when during the lead-up to the November 28 election call, The Globe and Mail/CTV media empire announced that it would also be producing daily rolling tracking polls, courtesy of industry heavyweight Strategic Counsel. With a slightly larger rolling sample than SES (1,500 versus 1,200, built from 500 interviews per night compared to 400) and a correspondingly lower margin of error (averaging 2.5 to 3.1 percentage points), the battle of the dailies was on.
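Mechanically, a rolling tracking poll pools the most recent few nights of interviews and drops the oldest night each day. A minimal sketch, assuming the three-night window implied by the figures above (3 × 400 = 1,200 for SES; 3 × 500 = 1,500 for Strategic Counsel) and invented response data:

```python
# Illustrative sketch of a rolling tracking poll, not any firm's actual
# pipeline. The three-night window is an assumption inferred from the
# sample sizes quoted in the text.

def rolling_shares(nightly_responses, window=3):
    """Pool the most recent `window` nights and report party shares.

    nightly_responses: list of lists, one inner list of party names
    (one entry per completed interview) per night of calling.
    Returns one {party: percent} dict per published day.
    """
    published = []
    for day in range(window - 1, len(nightly_responses)):
        pooled = [vote
                  for night in nightly_responses[day - window + 1 : day + 1]
                  for vote in night]
        shares = {p: round(100 * pooled.count(p) / len(pooled), 1)
                  for p in set(pooled)}
        published.append(shares)
    return published

# Toy data: three nights of 400 interviews each yield one published
# poll pooling 1,200 respondents.
nights = [["Con"] * 150 + ["Lib"] * 140 + ["NDP"] * 70 + ["BQ"] * 40] * 3
print(rolling_shares(nights))
```

Because each published number shares two nights of interviews with the previous day’s, day-to-day movement is damped, which is why a single bad night shows up gradually rather than all at once.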

For the first few weeks of the campaign, the SES and Strategic Counsel numbers weren’t that far apart, although SES consistently found slightly higher support for the Liberals, and lower support for the Conservatives. But as the election dragged on, the gap widened until January 17, when Strategic Counsel showed the Conservatives at 42 percent support nationally, with the Liberals trailing badly at 24 percent: blowout numbers. In contrast, the SES poll from the same day had the Tories at 36.6 percent and the Liberals at 31.5 percent.

The consistent disparity between SES and Strategic Counsel had already led to an almost viral rivalry amongst political partisans. Conservative supporters, in particular, seemed deeply skeptical of the SES polls, which they dismissed as hopelessly skewed.

They were critical of the SES methodology, by which Nanos eliminated undecided voters from his results rather than redistributing them. Not surprisingly, “progressive voters,” including both NDP and Liberal partisans, accused Strategic Counsel of slanting the results in favour of the Conservatives. But it was the now infamous 42-24 Strategic Counsel poll that sent shockwaves through the country’s political elite and public opinion research community. Overnight, every pundit, both on- and off-line, became an instant expert on the science of polling, as rabid debates over methodological minutiae raged on, both in cyberspace and the real world.
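The dispute over undecided voters is easy to make concrete. A hedged sketch with invented numbers, claiming to be neither SES’s nor any other firm’s exact procedure: dropping undecideds and redistributing them proportionally produce identical published shares, so the two approaches only diverge when a firm redistributes unevenly, for instance by recalled past vote.

```python
# Illustrative only: the party counts are invented, and neither function
# is claimed to be any polling firm's actual method.

def drop_undecided(counts):
    """Report shares among decided respondents only."""
    decided = {p: n for p, n in counts.items() if p != "Undecided"}
    total = sum(decided.values())
    return {p: round(100 * n / total, 1) for p, n in decided.items()}

def redistribute_undecided(counts, weights):
    """Allocate undecided respondents to parties according to `weights`
    (e.g. recalled past vote), then report shares of the full sample."""
    undecided = counts.get("Undecided", 0)
    decided = {p: n for p, n in counts.items() if p != "Undecided"}
    wsum = sum(weights.values())
    alloc = {p: n + undecided * weights.get(p, 0) / wsum
             for p, n in decided.items()}
    total = sum(alloc.values())
    return {p: round(100 * n / total, 1) for p, n in alloc.items()}

sample = {"Conservative": 360, "Liberal": 320, "NDP": 170, "Bloc": 100,
          "Undecided": 50}

# Proportional weights reproduce the drop-undecided shares exactly...
proportional = {p: n for p, n in sample.items() if p != "Undecided"}
# ...while skewed weights (here, every undecided leaning Liberal) shift them.
skewed = {"Liberal": 1}
```

In the skewed case the Liberal share rises from 33.7 to 37.0 percent on the same raw interviews, which is the kind of gap the partisan debate was really about.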

On the other side of the debate, those skeptical of the Strategic Counsel results claimed that the pollster’s opening question, which asked respondents which party had the most momentum heading into the federal election, could have created a pro-Conservative bias in subsequent answers.

Strategic Counsel managing partner Tim Woolstencroft disagreed, saying that momentum is a precursor of changes in vote intention. “Not always, but often,” he said. “If there’s a shift in momentum, we see a shift in vote intention. Earlier in the campaign, the Liberals saw more momentum, therefore their numbers were better. We were capturing that shift in the Conservative momentum.”

Woolstencroft said that evidence of such a pattern has existed in every election over the last 20 years. It does not bias the outcome of the results, he said. “If you have a premise in the questions to lead the respondents, there can be a bias. If there are no questions preceding the vote intention question, then there’s not enough in the other questions to have a material impact.”

Woolstencroft said that the Strategic Counsel saw the Conservatives’ momentum declining both before and after it pulled the momentum question, and that the question therefore did not cause the closing gap between the Liberals and the Conservatives. “We saw that the vote intent was more narrow than the momentum,” he said, adding that his firm took out the question because there was no analytical value to it. “That late stage in the campaign, it was up to 71 percent momentum for the Conservatives, so it wasn’t shedding new light. We made room for other questions.”

In fact, two days after the release of the 42-24 poll, the Strategic Counsel quietly pulled the momentum question from the survey, replacing it with a more neutral query on whether it was the right time for a new government. By January 21, the 18-point Conservative lead had shrunk to just 10 points, giving the Tories 37 percent to the Liberals’ 27 percent.

The same day that the momentum question was removed, SES’s Nik Nanos said, Globe and Mail reporter Jill Mahoney interviewed him in his office for an hour for a story on the discrepancy in the numbers.

“I explained that on most occasions the polls were relatively consistent factoring in the margins of accuracy,” Nanos recalled.

“Methodologically there were two major differences between the Strategic Counsel and SES Research. The Strategic Counsel has a preference for placing the ballot question later in the interview while SES places the ballot question near the very beginning. Other than that, SES Research is the only pollster that asks an open-ended ballot question without prompting for parties or party leaders. Our preference is to have a clean ballot question without any content related to party or party leader. This allows Canadians to verbalize their own voting preference as opposed to choosing from a list.”

Despite the furor surrounding the competing results of the rival pollsters, Mahoney’s story was never published.

According to Globe and Mail managing editor Colin MacKenzie, it was mostly written about 10 days before the election, or several days before the 42-24 poll was published. The rogue poll had been buried inside the paper rather than played on the front, a clear sign that the Globe was troubled by the huge 18-point spread.

“It wasn’t particularly illuminating. It was meant to be a story about the proliferation of polling and, as often happens among that tribe, descended into finger-pointing from the various firms,” MacKenzie said.

As for the momentum question, he said: “we pulled [it] out late in the race to get more questions on last minute issues such as leader preference and that battery on whether a Tory government would be a good thing or a bad thing. Our tracking had pulled back from that unhappy 18-point number by that point and didn’t show any effect from the removal of the momentum question.”

When the ballots were counted on January 23, it was SES that came within one-tenth of a point of matching the national numbers for the top four parties.

“Not only were the national numbers right, but the sub-regional numbers were where they should’ve been,” Nanos noted. “I think of it as scoring 100 percent on a test, and it doesn’t happen very often.”

So did every other pollster get it wrong? Not necessarily, says one polling expert.

”œSome firms that continued polling up until election day did come very close to the final national results,” said Scott Bennett, a Carleton University political science professor who specializes in quantitative research methods.

He believes that Prime Minister Martin’s last-minute attacks on Stephen Harper at the end of the 56-day election campaign created a small upswing for the Liberals, resulting in more seats than predicted for the party that had been in power for 12 years.

“The more accurate firms were able to catch this last-minute upswing, which was small and sometimes in the range of sampling error. I was not totally surprised at the final result because Martin’s efforts in the last part of the campaign did produce impact in some of the very last polls.” Sample size, he says, is not everything.

“Apart from timing of polls, I think some of [the difference in numbers] is due to differences in the way firms structure their questions and how they use survey response categories,” Bennett said. “SES did particularly accurate work this time around, and their approach to the use of response categories differed markedly from many and probably most other firms. In many respects, some of these other methodological considerations seemed to have proved just as, and perhaps more, important than absolute sample size.”

Former Liberal Party pollster Michael Marzolini agrees that in some cases the questions that were asked may have had an effect on the accuracy of the results.

“In cases where it wasn’t as accurate, we have to review the methodology, and whether we can ask the question, ‘Do you support a change in government?’” Marzolini said. “That underestimates Liberal support, because the people who answered ‘yes’ to that question would be hypocritical if they had just told the pollster that they wanted change and then said that they would vote Liberal, so the number of people who will answer ‘Liberal’ declines. We didn’t discover this until recently, and it does create a definite bias. I don’t think all the pollsters are up on that.”

Strategic Counsel’s Tim Woolstencroft said he wasn’t surprised that the race ended up so close. “We suspected that there was an overestimation in Bloc support and the NDP did better in the end, but we did have a sample on Sunday night [the day before the election] that showed a tightening in the race,” he said.

“We also saw that turnout affected the Liberal numbers. The more people who voted, the better the Liberals did.”

Nevertheless, the Strategic Counsel’s final poll on January 22 did not reflect any movement away from the Conservatives over the final weekend. The firm’s final 37-27 spread held, and early on election night on CTV, Strategic Counsel chair Allan Gregg projected “a comfortable” Conservative minority.

While most firms came reasonably close in predicting national support, when it came to seat projections they were all over the map.

According to Marzolini, it all comes down to the accuracy, or lack thereof, of seat projection models.

“It’s voodoo,” he said, “unless you’re breaking down the country into 60 different chunks of about five ridings each, and doing 18,000 interviews,” which is how he did it for the Liberals. “I’d know to the seat how the party was doing.”

Outside of internal polls, however, Marzolini points out that the most accurate prediction models during this election were the ones that did a provincial breakdown, such as Democraticspace.com, or Léger Marketing’s January 21 poll of 2,000 respondents, which broke out the regions of Quebec and clearly pointed to the Conservative breakthrough in the 418 region of Quebec City and the South Shore.

But even those models had problems. “The reason the seat projections weren’t all that great in Ontario,” said Marzolini, “was that they were applying Ontario numbers to rural seats, which underestimates the Conservative vote, and to urban seats, which underestimates the Liberal vote.”

As for the model featured on the Hill and Knowlton Web site, although it quickly became the hottest online game around for political junkies, particularly those in the media, veteran pollsters questioned the methodology behind the model and dismissed it as little more than a toy.

Not all pollsters relied solely on their company’s particular projection model to make their final predictions, however. Although his final polling results showed a ten-point lead for the Conservatives, EKOS chairman Frank Graves predicted a Conservative minority of “about 125 seats, plus or minus 5 seats,” as well as a Liberal-led opposition, and “modest gains” for both the Bloc and the NDP.

With the exception of the reduction in Bloc numbers, his prediction was dead on. So what was his secret? Understanding the limits of the seat projection model.

“It is only useful to the degree that voting intention data are fresh and reliable,” Graves said.

For EKOS, that meant holding back on predicting the exact number of seats for the Liberals or the other opposition parties, and recognizing, at least in the last two elections, the reality of a quirk of political physics known as the “Liberal bounce.”

“It’s not a consistent law, but in 2004, we predicted a Liberal minority, even though when we plugged our last survey data into the projection, it showed a Conservative minority. All the movements and tentativeness indicated that there was a whole bunch of voters who were willing to swing to the Liberals.”

As it turned out, the effect was still there during this election.

“It’s okay to have your numbers close, but looking at the data and saying what is going to happen is different, so we abstained from making quantitative seat predictions for the Liberals and the NDP because there was too much movement at play.”

For those voters, the concern was a Conservative government, according to Graves: “Thirty percent or more of NDP voters said the thought of a Conservative majority would cause them to reconsider their vote, so not only was the potential there, we had strong evidence that it would be there again. But we didn’t think it would be big enough to have the same effect as last time, since there was also a different dynamic.”

Another factor, of course, is the quality of the prediction model itself, he points out.

“There are some crummy models out there that don’t work, and some people were working with very similar data and ended up with the wrong predictions, so I think they were using the wrong models,” Graves said. “A good model is created through a process of statistically regressing previous popular vote results against actual seat outcomes, and it’s just a matter of saying, on average, these popular votes in these regions produce these results.”
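The regression-based approach Graves describes can be illustrated with its simplest cousin, a uniform regional swing model: shift every riding’s previous result by the change in region-level party support, then count the winners. A sketch, not EKOS’s actual model; the three ridings are invented, while the region-wide shares approximate the 2004 and 2006 results for Ontario and Quebec.

```python
# Toy uniform-swing seat projection. A production model would instead be
# fitted by regression over several past elections, as Graves describes.

def project_seats(ridings, prev_support, new_support):
    """ridings: list of (region, {party: previous riding vote share}).
    prev_support / new_support: {region: {party: region-wide share}}.
    Applies the regional change to each riding and tallies winners."""
    seats = {}
    for region, shares in ridings:
        swung = {party: share + new_support[region][party]
                               - prev_support[region][party]
                 for party, share in shares.items()}
        winner = max(swung, key=swung.get)
        seats[winner] = seats.get(winner, 0) + 1
    return seats

# Invented ridings; region totals approximate the 2004 and 2006 results.
ridings = [
    ("ON", {"Lib": 45.0, "Con": 35.0, "NDP": 20.0}),
    ("ON", {"Lib": 38.0, "Con": 42.0, "NDP": 20.0}),
    ("QC", {"Lib": 34.0, "BQ": 48.0, "Con": 18.0}),
]
prev_support = {"ON": {"Lib": 44.7, "Con": 31.5, "NDP": 18.1},
                "QC": {"Lib": 33.9, "BQ": 48.9, "Con": 8.8}}
new_support = {"ON": {"Lib": 39.9, "Con": 35.1, "NDP": 19.4},
               "QC": {"Lib": 20.7, "BQ": 42.1, "Con": 24.6}}
```

The model’s weakness is exactly the one Marzolini identified: applying one regional swing to every riding blurs urban-rural differences, which is why finer regional breakdowns did better.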

At EKOS, he says, they ran a number of models and came up with the prediction of 125 Conservative seats, which, with David Emerson crossing the floor to join the Tory cabinet when it was sworn in, happens to be exactly the number of Conservative seats in the new House.

“Our other seat numbers would have been predicted correctly if we had had the final data,” Graves said. “At the end of the day you should be able to look at the numbers and tell where they will end up, so I’m not sure how some of my competitors can consistently come up with these estimates that have the Tories doing better.”

Graves said that trend had more to do with volatile voters than a Conservative bias amongst pollsters: “It was interesting to note that the inconsistency across polls throughout the campaign seemed to apply more to the Liberal numbers than the Conservative numbers (with the notable exception of one late Strategic Counsel poll showing Conservative support much higher). I think this is a product of very high levels of ambiguity amongst conditional Liberal voters who were torn between censuring the Liberals and fear of the Conservatives and what they might bring. Conservative voters seemed much more comfortable and locked into their choices.”

The day after the election, in a public posting on the popular political blog maintained by National Post columnist Andrew Coyne, Ipsos chair Darrell Bricker admitted that seat projections were not an exact science.

“We got the direction right, but missed the totals for the Grits and Tories by an unacceptable amount,” he wrote. “It’s no comfort to me that the others who put out serious mathematical models (as opposed to the local knowledge gurus) missed it too.”

In fact, according to Nanos, SES has a policy of not doing seat projections, precisely because there is no established formula in the polling industry for making accurate predictions.

“There are certain accepted standards when people do polls: the questions they ask, the error rate, the sample size. But everyone does seat projections differently. There’s more of an art to doing seat projections. At SES, we just stick to the knitting and leave seat projections to others.”

Nanos believes that the only meaningful benchmark for his polls’ accuracy is the final result of the election itself.

“If the sample is smaller, but the numbers are accurate, how do you reconcile that with reality?”

Although people tend to prefer larger samples, in reality a larger sample only marginally increases accuracy, Nanos said. “Larger samples are more important as a tool to improve the accuracy of sub-samples or regions. In my experience, factors such as question wording, question order, and sample design have a greater impact on the accuracy of research. People tend to place less weight on smaller survey samples, but in the last two elections, SES Research, which had the smallest samples for both, was the most accurate.”
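Nanos’s point about diminishing returns follows directly from the textbook margin-of-error formula, which shrinks only with the square root of the sample size. A quick sketch at 95 percent confidence with the worst-case assumption p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a simple random
    sample of size n; p = 0.5 is the worst case."""
    return round(100 * z * math.sqrt(p * (1 - p) / n), 1)

# Tripling the nightly SES sample of 400 to the pooled 1,200 roughly
# halves the error, but the further climb to 8,000 buys less than two
# additional points of precision.
for n in (400, 1200, 8000):
    print(n, margin_of_error(n))
```

The run prints errors of 4.9, 2.8, and 1.1 points for samples of 400, 1,200, and 8,000, which is why question wording and sample design can matter more than raw sample size once n is in the low thousands.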

In the recent election, he points out, SES Research had the smallest sample of decided voters, but turned out to be accurate to within one-tenth of one percentage point for the four major parties.

“All the other samples were much larger, some with over 8,000 respondents, yet SES Research was the closest to the mark.”

Michael Marzolini devoted a chapter of his last book to the 2004 election. He plans to do the same with this one, and he says he’ll be easier on the pollsters this time around.

“They did a pretty good job, and SES did a marvellous job with the last numbers. It was higher for the Green Party, because they prompt for Green, which gives a higher level of vote share, but the fact was that the Liberals did have more support than in the previous couple of weeks. There was tightening in the urban centres, and it could have gone a number of ways.”

In his comments on the 2004 election, he said that the SES poll would be “very good” if they doubled the sample size, which the company did. He also maintains that all pollsters should poll until election day. “If you do that, you shouldn’t be wrong.”