A decade ago, a group of policy wonks obsessed with gathering hard evidence for decision-making began meeting regularly over beers in Ottawa. They put together a campaign for the creation of an independent evaluation watchdog to make sure taxpayers’ money is spent on federal programs that work.

The idea hasn’t quite gotten off the ground, but the group is more convinced than ever that Canada needs an evaluator general reporting to Parliament, making sure data, evidence and evaluation drive policy and funding decisions.

“We need someone to do the deeper dives and wade into and understand at a level of some depth what policies and programs work for whom, in what conditions and why,” said Steve Montague, a member of the advocacy group and an Ottawa-based management consultant.

“An evaluator-general could act as a check in an era when government doesn’t have enough independent analysis.”

Government evaluators systematically examine a program’s design, its implementation and its ultimate results to understand whether it worked and why. They examine the context or events that triggered the program, such as a terrorist attack, opioid crisis, economic downturn or, at a local level, an increasing number of cyclists killed on a particular route. They look at the relevance or need for the program. Was it effective? Are recipients of program assistance and/or the broader community better off?

The Ottawa group proposing the creation of an evaluator general says this new agent of Parliament would be positioned between the auditor general and the parliamentary budget officer, with the three of them providing independent “advice on the propriety of government spending, the credibility of government budgets and the likelihood that programs and policies will achieve desired objectives.”

The auditor general investigates whether programs are run in compliance with accounting rules. The evaluator general would investigate whether programs are achieving the expected results.

Management consultant Michael Obrecht, considered an architect of the proposal, said the evaluator general would collect and synthesize evaluations, gathering studies from around the world to help improve decision-making and determine the likelihood that programs and policies will achieve desired objectives.

The proposal calls for an office with a $2-million start-up budget and a team of experts in program evaluation that could do its own evaluations as well as assess the reliability and validity of data it gathers globally.

Four decades of evaluations

The government has had an evaluation function in departments for 40 years, but the scope of evaluations is typically narrow – often focused on a specific program rather than on the big policy issues that straddle various departments and other levels of government. Treasury Board has an online database of more than 1,600 evaluations conducted by various departments and agencies over the years.

The advocacy group, however, has long argued the evaluation function is not a priority for departments and has been underutilized for years. For parliamentarians, the reports are too narrow to help them wrestle with complex national and international issues that transcend a single department.

The evaluations have also been criticized as being of poor or questionable quality because departments are evaluating their own work and deputy ministers don’t want to receive bad news that they’ll have to share with their ministers.

“It would be a naive MP who took a departmental performance report or evaluation at face value,” the evaluator general group concluded in one of its papers.

Frank Graves, president of Ekos Research Associates, is a strong supporter of the group’s proposal and argues it’s more needed today than a decade ago, when he wrote a paper calling for an evaluator general “to champion and raise public consciousness about the importance of knowing what works and what doesn’t.”

“The capacity to assemble and interpret rigorous empirical evidence to test causal hypotheses about program effectiveness atrophied badly from the mid-’90s on,” Graves said.

“Despite a [Liberal] commitment to restoring evidence-based policy and decision-making there appears to be little progress to recovering that capacity in the federal public service.”

Are evaluations being marginalized?

New digital technologies that are changing the world at an unprecedented pace are ramping up the pressure on a risk-averse public service.


The Impact and Innovation Unit within the Privy Council Office has been examining new ways to improve the delivery of government programs and to ensure their efficacy. This new focus is partly in response to the unprecedented pace of technological change, which is putting pressure on normally risk-averse federal policy-makers to keep up with Canadians’ expectations. Public servants can no longer create and map out a new policy or program over five to 10 years and assess how well it works after that.

The unit recently released a “Guide to Impact Measurement,” while its sister organization within PCO, the Results and Delivery Unit, was behind the controversial “deliverology” approach to achieving results on political promises and priorities.

The Impact and Innovation Unit is also overseeing a pilot project called the “Impact Genome,” which is using meta-analysis of research studies and predictive analytics to help develop the government’s Youth Employment Strategy.

But some in the evaluation field are worried that the government is marginalizing the traditional theory-based evaluations in which Canada is still seen as a world leader.

The central concern is that a focus on results tends to prioritize short-term targets and what can be measured simply and easily, while missing the bigger picture. A similar concern was flagged in a recent report by the Organization for Economic Co-operation and Development (OECD) on results-based management.

But Privy Council Office officials insist that high-quality evaluations are as important as ever and are central to the guide to impact measurement. New measurement tools, however, enable policy-makers to gauge the likely success of a program before it is rolled out and to make tweaks and adjustments to it once implemented.

International interest in evaluations

Governments around the world are wrestling with how to determine which policies are achieving outcomes, based on evidence rather than politics, ideology or gut feelings.

In fact, Obrecht said the advocacy group’s campaign for an evaluator general got new life when other countries recently called for creation of similar posts.

In Australia, the Labor Party supports the creation of an evaluator general office. The evaluator would be housed inside the Australian Treasury to help assess which policies and programs are working.

Last year, a parliamentary working group in France urged the creation of an autonomous Parliamentary Evaluation Agency in a bid to boost Parliament’s oversight and role in evaluating policies.

Still, it’s unclear how much appetite there is for another agent of Parliament in Canada. A recent report by the Public Policy Forum on the nine existing parliamentary watchdogs concluded that “fewer, stronger agents” would better serve Parliament. The report recommended setting a high bar for creating new agents and considering the consolidation of agents with similar mandates.

The group advocating for an evaluator general agrees on the need for the job, but the structure of the office has generated much debate.

Some argue that a full agent of Parliament position isn’t necessary, and the work could be done by a chief evaluation officer, similar to the chief science advisor appointed by the Liberals. Others say the work could be handled by expanding the role of the auditor general or the comptroller general.

Obrecht argued an independent evaluator general is even more critical with so much more information and misinformation floating around.

“Decision-making seems to be contracted into sound bites, and the capacity to collect extensive information and digest it seem to have been weakened by instant access to easy answers.”




Kathryn May
Kathryn May is the Accenture Fellow on the Future of the Public Service, providing coverage and analysis of the complex issues facing Canada’s federal public service for Policy Options. She has spent 25 years writing about the public service – the country’s largest workforce – and has also covered parliamentary affairs and politics for The Ottawa Citizen, Postmedia Network Inc. and iPolitics. The winner of a National Newspaper Award, she has also researched and written about public service issues for the federal government and research institutes. Follow Kathryn on Twitter: @kathryn_may.
