The term experimentation in the public policy context can sometimes have a negative connotation – bringing to mind everything from uncertainty and risk to failed social experiments. But experimentation shouldn’t be scary. In government, experimentation is the deliberate use of methods that seek to explore, test, and compare the effects and impacts of policies, interventions and approaches, in order to inform and support decision-making. Simply put, experimentation is about systematically learning what works and what doesn’t, and integrating the acquired knowledge into policy- and decision-making processes to arrive at the best outcomes for citizens.

This notion has started to coalesce under the “What Works” banner in places like the United Kingdom and the United States, supported by complementary evidence-focused initiatives, including the US-based Results for All and the UK’s Alliance for Useful Evidence.

Elsewhere, governments have recognized the importance of experimentation and have set up policy units devoted to it. In Finland, for example, experimentation is one of the five priorities of the prime minister and the Experimental Finland team is placed squarely within the prime minister’s office. Denmark and the United Arab Emirates (UAE) also have such self-declared units, and speak openly about engaging in experimentation.

In Canada, the federal government has committed to devote a fixed percentage of program funds to experimenting with new approaches to policies and programs and measuring impact. To that end, several departments have already begun systematically developing experimentation frameworks, units or funding streams.

For example, the Mental Health Commission of Canada conducted a number of experiments to test Housing First, an approach that makes providing housing for homeless people with complex needs the first priority, before addressing other challenges. Participants were randomly placed into either a Housing First group or a group that continued to receive the typical services provided to high-needs homeless persons. These experiments demonstrated that prioritizing housing before other interventions produced better outcomes, both in financial savings for government and in improved quality-of-life measures for participants. The strength of the experimental evidence helped cement Housing First as a practice favoured by governments for tackling homelessness.
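To make the basic logic of such a trial concrete, here is a minimal sketch in Python of the core mechanics: randomly assigning participants to a Housing First group or a treatment-as-usual group and comparing average outcomes. The data are simulated and purely illustrative; this is not the Commission’s actual design or analysis.

```python
import random
import statistics

# Hypothetical illustration only: simulated participants, not data from the
# Mental Health Commission's actual Housing First trials.
random.seed(42)

participants = [f"participant_{i}" for i in range(200)]
random.shuffle(participants)

# Random assignment: half to Housing First, half to treatment-as-usual.
housing_first = participants[:100]
treatment_as_usual = participants[100:]

# Pretend outcome: days stably housed over a follow-up year (simulated).
outcomes = {
    p: random.gauss(280 if p in housing_first else 200, 40)
    for p in participants
}

hf_mean = statistics.mean(outcomes[p] for p in housing_first)
tau_mean = statistics.mean(outcomes[p] for p in treatment_as_usual)

print(f"Housing First mean days housed:      {hf_mean:.1f}")
print(f"Treatment-as-usual mean days housed: {tau_mean:.1f}")
print(f"Estimated effect of the intervention: {hf_mean - tau_mean:.1f} days")
```

Real trials of course involve ethics review, informed consent and far more careful measurement, but the underlying comparison of randomly assigned groups is essentially this simple.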

Just as citizens expect instant, on-demand services, the policy cycle — and indeed the gathering and use of evidence to inform policy decisions — is increasingly expected to move just as fast, despite widely cited reports that, in some contexts such as health care, there can be a 17-year lag before research is translated into on-the-ground changes. The Canada Beyond 150 initiative inside the federal public service is in the midst of training public servants in new methodologies to make them resilient in the face of change.

The public policy toolbox is growing to include such things as design thinking; agile and lean approaches that encourage rapidly testing multiple versions of an initiative; and technologically driven innovation, such as new ways to engage with citizens through social media channels. And in this search for faster, evidence-based policy-making, experimentation presents itself as a unifying thread and an indispensable tool.

Experimentation can be applied throughout a typical policy cycle. What follows is an illustration of what experimentation looks like through the six key steps of public policy-making. (Note: The steps of the policy cycle have been artificially separated for clarity. In reality, policy-making can be much messier: it may start from any of these stages, and it doesn’t always flow in a linear manner.)

The Problem stage, or the identification of a problem or a need, is often the first step in the policy process. The problem may be identified through various means, such as citizen feedback or program evaluation. While experimentation does not necessarily have a formal role in this step, evidence from past experiments and research (along with other sources of insight) can inform both the identification of a problem and further research to understand it (see the next step).

At the Research stage, policy-makers collect evidence, identify gaps and create new knowledge about an issue. Policy evidence comes in various forms: anecdotal accounts, public opinion data, and empirical evidence generated through research designs that compare two or more approaches. Each method comes with strengths and weaknesses.

At the Options stage, possible policy options bubble up to decision-makers, each with pros and cons, and with differing degrees of viability. Depending on the resources and time available, testing ideas on a small scale may strengthen the quality of options offered to decision-makers, thus leading to better-informed policy decisions and potentially saving time and money later on.

The Decision stage determines the direction of a given course of action. This typically involves approval of a particular option, deferral of a decision, inaction, or continued exploration of the issue. Experimentation brings a new choice to decision-makers: to deliberately test and compare, even at a small scale, one or more interventions. Rather than forcing large-scale decisions in the absence of fully conclusive or satisfactory evidence, experimentation can help to “de-risk” decision-making at this step of the policy cycle.

At the Implementation stage, resources are mobilized to ensure the desired decision is put into practice effectively. Experimenting with different ways of delivering a course of action or other aspects of implementation can improve the effectiveness of already developed interventions. It may also ensure that a new intervention achieves the best outcomes possible given its parameters.

Finally, the Evaluation stage provides insights into whether the intervention has succeeded in its stated goals. Evaluation should not be an afterthought; it should be considered right from the Problem stage. Context matters, and even previously tested, successful interventions may yield unforeseen results when rolled out on a larger scale or in an evolving or even slightly different environment. Rigorously evaluating the effectiveness of individual experiments means ensuring that staff with evaluation expertise are available, that adequate financial and human resources are in place, and that there is proper planning from the outset. In a broader sense, whether it is summative, formative or developmental, evaluation can bring the self-awareness and flexibility needed to ensure an intervention succeeds.

One example of evaluation comes from 2017, when the Canada Revenue Agency experimented with behavioural techniques by sending different “nudging” letters to Canadians who might not be reporting their income from underground economy activity. Ultimately, the experiment suggested that this particular intervention may not be the most effective approach to influencing reporting behaviour. From an experimentation angle, this is actually a good news story for both government and taxpayers: the experiment cost less than $20,000, and the agency gained valuable information about the behaviour of a particular group.

Experimentation brings together a wide range of skills and professions — evaluators, policy analysts, data scientists, and researchers — who must all learn to work with each other and with each other’s methodological strengths and quirks. As governments begin to engage with policy experimentation, they must build capacity in many areas, including experiment design (the way the actual experiment is structured); ethical screening (which dictates what is morally acceptable); and statistical analysis.
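As a rough illustration of that statistical analysis capacity, the sketch below runs a simple two-proportion z-test on a hypothetical nudge-letter trial of the kind described above, comparing response rates between a letter group and a control group. All figures are invented for the example; they are not the Canada Revenue Agency’s results, and a real analysis would involve far more careful design and review.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical illustration of a basic two-group statistical analysis, of the
# kind an experimentation team might run on a nudge-letter trial. The counts
# below are made up; they are not the CRA's actual results.
letter_group = {"n": 5000, "responded": 410}   # received the nudge letter
control_group = {"n": 5000, "responded": 385}  # received no letter

p1 = letter_group["responded"] / letter_group["n"]
p2 = control_group["responded"] / control_group["n"]

# Two-proportion z-test: is the difference in response rates larger than
# we would expect from chance alone?
p_pooled = (letter_group["responded"] + control_group["responded"]) / (
    letter_group["n"] + control_group["n"]
)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / letter_group["n"] + 1 / control_group["n"]))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Letter group response rate:  {p1:.1%}")
print(f"Control group response rate: {p2:.1%}")
print(f"z = {z:.2f}, p-value = {p_value:.3f}")
```

Even a null or inconclusive result from this kind of test is informative: it tells a department what not to scale up, at a fraction of the cost of a full rollout.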

Policy experimentation itself could be said to be currently in the Research stage in the federal public service, with the hopes that it will show enough promise to move to Options and Implementation at a later date. Yet there is no question that experimentation is still emerging as a distinctive public policy sub-field, with a new vocabulary and toolkit.

Dan Monafu
Dan Monafu is an entrepreneur, community builder, and federal public servant. He has spent the past five years trying to understand the emerging fields of policy innovation, and now experimentation.
Sarah Chan
Sarah Chan has seven years’ experience in the federal public service and has progressively moved from traditional strategic policy functions towards supporting innovative hot-spots and systems striving to make government better.
Sean Turnbull
Sean Turnbull is a Government of Canada “free agent,” focusing on policy innovation and experimentation. Currently posted to the Treasury Board Secretariat, he is a passionate advocate for increased rigor in government decision-making.

You may reproduce this Policy Options article online or in a print periodical, under a Creative Commons Attribution licence.
