Is a quick but imperfect solution preferable to a perfect solution that will take a long time? This is the dilemma facing legislators when it comes to artificial intelligence.

Part 3 of Bill C-27, tabled in June 2022, proposes framework legislation: the Artificial Intelligence and Data Act (AIDA). The ensuing debate highlights the tension between the urgency to act and the importance of putting in place a robust regime that reflects a social consensus on what a responsible framework for AI would look like.

More recently, opposition to AIDA has intensified. Some believe that the federal government is on the wrong track with the project, arguing it is not salvageable and should be rejected. The advice is to “do better, not faster.”

This reasoning is seductive but wrong.

Adapting, not reinventing the wheel

The idea that the risk of waiting is less than the risk of being wrong is based on two concerns.

The first concern relates to the substance of the governance framework: that is, the government’s ability to develop the standards of conduct that are not, by design, specified in the wording of AIDA but will instead be detailed in regulations. The second concern is that the regulations may not be adopted in a timely manner (or at all), and that the public consultations essential to the legitimacy of the process may not take place, may be too selective, or may not go far enough.

The justification for this approach is that a flexible framework is needed because of the dynamic and changing nature of AI. However, many are concerned that the government is not well-positioned to carry out this kind of regulatory oversight because AI is unlike any other sector that has been regulated in Canada, whether because of its complexity, its opacity or the accelerating pace at which it is evolving. In short, the argument goes, artificial intelligence outstrips our current ability to build a functional regime of binding rules.

This vision ignores the existing regulatory frameworks developed in areas that raise similar issues: ensuring a balance between public protection and economic growth, creating flexible and adaptable standards, encouraging compliance and setting out guidelines for accountability.

When it comes to regulating economic activities, Canada often uses a graduated scale of state responses to modulate the behaviour of private players, sometimes referred to as a “compliance continuum.” While there is no set form of “continuum,” this approach combines proactive incentives for companies to adopt good practices with sanctions of increasing severity for non-compliance with rules or violations of legal standards.

Such an approach is already applied in competition law through the Competition and Compliance Framework. In addition to education, information and advice measures for stakeholders, the framework provides for resolution measures (negotiation, consent agreements) before escalating to formal court proceedings, which may lead to civil, administrative, regulatory or criminal sanctions.

Although the rules for imposing liability are enshrined in law, the details of the standards and how they are applied and developed are defined either in regulatory texts or in guidelines. Technical guides on misleading advertising are a good example. Even if such guidelines do not have the force of law, they play a critical role in ensuring that the people and businesses targeted understand, in plain language and with supporting examples, what is expected of them.

There are also interesting regulatory models in the provinces. A relevant example of the regulation of a complex and evolving field is securities, which has similarities with AI: the constant emergence of new financial products means that regulation is inevitably playing catch-up with the latest innovations.

Apart from certain general prohibitions in legislation, the vast majority of standards to be met are set out in regulations or similar instruments, as would be the case with AI. In addition, the provincial authorities responsible for securities supervision collaborate on the application of pan-Canadian standards through a co-ordinating body, the Canadian Securities Administrators (CSA). The need to adapt to the inexhaustible demand for new methods of financing does not prevent the authorities from ensuring investor protection by adjusting existing rules or adopting new ones.

The common thread that stitches together such models is the existence of an agency responsible for enforcing the law and the regulatory regime, with the power to adopt either regulations or guidelines.

Dynamic, evolving regulations

While models exist to inspire us on how to develop and structure AI regulation, we must nevertheless recognize that AIDA has very limited content compared with other laws creating regulatory regimes in Canada. For example, as framework legislation, it is silent on the regulatory process that will follow its adoption.

This lack of detail fuels the second concern: whether the regulations will be adopted in good time and public consultations held. Ottawa’s publication in March 2023 of a companion document setting out a two-year timetable for drafting the regulations and holding public hearings has not reassured the skeptics.

This skepticism is all the more understandable in the Canadian context, where legislative and regulatory changes depend more on political or administrative will than on the desirability of making a change. It can be overcome by building mechanisms for periodic review and updating directly into the structure of the framework law. As well as creating a framework capable of responding to the continual evolution of AI, standardizing the use of gradual and periodic adjustments (say, every two to five years) will make it easier to incorporate improvements or corrections. It will also free us from the impossible burden of identifying the best way of doing things from the outset.

Waiting is no longer an option

Despite the many criticisms of AIDA and the companion document, this scaffolding can, with some modifications, serve as a first step toward establishing an AI governance framework. Waiting in order to do better risks allowing the current market to become frozen and dominated by powerful companies, reducing the options available for building robust governance. The actors’ and scriptwriters’ strike in the United States, in which guidelines governing the use of AI are a central issue, underlines the urgency of taking action.

Our failure to act so far has allowed AI to evolve in a regulatory and legal vacuum, ceding to business leaders the power to shape an AI “framework” to suit their commercial interests, without any discussion of the values underpinning these choices. Given the nature and extent of AI’s impact on human society to date, we have shown extraordinary collective naivety in allowing it to evolve at the whim of private economic interests. Every year that passes under such a system makes it harder to dismantle.

It is still possible to adjust our aim. With its AI law, the European Union is leading a global movement to build a governance architecture. So there is an opportunity here. It would be a shame to miss it for fear of not adopting the perfect law.

Jennifer Quaid
Jennifer Quaid is an associate professor and vice-dean research of civil law at the University of Ottawa. She is also a senior fellow at the Centre for International Governance Innovation and chair of the legal committee of Transparency International Canada.
