
How should Quebec regulate artificial intelligence (AI)? The Quebec Innovation Council recently tabled a report that appears to set out a framework for a provincial AI law, following a consultation process involving several hundred experts and what has been described as a public forum on AI.

We welcome this initiative because provincial legislation would enable Quebec to assert leadership in responsible AI. However, this potential legislation must not reproduce the shortcomings of the federal version, or worse, create new ones.

Ottawa’s recent consultation on a code of conduct for generative artificial intelligence systems was strongly criticized for lack of transparency. The absence of consultations for the initial version of the Artificial Intelligence and Data Act (AIDA) was also particularly worrying.

Quebec could avoid these mistakes at the provincial level by fulfilling some of the key objectives of a consultation, such as public education and democratic engagement.

That’s why we need to start pointing out the potential pitfalls of this framework now.

Let’s look first at how the Quebec Innovation Council’s report was shaped, starting with its public forum on AI, held Nov. 2. Far from providing room for debate, the forum presented an already familiar narrative about AI in Quebec and Canada, framed by a small number of researchers and practitioners: a panel of experts sorely lacking in diversity, chosen according to unclear criteria.

The forum offered food for thought on various perspectives related to the expected impacts of AI on society, including on work and the Quebec job market, and on the province’s place in the international AI landscape as a leader in the responsible development and deployment of AI. But there was no space for dialogue among citizens, civil society, experts, academic researchers and business people. Instead, it was a series of speeches interspersed with a few brief questions from the audience.

Promoting innovation while protecting the public

Quebec must avoid succumbing to the fear that too strict a framework for AI will hold back innovation. This rhetoric came up again and again during the discussions leading up to the Innovation Council’s report. Innovation should be unfettered, but not to the point where it becomes a threat to the public.

The fear of regulation heightens the risk of conflict, such as the recent one between the news media and tech giants. If the practices of the web giants had been better regulated from the outset, problems like the blocking of news on certain platforms might not have occurred.

Let’s take these elements as feedback to consider in the upcoming debates on AI regulation in Quebec. With algorithms already at our doorstep, let’s treat this as an opportunity to propose appropriate regulations and prevent harms now, rather than try to cure them later.

Focus on transparency and protecting rights

In the early days of generative AI, it is in the Quebec government’s interest to establish a framework adapted to present and future technological shifts, notably by setting out principles of governance and transparency for the data that feed algorithms.

Just as ingesting harmful food can seriously damage our health, feeding algorithms with biased data (or data whose use is unregulated) can be dangerous. Just think of the increase in inequalities or any other drift that could present a danger to citizens and hinder the proper functioning of democracy.


Just think of the discrimination already caused by AI systems that relied on pre-existing problematic data. One of the most striking cases is that of Amazon, whose recruitment algorithm unfairly penalized suitably qualified women. Another is COMPAS, a tool used by American courts to assess the probability that a defendant will reoffend, which has been shown to be systematically unfavorable to African-American defendants.

Risks to democracy lie in the increase in misinformation propagated by certain algorithms, which is difficult to control and known to be particularly dangerous, especially during election periods. Imagine the impact such systems could have on society if their use were to become even more widespread.

Let’s understand that while human biases stem from our moral values (which may, for example, be the fruit of our beliefs or the environment in which we evolve), the biases of AI systems stem mainly from the data their algorithms process. In a manner of speaking, these data are the foundation of these systems’ “moral values.” We should be able to analyze the data on which algorithms operate by opening up that data, where necessary.

As I suggested for federal Bill C-27 and AIDA, Quebec should also prohibit systems that may infringe on rights and freedoms, on principles of non-discrimination and the right to dignity. These systems should not be allowed to stand in the way of the values of equality and justice.

This applies to biometric recognition systems, the dangers of which were highlighted by the report of the Office of the Privacy Commissioner of Canada on the RCMP’s use of Clearview AI. We are also thinking of social rating systems for individuals, initiated by or on behalf of public authorities, such as the social credit system being tested in China, the emergence of which could restrict individual freedom and give rise to new forms of social inequality.

Preventing the ravages of deepfakes

Particular attention should be paid to the systems behind deepfakes.

Last year, the then-minister of Canadian Heritage, Pablo Rodriguez, appointed a panel of experts to consider drafting a bill on online harms, taking into account, among other things, deepfake photos and videos, disinformation and other software capable of spreading falsehoods. Last November, members of the panel called on the government to speed up the introduction of this “online harms” legislation, given the growing danger to Canadian children from privacy breaches and online harassment on the platforms they use every day.

This is all the more relevant given the recent case of explicit, doctored deepfake images of students at a Winnipeg school being posted on social media.

Although the recently introduced Bill C-63 finally addresses deepfakes of a sexual nature, particularly those involving children, it is curious that AIDA expressed no reservations about AI systems designed to create deepfakes, including those that could undermine democracy. AIDA does contain risk-related measures requiring the person responsible for a high-impact system to establish, in accordance with the regulations, measures to identify, assess and mitigate the risks of harm or biased results that may arise from the use of the system.

In this context, if Quebec were to be less permissive than the federal government toward these systems, by imposing sanctions specific to the creation or propagation of malicious deepfakes, or of any other ethically questionable algorithmic program capable of affecting human rights, it could establish itself as a leader in responsible AI. All the more so if Quebec brings government institutions within the scope of this oversight, unlike AIDA, which, alas, is currently limited to the private sector.

In short, only increased collaboration among Quebec’s various AI experts, in a multidisciplinary dynamic, will lead to a solid regulatory proposal in favor of responsible AI, whose potential could be recognized on an international scale. Quebec, where the influential Montreal Declaration for the Responsible Development of Artificial Intelligence was born, now has a key role to play.

Do you have something to say about the article you just read? Be part of the Policy Options discussion, and send in your own submission, or a letter to the editor. 
Lahcen Fatah is an ethicist of technology and a doctoral student in science, technology and society at the Centre interuniversitaire de recherche sur la science et la technologie. He is also a member of the board of directors of Nord Ouvert and teaches applied engineering ethics at Polytechnique Montréal.

You are welcome to republish this Policy Options article online or in print periodicals, under a Creative Commons/No Derivatives licence.
