(French version available here)

Hardly a week goes by without the potential and risks of artificial intelligence (AI) hitting the headlines.

The technology has been infusing our daily lives for some time through the algorithms of our smartphone applications, but the public release of ChatGPT, the large language model, came as a shockwave and prompted a collective, widespread realization of the many social concerns raised by AI.

The law has not been spared. The attention generated by the phenomenon has highlighted a number of legal issues likely to characterize the deployment of artificial intelligence: violations of privacy and personal data, algorithmic bias, intellectual property infringement and the risk of misinformation, to name but a few.

There is also the question of use in a professional context, whether in the delegation of certain tasks or in the reshaping of the labour market that can be expected. Consider, too, breaches of trade secrets by employees who are careless in their use of generative AI systems and reveal their company’s contracts or other confidential information. One example that springs to mind is using a tool like ChatGPT to seek answers or generate content based on facts or situations about people that are likely to be confidential.

Just like the Wild West

The opacity of AI has intensified with the abandonment of openness for GPT-4 (access to which is restricted to subscribers), even as the model is continually fed and trained by its users through massive amounts of data and iterative corrections. Against this backdrop, fears of an artificial intelligence that is totally out of control – at least for end users – are growing.

A hundred or so experts have called for a global pause in AI research so that those involved take responsibility for their work and develop a better framework for it. One of the pioneers of machine learning in Canada, Geoffrey Hinton, also resigned from Google so that he could speak freely about his concerns.

It must be said that ChatGPT was deployed around the world without any prior control or authorization whatsoever. The same is true of any application incorporating AI, to the point that some speak of a technological “Wild West” to describe the current situation.

The law is too fragmented

However, AI is not exempt from the law already in force. The difficulty is that regulating AI requires recourse to several branches of law and to rules falling under either provincial or federal jurisdiction, which complicates enforcement or renders it ineffective.

For example, privacy can be protected in part by federal or provincial legislation on personal information. These laws govern the vast amounts of data that AI needs to function, data that may identify individuals. Thus, an algorithm designed to detect high-school dropouts, deployed by a Quebec school service centre, was criticized by the Quebec Access to Information Commission for failing to comply with the (new) provisions of the law governing access to documents held by public bodies and the protection of personal information.

In the case of deepfakes, the damage caused by harmful or even malicious uses of the technology includes defamation, identity theft, invasion of privacy and copyright infringement. Canadian law offers a range of remedies here, too, and provisions of the Criminal Code could apply in a pornographic context, particularly child pornography. Yet beyond this sparse legal response, detecting deepfakes is a challenge in itself.

The law is insufficient

Even where the right legal avenue can be found, the law will not necessarily be suited or sufficient to deal with all the issues AI raises. The law also lags behind new technological uses: content-generating AI – such as ChatGPT for text or DALL-E for images – is shaking up copyright and the creative ecosystem.

The application of the law may also be compromised for procedural or evidentiary reasons. The rules of contractual and extra-contractual liability can be invoked, in the event of damage, to try to impose obligations on companies that market AI systems, but success is far from guaranteed. The multiplicity of players and the complexity of the value chain – the training, design, adaptation and small- or large-scale deployment of AI systems, not to mention the supply of data – are likely to make it endlessly difficult to identify those responsible and determine their share of liability.


The fundamental rights guaranteed by the Canadian Charter of Rights and Freedoms may also play a role in the oversight of AI. They include, in particular, the rights to privacy, equality and dignity, the freedoms of expression, opinion and peaceful assembly, and procedural guarantees such as access to justice and the adversarial principle. For example, would the use of facial-recognition drones undermine the right to demonstrate?


While the diversity of rights guaranteed by the Charter opens up various avenues of recourse, availing oneself of them is not easy. First, the Charter cannot be invoked between private parties (although the Quebec Charter of Human Rights and Freedoms can). Second, the constitutional weight of these rights sits uneasily with technological tools whose operation is often obscure. In particular, it will often be impossible to prove that complex and opaque algorithmic systems have infringed fundamental rights.

It is therefore not enough to point to the Canadian Charter to ensure that people’s rights and freedoms are effectively safeguarded in the face of AI’s deployment. The way harm occurs shows that it is often too late to act after the fact, and that victims of algorithmic discrimination, for example, cannot be restored to their previous state. Here, the law fails to protect individuals.

As a result, although AI does not face a legal vacuum, the existing rules do not adequately address the main risks this technology generates. Tellingly, a number of legislators around the world, including in Canada, are considering dedicated legal frameworks to ensure compliance with the principles of transparency, fairness, security and reliability, as well as with the laws in force, all under human control.

This means getting a grip on the technology and imposing obligations on its design, training, validation, deployment and end use. Such a framework must be comprehensive, going far beyond piecemeal, scattered legal responses to particular harms suffered by individuals and the public. In short, the time for AI-specific legislation has well and truly come.


Céline Castets-Renard
Céline Castets-Renard is a professor in the faculty of civil law at the University of Ottawa and holds the research chair on accountable artificial intelligence in a global context. X: @CastetsRenard
Anne-Sophie Hulin
Anne-Sophie Hulin is a professor in the faculty of law at the Université de Sherbrooke and holder of the chair on artificial intelligence and social justice (Abeona Foundation, ENS-PSL, OBVIA).


