Recently, at a time when fear and hype around artificial intelligence (AI) were already running high, another notable voice added to the alarm over these rapidly emerging technologies. One of the world’s most influential AI scientists, the University of Toronto’s Geoffrey Hinton, announced he was parting ways with Google so he could add his voice to the chorus of warnings about the dangers these technologies pose. Whether or not one agrees with his perspective, the move was telling.

You can be forgiven if you’re experiencing a feeling of déjà vu. Many of the arguments about the existential threat posed by AI and its promise to automate away entire sectors of human work have been heard before. These fears are often exaggerated, but technologies based on large language models (LLMs), such as ChatGPT, are changing how work is done in a number of fields. They have also created a backlash in ways that previous deployments of machine-learning technologies did not.

Some have called for immediate action to prevent the development of “AI that does not do what we want, and does not care for us.” Yet, there are plenty of potential harms that may arise from AI doing exactly what it was designed to do by corporations that are hopelessly short-sighted about the greater welfare of humanity. The risks posed by AI are often presented as an “alignment problem,” meaning the real issue is finding a way to “align” AI with human values or best interests. However, framing the potential dangers this way ignores one critical factor: AI is already aligned with a particular set of human values and interests – those of the profit-seeking corporations driving AI development.

An example of this came in 2021, when the publication of a paper warning of the harms already emerging from LLMs led Google to acrimoniously expel Timnit Gebru from the company and its Ethical AI team. More recently, Google and its competitors have been cutting back their AI ethics teams even as they charge ahead in response to ChatGPT.

According to Hinton, Google was initially cautious with these technologies because “it knew there could be bad consequences” if released to the public. Once Google was seen to be falling behind OpenAI, however, it “didn’t have really much choice” but to change its position, because “it is just inevitable in the capitalist system… that this stuff will be developed.”

Meanwhile, Yoshua Bengio, the scientific director of the Montreal Institute for Learning Algorithms (Mila) and co-winner (with Hinton) of the 2018 Turing Award, has echoed Hinton’s existential concerns and argued that we need immediate action to prohibit “AI-driven bots” from impersonating humans. In the longer term, to mitigate potential harms due to the rapid acceleration of AI development currently underway, Bengio has stated that “we should be open to the possibility of fairly different models for the social organization of our planet.”

However, all is not lost. We have many examples of government regulation that could help bring about greater alignment between AI development and the public interest. That said, Canada’s proposed Artificial Intelligence and Data Act (or AIDA, part of Bill C-27), currently before the Commons standing committee on industry and technology, does not address many of these issues.

As many have noted, a key problem with AIDA is that it leaves the details of how to govern AI to future regulations, whose drafting and enforcement will fall largely to Innovation, Science and Economic Development Canada (ISED), the agency that drafted the legislation without public consultation. ISED’s pro-industry mandate and commitment to promoting AI commercialization set up potentially conflicting functions when it drafts the regulations. AIDA already seems to be an outdated response – not because it fails to address existential risks of AI takeover, but because it would likely be unable to address many of the more immediate harms involving algorithmic systems.

In addition, AIDA’s focus on addressing individual harm excludes collective harms such as threats to democracy and the environment, and the reinforcement of existing inequalities. AIDA effectively regulates only “high impact” systems, while leaving this term undefined in the legislation. ISED’s companion document does set out some criteria it will use as part of its process for determining whether an AI system is “high-impact,” but these would probably exclude most of the pressing concerns around LLMs. The EU’s proposed Artificial Intelligence Act, which takes a related approach in classifying “high-risk” systems for additional regulation, has recently extended separate obligations for “foundation models” such as the LLMs underlying generative AI, to bring these into scope.

It is hard to see how legislation focused on high-impact individual harms would address the more diffuse harm of unattributed LLM-generated text, which is now being commercialized to draw clicks, serve ads and potentially misinform. An inherent characteristic of these systems is that they hallucinate outputs that seem plausible, whether or not they are true. Many recent proposals to regulate machine-learning models (including LLMs) argue for greater accountability in how these systems are “trained,” but AIDA would arguably impose such measures only in certain high-impact cases.

Finally, many of the most “high-impact” applications of AI are those carried out by governments, including immigration decisions, benefits claims, policing and military operations. Yet, AIDA applies specifically to the private sector. (A small number of federal government systems have received algorithmic impact assessments, which have their own limitations.)

In short, while many of the risks and existential harms around AI are overblown and serve mainly to fuel further hype for these technologies, there are actual risks and harms being produced by existing algorithmic systems. The solution is better regulation. Unfortunately, Canada’s approach through AIDA has been arguably undemocratic and will likely result in a permissive, pro-industry regulatory regime that will be unable to address many of these existing challenges. Rather than spending the coming years fleshing out this skeletal attempt at AI and data regulation, we should be heeding those who argue for a more fundamental rethink.

Any technological development with broad social impacts requires regulatory reform. Unfortunately, the Canadian government has unwisely bundled AIDA with reforms to outdated private-sector privacy law in C-27 rather than letting the two proceed on separate tracks. While those promoting AI hype and existential gloom look primarily to a speculative future, we need to start again with a policy approach that more effectively addresses our present circumstances.

Mike Zajko
Mike Zajko is an assistant professor in the department of history and sociology at the University of British Columbia’s Okanagan Campus, studying AI, algorithmic decision-making and public policy.

