Recently, at a time when fear and hype around artificial intelligence (AI) were already running high, another notable voice added further alarm regarding these rapidly emerging technologies. One of the world's most influential AI scientists, the University of Toronto's Geoffrey Hinton, announced he was parting ways with Google so he could add his voice to the chorus of warnings about the dangers these technologies pose. Whether or not one agrees with his perspective, the move was relevant and telling.
You can be forgiven if you're experiencing a feeling of déjà vu. Many of the arguments about the existential threat posed by AI and its promise to automate away entire sectors of human work have been heard before. These fears are often exaggerated, but technologies based on large language models (LLMs), such as ChatGPT, are changing how work is done in a number of fields. They have also created a backlash in ways that previous deployments of machine-learning technologies did not.
Some have called for immediate action to prevent the development of "AI that does not do what we want, and does not care for us." Yet, there are plenty of potential harms that may arise from AI doing exactly what it was designed to do by corporations that are hopelessly short-sighted about the greater welfare of humanity. The risks posed by AI are often presented as an "alignment problem," meaning the real issue is finding a way to "align" AI with human values or best interests. However, framing the potential dangers this way ignores one critical factor: AI is already aligned with a particular set of human values and interests, those of the profit-seeking corporations driving AI development.
An example of this came in 2021, when the publication of a paper warning of the harms already emerging from LLMs led Google to acrimoniously expel Timnit Gebru from the company and its Ethical AI team. More recently, Google and its competitors have been cutting back their AI ethics teams even as they charge ahead in response to ChatGPT.
According to Hinton, Google was initially cautious with these technologies because "it knew there could be bad consequences" if released to the public. Once Google was seen to be falling behind OpenAI, however, it "didn't have really much choice" but to change its position, because "it is just inevitable in the capitalist system ... that this stuff will be developed."
Meanwhile, Yoshua Bengio, the scientific director of the Montreal Institute for Learning Algorithms (Mila) and co-winner (with Hinton) of the 2018 Turing Award, has echoed Hinton's existential concerns and argued that we need immediate action to prohibit "AI-driven bots" from impersonating humans. In the longer term, to mitigate potential harms due to the rapid acceleration of AI development currently underway, Bengio has stated that "we should be open to the possibility of fairly different models for the social organization of our planet."
However, all is not lost. We have many examples of government regulation that could help bring about greater alignment between AI development and the public interest. That said, Canada's proposed Artificial Intelligence and Data Act (or AIDA, part of Bill C-27), currently before the Commons standing committee on industry and technology, does not address many of these issues.
As many have noted, a key problem with AIDA is that it leaves the details of how to govern AI to future regulations, whose drafting and enforcement will be largely up to Innovation, Science and Economic Development Canada (ISED), the agency that drafted the legislation without public consultation. ISED's pro-industry mandate and commitment to promoting AI commercialization set up potentially conflicting functions when it drafts the regulations. AIDA already seems to be an outdated response, not because it fails to address existential risks of AI takeover, but because it would likely be unable to address many of the more immediate examples of harms involving algorithmic systems.
In addition, AIDA's focus on addressing individual harm excludes collective harms such as threats to democracy and the environment, and the reinforcement of existing inequalities. AIDA effectively regulates only "high-impact" systems, while leaving this term undefined in the legislation. ISED's companion document does set out some criteria it will use as part of its process for determining whether an AI system is "high-impact," but these would probably exclude most of the pressing concerns around LLMs. The EU's proposed Artificial Intelligence Act, which takes a related approach in classifying "high-risk" systems for additional regulation, has recently added separate obligations for "foundation models" such as the LLMs underlying generative AI, to bring these into scope.
It is hard to see how legislation focused on high-impact individual harms would address the more diffuse harm of unattributed LLM-generated text, which is now being commercialized to draw clicks, serve ads and potentially misinform. An inherent characteristic of these systems is that they hallucinate outputs that seem plausible, whether or not they are true. Many recent proposals to regulate machine-learning models (including LLMs) argue for greater accountability in how these systems are "trained." Arguably, AIDA would impose such measures only in certain high-impact cases, if at all.
Finally, many of the most "high-impact" applications of AI are those carried out by governments, including immigration decisions, benefits claims, policing and military operations. Yet, AIDA applies specifically to the private sector. (A small number of federal government systems have received algorithmic impact assessments, which have their own limitations.)
In short, while many of the risks and existential harms around AI are overblown and serve mainly to fuel further hype for these technologies, there are actual risks and harms being produced by existing algorithmic systems. The solution is better regulation. Unfortunately, Canada's approach through AIDA has been arguably undemocratic and will likely result in a permissive, pro-industry regulatory regime that will be unable to address many of these existing challenges. Rather than spending the coming years fleshing out this skeletal attempt at AI and data regulation, we should be heeding those who argue for a more fundamental rethink.
Any technological development with broad social impacts requires regulatory reform. Unfortunately, the Canadian government has unwisely bundled AIDA with reforms to outdated private-sector privacy law in C-27 instead of ensuring the two proceed on separate tracks. While those promoting AI hype and existential gloom look primarily to a speculative future, we need to start again with a policy approach that more effectively addresses our present circumstances.