Policymakers examining artificial intelligence (AI) applications are weighing what we as a society want to achieve and what we need to protect. Yet it is not widely understood that AI applications require intensive natural resources, labour and capital.

Decision-makers – including citizens, the civil service and politicians – should not be misled about AI regulation and its potential consequences. So how can policymakers acquire the knowledge they need?

In terms of timing, we haven’t missed the boat on regulating AI. The airline industry took more than 100 years to reach the safety standards we have today. If that history teaches anything, it is that regulation takes time.

The proposed Canadian Artificial Intelligence and Data Act (AIDA) is to be debated by a parliamentary committee this autumn. Discussions and committee work are also under way in several provinces and at other institutions. The proposal includes a framework for regulation over the coming years.

Still, there is urgency. AI applications being developed by industry touch on serious issues, including the harms of war, personalized health care, privacy protection and many more. The stakes are high.

Large-scale natural resources

On the natural-resources side, manufacturing AI components, supplying computing power and running applications are all large-scale, energy-intensive activities. Supplies of computer hardware cannot keep up with demand.

The carbon footprint of the tech sector is estimated at between 1.8 and 3.9 per cent of global greenhouse-gas emissions, and AI applications are far more energy- and carbon-intensive than typical computing. Power usage by data centres is expected to rise by a factor of 200 by 2028. And consider that one average ChatGPT conversation consumes about 500 ml of water, mostly to cool the thousands of servers that run it.

Business-led efficiency measures, while necessary, can’t be counted on to save us from overusing resources. Ethicist and former Google researcher Timnit Gebru and her colleagues have shown that many businesses are competing to build applications with ever-larger computing needs despite lower utility, higher costs and greater risks.

Decision-makers are also weighing present and future efforts to mitigate and adapt to climate change, but industrial interest groups have dominated climate regulation, and as a result it hasn’t worked as well as it could have.

The mass production of resource-hungry, specialized tech risks causing even more problems for the environment, yet AI regulation is being led by similar industry wish lists.

Author and activist Naomi Klein argues that we need to see past the hype and ensure that AI does no harm. As in climate policymaking, the challenges of AI regulation are primarily economic and political, not a matter of lacking technical tools or information.

Considering AI’s labour

Abuse of workers and communities in the mining of rare-earth materials for computing components is common. The focus in Canada is on growing AI talent, including researchers and data scientists, but predatory outsourcing of most of the work on AI is the norm. It runs from development to deployment and includes poor and coercive treatment of miners and factory labourers, extensive layoffs in the tech sector and the use of thousands of temporary contractors. Most AI models require thousands of human data annotators and integrators hired at cut rates and under working conditions that fall short of minimum labour standards for fair work (Fairwork Cloudwork Ratings 2023).

Regulators need to better assess who participates in building any proposed application and under what conditions those people are hired. This matters because the current situation isn’t socially sustainable, violates labour rights and is unethical by any standard.

Accessing financial capital

AI is expensive. In some jurisdictions, this has so far justified supporting only the largest institutions and companies in adopting AI. For example, $800 million of public funds to develop Quebec’s AI ecosystem churned out 3,000 high-paying jobs per year. But a Quebec program run through Invest-AI for implementation projects supports only businesses with more than $1 million in annual sales, excluding most micro and small companies from consideration. With few exceptions, the largest companies enjoy preferential access to support.

Giving preferential access to larger companies and institutions, rather than empowering small businesses and public organizations, confines decisions to the needs of those big players. As of 2021, only 3.7 per cent of Canadian firms had adopted AI.

Additionally, combining government-supported profit potential with a limited understanding of costs and risks is dangerous. Regulators must ensure that government funding is spent on new AI projects with input, consent and oversight from the public. The idea that only experts can discuss AI is short-sighted.

AI is multidisciplinary and not completely predictable. No one person or group can imagine all the potential costs and consequences of its uses. Education and participation of communities, labour groups and small businesses are crucial.

It’s important to listen to stakeholders to ensure the validity and pertinence of a proposed application. They need not be experts. Seniors can be asked how they feel about a robot assistant with a given error rate organizing their pills. Would offenders or victims choose automated sentencing, given that sentencing algorithms rate some offenders based on their neighbourhood demographics? The answers would speak to whether a proposed application is worth the intended natural-resource, labour and capital costs.

What policymakers should ask of any proposed AI application: Is it cost-effective in terms of resources? Have stakeholders been adequately heard? Have the risks been examined and mitigated? Have we accounted for future uncertainties? Is there good documentation of the data and a description of the model? Is it wanted? And, most importantly, is it necessary? Too often a business case for an application fails to address whether it is expected to improve on existing solutions.

A resource-focused analysis can help policymakers, society and the tech sector gauge how serious the stakes are in the AI applications they will be considering. That said, it is not possible to predict all the costs, trade-offs and risks.

In uncharted waters, it will be important to keep in mind the principles of precaution and humility.

Tammy Mackenzie
Tammy Mackenzie is the CEO and tech lead at Polyaula, a mom, a human rights advocate, an MBA in small and medium-sized enterprises and tech sustainability, and an independent PhD candidate working on institutions as systems, and the levers of power.
Kai-Hsin Hung
Kai-Hsin Hung is a Ph.D. candidate at HEC Montréal researching data, value chains, and work and a member of the Inter-university Research Centre on Globalization and Work.
