As Canada approaches its turn to lead the G7 in 2025, senior officials are beginning to consider how to construct a meaningful agenda with lasting effects.
The G7 – some of the world’s most advanced democracies and economies – meets every year to discuss global economic policies and other important issues.
Artificial intelligence (AI) was an essential element of both the Japanese and Italian G7 presidencies.
A call for global governance of AI, launched by G7 leaders at the 2023 Hiroshima summit, shaped subsequent work at the United Nations.
Prompted by the launch of new generative AI tools in late 2022, G7 leaders agreed the following year on international guiding principles and an international code of conduct to steer the global development of safe AI tools.
The UN secretary-general subsequently announced an AI advisory body with more than three dozen members from stakeholder groups around the world.
Its recommendations could set the stage for a global AI framework inspired by climate change strategies and other international efforts around powerful technologies.
The UN Summit of the Future, taking place during the UN General Assembly this month, will allow member states to debate how to harness the opportunities of AI while mitigating its risks.
Italy's 2024 G7 presidency has carved out a slightly different niche that emphasizes the equitable distribution of AI benefits. It focuses primarily on sustainable AI development in Africa: specifically, data collection and management, computing power, and attracting and retaining skilled talent.
How can Canada expand on the Japanese and Italian approaches while also forging ahead to address other concerns arising from the exponential growth of AI?
As a global leader in AI that is active in multiple forums, including the UN and the G7, Canada is well-positioned to lead in two critical areas: AI risk monitoring and AI for climate action.
AI risk monitoring
Yoshua Bengio, a Université de Montréal professor who heads Mila, the Quebec Artificial Intelligence Institute, was the lead author of a United Kingdom-commissioned report on AI safety.
That report underscored the need for improved mechanisms to assess and mitigate AI risks and called for continuous efforts from governments, academia and society to ensure AI is developed and used responsibly.
The question of how best to monitor AI risks is complex. As described in Bengio’s report, risks come in many forms.
Even if we look only at generative AI, there are myriad potentially harmful effects.
Discrimination can occur when AI tools produce different outputs for individuals based on their membership in a particular group. For example, an AI tool used to screen resumés might systematically reject female candidates.
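One way such differential outputs can be quantified, the kind of measurement a risk-monitoring regime would rely on, is to compare acceptance rates across groups. The sketch below is a minimal, hypothetical illustration of a demographic-parity-style check (all names and figures are invented, and this is only one of many possible fairness metrics):

```python
# A minimal, hypothetical sketch of measuring differential treatment:
# compare the rate at which an AI screening tool accepts candidates
# from different groups, and report the gap between groups.

def selection_rate(decisions):
    """Fraction of candidates accepted; decisions are 1 (accept) or 0 (reject)."""
    return sum(decisions) / len(decisions)

def parity_gap(outcomes_by_group):
    """Selection rate per group, and the largest gap between any two groups."""
    rates = {group: selection_rate(d) for group, d in outcomes_by_group.items()}
    return rates, max(rates.values()) - min(rates.values())

# Invented screening outcomes from a hypothetical resume-screening tool.
outcomes = {
    "women": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% accepted
    "men":   [1, 1, 0, 1, 0, 1, 1, 0],  # 62.5% accepted
}

rates, gap = parity_gap(outcomes)
print(rates)              # {'women': 0.25, 'men': 0.625}
print(f"gap: {gap:.3f}")  # gap: 0.375 -- a large gap flags potential discrimination
```

A real audit would go further, controlling for qualifications and testing whether the gap is statistically meaningful, but even this simple comparison shows that measuring discrimination requires agreed-upon metrics and access to outcome data, which is precisely what international monitoring efforts would need to standardize.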
There are also risks that AI-generated images sexualizing or stereotyping women or other groups will increase the possibility of gender-based violence or racially motivated hate acts.
There is also the environmental risk of generative AI, documented in detail by Montreal-based researcher Sasha Luccioni, who suggests that AI could accelerate climate change if it becomes as widely adopted globally as it is poised to be.
Finally, there is the immense potential effect on people’s relationships with themselves, their communities and society. Consider the new AI “friend necklace” that can be worn to keep individuals company, provide ideas for discussions on dates and interpret aspects of their daily lives.
These risks need to be measured if they are ultimately to be reduced. AI can be harnessed for sustainable development only if it is safe, and safety can be assessed only if risks and harms are measured and monitored.
Developing ways to measure AI risks and build international co-operation to reduce them is an immense undertaking, but some distinct efforts are being made.
The Organisation for Economic Co-operation and Development (OECD) has developed a database of AI incidents that compiles media reports, such as accidents involving self-driving cars. There have also been attempts to measure risk based on government policies.
The idea is that adopting AI strategies and legislation will lead to a safer AI sector.
AI for climate action
The second area a Canada-led G7 could target is how to harness AI for climate action.
There are many ways in which safe and responsible AI could help reduce carbon emissions, mitigate the effects of climate change on vulnerable populations and help countries most at risk adapt to what is to come.
At a recent conference in Bonn, researchers from around the world showcased opportunities in the fields of energy efficiency, agriculture and remote sensing.
In addition, the Technology Executive Committee, part of the UN Framework Convention on Climate Change, has developed a strategy for climate action to be discussed during the COP29 conference in Azerbaijan in November.
Focusing on this topic would also allow some of the harms of generative AI itself to be addressed, by pushing companies to prioritize reducing their carbon footprints and other environmental risks.
Canada is not only a leader in AI talent; it also has several dynamic AI hubs in Montreal, Toronto, Ottawa and Edmonton.
Federal and provincial governments have invested significantly over the past few years to advance AI in business and the public sector, creating research chairs at universities and providing spaces for policy reflection.
The country is therefore well-placed to lead the G7 in a deeper consideration of how to measure AI risks and how artificial intelligence can be used to help fight climate change.
In doing so, we can harness our growing AI expertise to address two of the most pressing issues the world faces today.