(French version available here)
Policymaking should be a society-wide effort, including elected officials, government employees, academics, business leaders, civil society groups and individuals. In theory, each of us should be able to participate.
These days, a lot of people are attending conferences on artificial intelligence (AI) around Canada and the world, and the topic is getting increasing media attention. As a society, we are discussing policy and science, and trying to figure out what to do with all this new technology and its new risks.
Hundreds of jurisdictions are creating policies and regulations, as well as modifying existing laws, to manage this new technology and its risks.
The fields of interest represented are vast. They cover education, big business, health, justice systems, law, engineering, media, human trafficking, discrimination, bias, national identities and, to a lesser but growing extent, diversity. Artists are also making themselves heard.
These are all important points of view, but there are a lot of absent voices. We need to include them to learn what they think, need or intend, because policymakers cannot, and should not, make decisions about complex and important issues without hearing all points of view.
What we may be short on in AI policymaking is subsidiarity, a legal principle included in most state-level legal systems and large institutions. It means that decisions should be made by the people who will be affected by them.
AI is complex and far-reaching. It's too much for only a few groups or experts to manage. That's the nature of society-wide situations.
In Quebec, we have always shown a strong spirit of subsidiarity, which has shaped our laws at all levels. Why shouldn't we apply the same philosophy to the discussion of future technologies?
The missing voices
Who, then, is missing (or almost missing) from Canadian and world AI policymaking conversations? Who seems to be quieter than their numbers or needs would imply?
To begin, as in other society-wide conversations, groups that fall within the mandate of diversity initiatives are underrepresented. Though they are organizing to be heard, they are not yet proportionally represented at AI policymaking conferences.
For example, traditionally marginalized groups such as immigrants and the LGBTQ communities continue to be marginalized, if present at all. Women in AI are underrepresented and paid less.
People with disabilities, older people and those who live far from where decisions are made may also have a hard time joining these conversations.
Socially marginalized groups such as young people, people in poverty, prisoners, temporary foreign workers and the global AI workforce are largely voiceless in AI conferences and the news, even though some do not have a right to vote and all are already being affected by automated decision-making.
Though there are many land acknowledgments, there is no discernible integration of Indigenous views of ownership and power, and time-tested tools for reaching collective decisions in the face of collective risks are being left unused. Given that Canada is home to more than 700 First Nations, an Indigenous view of what AI is and how it can be of use to all communities is needed.
In addition, non-human life is not represented in these conversations. What rights do other mammals have? Or fish? How might AI developments help or hinder their thriving on Earth? Groups have organized to represent these interests in climate conversations, and they should be invited to AI conversations, too.
Another omission is small businesses. They don't have the resources to attend international events, get to know the people involved or keep track of consultation opportunities. This should be taken into consideration.
At expensive, catered events, academics present to audiences of government and corporate regulators while small businesses focus on sustaining operations. Sixty-eight per cent of Canadians in the private sector work in small businesses. Their voices matter, and big business does not represent the interests of small businesses.
Also unheard are military leaders and strategists, police and private-security companies, whose absence is in some ways more worrisome. Though there are some private conferences with defence contractors, our security forces are almost completely absent from public conversations on AI. They may be in the audience, but they are only rarely on stage or participating in panels.
Yet by some indications, media reporting lags far behind actual military applications.
For example, it's not well known that AI robots are shooting in Ukraine without human direction, that autonomous four-legged robots are patrolling borders today to hunt refugees, that AI can assist governments and criminals in making bioweapons, or that drone swarms, including miniaturized ones, are in use in conflict zones and under further development. Canada has shown some signs of support, but many governments have repeatedly refused to sign non-proliferation treaties for autonomous weapons, or so-called "slaughterbots."
Some Canadian researchers, including Geoffrey Hinton and Yoshua Bengio, have specifically asked Canada to ban the fast-moving militarization of AI. Researcher Petra Molnar has traced historical and current tech experimentation in contested spaces such as war zones, prisons and borders crossed by migrants.
Private-security companies are an industry ripe for testing dangerous tech. They may be able to stay out of the conversation because people don't know how pervasive they've become in society.
Canada has twice as many private security personnel as police officers. They are rolling out a wide variety of AI apps in businesses and institutions right now. Perhaps security-company tech representatives avoid conferences precisely to escape critical scrutiny.
These people and groups are disproportionately missing from AI policy conferences, including the biggest, such as the 2023 world AI summits in Montreal and Amsterdam, the 2023 U.S. summits in Las Vegas and New York, and the 2023 U.K. safety summit in London. Smaller conferences tend to focus even more on a small number of topics and groups.
Also missing from the conversation are whole topics. For example, we need to talk about the class of potentially catastrophic risks this tech poses.
To start with: the tech doesn't have to become sentient or be alive for these risks to arise. Yet, with few exceptions (like the U.K. safety summit), there's a trend of silencing this discussion by invoking Hollywood-induced hysteria or tech-company subterfuge. Both phenomena exist, but focusing on those narratives sidelines well-informed and concerned people such as scientists and special-interest groups.
Tech giants and criminals can benefit from lax regulation and from enforcement that primarily considers market effects. The field of strategic management is clear on this point: potentially grave risks are best managed ahead of time with mitigation measures. Because mitigation is costly, there is often political and economic pressure to avoid it. It's nonetheless necessary for AI.
How can we tell who else might be missing?
There are many ways. Canada has a particularly full toolkit: consultation practices, town squares, school initiatives to hear youth, near-universal internet access for public forums... the list continues.
A specific thing we can do on AI is to expedite visas for foreign specialists who spend their time and energy connecting with us on these important issues.
For example, African delegates at the Shaping AI for Just Futures conference in Ottawa in October were attending in part to explain that our supply chains start and end in their countries, where the minerals are extracted, where many AI systems are moderated and where second-hand tech markets thrive, at a price.
Regulations alone aren't enough to protect people. We need Canadian regulation to control the abuses of Canadian corporations abroad. Such abuses are reprehensible, yet it's nearly impossible to imagine what is not brought to our attention.
For another example, we could provide meaningful, easy-to-access financial support so that small businesses can attend policy conferences in their field.
It is the role of conference organizers and policymakers to ensure that everyone with a stake is heard, so that the full gamut of needs and concerns about AI reaches decision-makers.
There is a richness to this process. Consider what we might learn from historians of techno-social paradigm shifts, from climate activists who have been working on policy within these same systems, or from specialists on the Kaianere'kó:wa, the Haudenosaunee Great Law of Peace, about how to govern by consensus on big questions.
The main solution is practical: organizers and policymakers need to be humble, to listen and to bring everyone to the table.