In a short time, the Canadian government has taken important steps toward better AI governance. This progress was evident at the Student Symposium on AI and Human Rights, held in April by Global Affairs Canada (GAC) and the Canadian Institute for Advanced Research. Participants heard firsthand how the Canadian government is beginning to evaluate and consult on AI’s impact on human rights, specifically equality, privacy, accountability and freedom of expression. With these issues in mind, Canada still has much work to do to clarify the regulatory environment for artificial intelligence. Here are some observations from the event.

Government as a leader in creating inclusive spaces

The tech sector has long been criticized for being too white and too male, and this has led to technologies and policies that do not accurately represent the experiences of women and people of colour. The risk is that homogeneous design does not properly anticipate the real-world applications of technologies, creating AI that reflects the biases of the teams that worked on it.

The symposium featured a mix of students from different disciplines and backgrounds, with equal numbers of young men and women, mostly from Quebec and Ontario. I enrolled my graduate students from the Media Studies Program at Concordia University in Montreal to take part. Students prepared a three-page memorandum (on a template provided by Global Affairs Canada) and presented their findings at the symposium. Two groups formed: one around the implications of AI for the Canadian labour market, and one around how automation may increase race and gender discrimination.

From left to right: Concordia University media studies students Nina Morena, Courtney Blamey and Aurelia Talvela, presenting on gender and race discrimination in AI.

This symposium exemplifies how the government can lead by creating inclusive spaces. Although it may not be a leader in all cases, especially those involving the relationship between AI and Indigenous peoples, the support given to students to participate in this conference indicates a commitment to public participation. If there is to be proper democratic oversight of this new technology, this needs to continue.

A new generation of AI scholars?

When we started preparing for the symposium, my students, mostly women, were worried that they knew nothing about AI governance — a reminder that the rhetoric of advanced technology excludes many people from important political discussions. Siva Vaidhyanathan, director of the Center for Media and Citizenship at the University of Virginia, cautions that presumptions like these, about who interacts with technologies and who can claim expertise, mean that “we forge policies and design systems and devices that match those presumptions.”

Many students raised concerns about the lack of public knowledge in Canada about artificial intelligence. Instead, our discussions of AI have tended to use the language of Silicon Valley, with concepts like “disruption” and the “fourth industrial revolution.” The hype around AI can foreclose the usual policy debates, making AI seem like a complete disruption rather than part of a much larger pattern of technological change in society.

That these students could go from being worried about their lack of expertise to being knowledgeable symposium participants at the forefront of AI governance is a potent reminder that the lack of inclusion in the conversations around AI is more a question of engagement than of ability.

The capacity of these students to learn and advise on AI supports calls for new researchers trained in assessing the impacts of algorithms. Understanding how AI will impact society requires interdisciplinary research, especially in the social sciences and humanities.

Concentration in the AI industry

AI is not built in a vacuum, nor is our understanding of AI free of inaccuracies and biases. What’s missing from most policy discussions is an acknowledgement that AI is the key research area of major platform-based companies such as Google and Amazon. Yet these companies are not subjected to adequate public scrutiny, in spite of a long history of mistakes and abuses in developing their technology. Good AI governance must not repeat past policy missteps.

The growing concentration of power among the leaders in AI requires us to be aware that the future of AI is as much political as it is a matter of policy. The willingness of government to partner with and promote its industry contacts could impede its ability to hold these firms accountable.

Questioning the language of disruption

In their presentations, my students questioned how we talk about, describe and imagine the impacts of AI. Such representations are powerful. There is already a concern that too many AI discussions focus on the omnipotent AI-to-come, ignoring the mundane applications that are already here. My students found that much of the hype around AI sidesteps accountability under existing policy frameworks.

The professor and students of Concordia University’s Media Studies program, from left to right. Back row: Fenwick McKelvey, Courtney Blamey, Bradley Peppinck (in far back), Aurelia Talvela, Margaret MacDonald; front row: Anna Nguyen, Shanae Blaquiere, Nina Morena.

AI policy tends to try to develop new paradigms rather than repurpose existing frameworks. Treating AI as an unprecedented novelty impedes the government’s ability to create clear guidance about AI development and implementation. Why do we need new guidelines for “ethical AI” when we already have the Charter of Rights and Freedoms? Canada could easily advance its approach to AI governance by interpreting AI research and development through the lens of the Charter.

In short, the “innovation agenda” (to borrow a popular government phrase) cannot continue to ignore the crucial step of interpreting the design and deployment of machine learning systems through effective and already existing human rights frameworks and labour laws.

Global AI governance, locally

The progress made by key government agencies suggests that the issues raised here will be addressed soon, but for now there remains a global gap in AI leadership. On May 25, 2018, the European Union’s General Data Protection Regulation (GDPR) comes into effect, and its impacts will be global.

The rush to treat the GDPR as a global data framework indicates that there is a first-mover advantage in developing new digital policy. Modifying existing frameworks to suit AI and considering the experiences of diverse populations are important if we are to build AI policies that serve all Canadians.

If we continue at the current pace to implement humane, just and inclusive AI governance, Canada can be a global AI leader.

This article is part of the Ethical and Social Dimensions of AI special feature.

Photo: Shutterstock, by enzozo.


Fenwick McKelvey is an associate professor in the Department of Communication Studies at Concordia University. He is co-director of the Applied AI Institute and the Machine Agencies Working Group at the Milieux Institute. Twitter @mckelveyf

You can reproduce this Policy Options article online or in a print periodical, under a Creative Commons Attribution licence.
