In an effort to bring innovations to its immigration and refugee system, Canada has begun using automated decision-making to help make determinations about people’s applications.

A report released in September 2018 by the University of Toronto’s International Human Rights Program and the Citizen Lab at the Munk School of Global Affairs and Public Policy finds that Canada is experimenting with using artificial intelligence (AI) to augment and replace human decision-makers in its immigration and refugee system. This experimentation has profound implications for people’s fundamental human rights.

Use of AI in immigration and refugee decisions threatens to create a laboratory for high-risk experiments within an already highly discretionary system. Vulnerable and under-resourced communities such as noncitizens often have access to less robust human rights protections and fewer resources with which to defend those rights. Adopting these technologies in an irresponsible manner may only serve to exacerbate these disparities.

The rollout of these technologies is not merely speculative: the Canadian government has been experimenting with their adoption in the immigration context since at least 2014. For example, the federal government has been developing a system of “predictive analytics” to automate certain activities currently conducted by immigration officials and to support the evaluation of some immigrant and visitor applications. The government has also quietly sought input from the private sector in a 2018 pilot project for an “Artificial Intelligence Solution” in immigration decision-making and assessments, including for applications on humanitarian and compassionate grounds and applications for Pre-Removal Risk Assessment. These two application categories are often used as a last resort by people fleeing violence and war who wish to remain in Canada. They are also highly discretionary, and the reasons for rejection are often opaque.

In an immigration system plagued by lengthy delays, protracted family separation and uncertain outcomes, the use of these new technologies seems exciting and even necessary. However, without proper oversight mechanisms and accountability measures, the use of AI can lead to serious breaches of internationally and domestically protected human rights, in the form of bias or discrimination; privacy violations; and issues with due process and procedural fairness, such as the right to a fair and impartial decision-maker and the right to appeal a decision. These rights are protected internationally by instruments that Canada has ratified, such as the United Nations Convention Relating to the Status of Refugees and the International Covenant on Civil and Political Rights, and domestically by the Canadian Charter of Rights and Freedoms and provincial human rights legislation.

We already know that algorithms make mistakes. In the United Kingdom, for example, roughly 7,000 students were wrongfully deported after an algorithm mistakenly accused them of cheating on a language test. Algorithms also discriminate; they are by no means neutral. They have a particularly poor track record on race and gender, equating racialized communities with higher risks of recidivism, or reinforcing gender stereotypes by automatically associating words such as “woman” and “kitchen.”
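This kind of bias is easy to demonstrate. What follows is a minimal sketch (ours, not the report’s) of the word-embedding probes behind the “woman”/“kitchen” finding, assuming the open-source gensim library and its pre-trained Google News vectors; the exact outputs vary by model, but researchers have documented stereotyped analogy completions of precisely this kind.

```python
# Minimal sketch: probing gender associations in off-the-shelf word
# embeddings. Assumes the gensim library; the Google News model is a
# real gensim download (~1.6 GB).
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # pre-trained word2vec

# Analogy probe: "man is to <occupation> as woman is to ?"
# Stereotyped associations surface even though no rule was programmed in;
# the model simply reflects patterns in the news text it was trained on.
for occupation in ["doctor", "programmer", "boss"]:
    neighbours = vectors.most_similar(
        positive=["woman", occupation], negative=["man"], topn=3
    )
    print(occupation, "->", [word for word, _ in neighbours])
```

No one told the model to link gender and occupation; the association is absorbed from the training text, which is exactly why such systems can quietly reproduce discrimination at scale.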

Indeed, the potential impact of these systems on individuals’ physical safety, human rights and livelihoods is far-reaching. Bias, error or system failure can result in irreparable harm to individuals and their families. For people navigating Canada’s immigration system, extensive delay, substantial financial cost, interrupted work or studies, detention (often for months or years at a time), prolonged family separation and deportation are all possibilities. For refugee claimants, a claim rejected in error can mean persecution on the basis of an individual’s “race, religion, nationality, membership in a particular social group, or political opinion,” as described in the UN refugee convention. Error or bias in deciding their applications for protection may expose them to torture, cruel and inhumane treatment or punishment, or risks to life.

As a result, immigration and refugee law sits at an uncomfortable legal nexus: the impact on the rights and interests of individuals is often very significant, even where the degree of deference is high and the procedural safeguards are weak. There is also a serious lack of clarity about how courts will interpret administrative law principles like natural justice, procedural fairness and standard of review where an automated decision system is concerned.

Before Canada commits to the use of AI in immigration and refugee decision-making, there is a pressing need for research and analysis that responds to the Canadian government’s express intention to pursue greater adoption of these technologies. As these systems become increasingly normalized and integrated, it is crucial that choices about their adoption be made in a transparent, accountable, fair and rights-respecting manner. Canadian academia and civil society must engage on this issue.

Ottawa should establish an independent, arm’s-length body to engage in all aspects of oversight and review for all automated decision-making systems used by the federal government, making all current and future uses of AI public. Ottawa should also create a task force that brings key government stakeholders together with people from academia and civil society to better understand the current and prospective impacts of automated decision-making technologies on human rights and the public interest more broadly.

The global experiences of migrants and refugees represent a grave humanitarian crisis. Even well-intentioned policy-makers are sometimes too eager to see new technologies as a quick fix for tremendously complex and intractable policy problems such as migration. Artificial intelligence, machine learning, predictive analytics and automated decision-making may all seem promising.

Technology also travels. Whether in the private or public sector, a country’s decision to implement particular technologies can set an example for other countries to follow. Canada has a unique opportunity to develop international standards that regulate the use of these technologies in accordance with domestic and international human rights obligations. It is particularly important to set a clear example for countries with more problematic human rights records and weaker rule of law, as insufficient ethical standards and weak accounting for human rights impacts can create a slippery slope internationally. Critical, empirical and rights-oriented research into the use of AI should serve not only as an important counterbalance to stopgap responses or technological solutionism but as the central starting point from which to assess whether such technological approaches are appropriate to begin with.

The challenge, then, is not to use new technology to entrench old problems, but to seize this opportunity to imagine and design systems that are more transparent, equitable and just.




Petra Molnar
Petra Molnar is a lawyer and researcher at the International Human Rights Program at the University of Toronto Faculty of Law. She is the co-author (with Lex Gill) of "Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada's Immigration and Refugee System."

You may reproduce this Policy Options article online or in a print periodical, under a Creative Commons Attribution licence.
