In April, the federal government sent industry a request for information to determine where artificial intelligence (AI) could be used in the immigration system for legal research, prediction and trend analysis. The type of AI to be employed here is machine learning, in which algorithms are trained on large volumes of data to make predictions within a particular context. The current backlog of immigration applications leaves much room for solutions that could improve the efficiency of case processing, but Canadians should be concerned about the vulnerability of the groups targeted in this pilot project and about how the use of these technologies might lead to human rights violations.

An algorithmic mistake that holds up a bank loan is frustrating enough, but in immigration screening a miscalculation could have devastating consequences. The potential for error is especially concerning because of the nature of the two application categories the government has selected for the pilot project: requests for consideration on humanitarian and compassionate grounds, and applications for Pre-Removal Risk Assessment. In the former category of cases, officials consider an applicant’s connections with Canada and the best interests of any children involved. In the latter category, a decision must be made about the danger that would confront the applicant if they were returned to their home country. In some of these cases, assessing whether someone holds political opinions for which they would be persecuted could be a crucial component. Given how challenging it is for current algorithmic methods to extract meaning and intent from human statements, it is unlikely that AI could be trusted to make such a judgment reliably. An error here could lead to someone being sent back to imprisonment or torture.

Moreover, if an inadequately designed algorithm results in decisions that infringe upon rights or amplify discrimination, people in these categories could have less capacity than other applicants to respond with a legal challenge. They may face financial constraints if they’re fleeing a dangerous regime, as well as cultural and language barriers.


Because of the complexity of these decisions and the stakes involved, the government must think carefully about which parts of the screening process can be automated. Decision-makers need to take extreme care to ensure that machine learning techniques are employed ethically and with respect for human rights. We have several recommendations for how this can be done.

First, we suggest that the federal government adopt some best practices from the European Union’s General Data Protection Regulation (GDPR). The GDPR has expanded individual rights with regard to the collection and processing of personal data. Article 22 gives individuals the right not to be subject to decisions based solely on automated processing and to contest such decisions, including the right to have a human review them. The Canadian government should consider a similar expansion of rights for individuals whose immigration applications are decided by, or informed by, automated methods. In addition, it must ensure that the vulnerable groups being targeted are able to exercise those rights.

Second, the government must think carefully about what kinds of transparency are needed, for whom, and how greater transparency might create new risks. The immigration process is already complex and opaque, and with added automation, it may become more difficult to verify that these important decisions are being made in fair and thorough ways. The government’s request for information asks for input from industry on ensuring sufficient transparency so that AI decisions can be audited. In the context of immigration screening, we argue that a spectrum of transparency is needed because there are multiple parties with different interests and rights to information.

If the government were to reveal to everyone exactly how these algorithms work, there could be adverse consequences. A fully transparent AI decision process would open doors for people who want to exploit the system, including human traffickers. They could game the algorithm, for example, by observing the keywords and phrases that the AI system flags as markers of acceptability and inserting those words into immigration applications. Job seekers already do something similar, using keywords strategically to get a résumé in front of human eyes. One possible mechanism for oversight in the case of immigration would be a neutral regulatory body that would be given the full details of how the algorithm operates but would reveal only case-specific details to applicants and partial details to other relevant stakeholders.

Finally, the government needs to get broader input when designing this proposed use of AI. Requesting solutions from industry alone will tell only part of the story. The government should also draw on expertise from the country’s three leading AI research institutes, in Edmonton, Montreal and Toronto, as well as from two new centres focused specifically on AI ethics: the University of Toronto’s Ethics of AI Lab and the Montreal AI Ethics Institute. Another group whose input should be included is immigration applicants themselves. Developers and policy-makers have a responsibility to understand the context for which they are developing solutions, and bringing these perspectives into the design process can help bridge empathy gaps. An example of how users’ first-hand knowledge of a process can yield helpful tools is the recently launched chatbot Destin, which was designed by immigrants to help guide applicants through the Canadian immigration process.

The application of AI to immigration screening is promising: applications could be processed faster, with less human bias and at lower cost. But care must be taken with implementation. Canada has been taking a considered and strategic approach to the use of AI, as evidenced by the Pan-Canadian Artificial Intelligence Strategy, a major investment by the federal government that includes a focus on developing global thought leadership on the ethical and societal implications of advances in AI. We encourage the government to continue this thoughtful approach, with an emphasis on human rights, as it guides the use of AI in immigration.

Photo: Joseph Sarraf, left, raises his hand as he swears the oath of Canadian Citizenship along with 19 others during a citizenship ceremony at the Vanier Sugar Shack in Ottawa on April 11, 2018. THE CANADIAN PRESS/Justin Tang



Diana Robinson
Diana Robinson is an MBA candidate at the Cambridge Judge Business School and a visiting student at the Leverhulme Centre for the Future of Intelligence, focusing her research on AI ethics, strategy and policy.
Karina Vold
Karina Vold, PhD, is a research associate at the Leverhulme Centre for the Future of Intelligence, a research fellow in the Faculty of Philosophy at the University of Cambridge, and a Canada-UK Fellow for Innovation and Entrepreneurship.
