When global travel emerges from the constraints imposed by the pandemic, travellers may find themselves returning to border crossings that are unfamiliar. Facial recognition is rapidly becoming embedded in many airports around the world, transforming border crossings in a purported effort to increase efficiency and security.

But the technology is deeply problematic.

Facial recognition poses an insidious threat to human rights and is frequently biased, so that when errors inevitably occur, their brunt will unacceptably be borne by members of marginalized communities. Yet our research shows that facial recognition is becoming central to how travellers are processed when traversing increasingly automated airports around the world. Meanwhile, our current legal toolkit lacks the clarity necessary to check the more problematic aspects of this rapid technological evolution.

Accuracy and racial bias in automated border crossings

Despite improvements in recent years, facial recognition remains less accurate than other biometrics (fingerprints and iris scans, for example). Its accuracy has nonetheless been sufficient to inspire confidence in those who rely on its results over time.

This confidence is both overstated and difficult to dislodge. The most accurate facial recognition systems still yield thousands of errors on a daily basis when systematically applied to all international travellers. In more complex investigative contexts, errors are rarer but carry severe consequences — individuals have been falsely arrested and publicly accused of serious offences based on facial recognition errors. If facial recognition eventually becomes accepted proof of identity or nationality in asylum claims, errors could imperil the lives of refugees as well as undermine human rights obligations.
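The scale of these daily errors follows from simple base-rate arithmetic: even a very small error rate, multiplied across the enormous volume of international travel, produces large absolute numbers. The figures below are hypothetical assumptions chosen for illustration, not statistics from any particular system:

```python
# Base-rate arithmetic: a highly accurate face-matching system still
# produces many errors when applied to every traveller.
# Both figures below are hypothetical, for illustration only.

daily_travellers = 10_000_000  # assumed daily international travellers worldwide
error_rate = 0.005             # assumed 0.5% combined false match/non-match rate

daily_errors = daily_travellers * error_rate
print(f"{daily_errors:,.0f} erroneous outcomes per day")  # 50,000 per day
```

Halving the assumed error rate still leaves tens of thousands of travellers misidentified every day, which is why "99.5 per cent accurate" and "thousands of daily errors" are not in tension.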

Facial recognition also remains mired in racial biases that unevenly distribute the benefits and hazards of its adoption to the detriment of marginalized groups. The degree of racial bias varies by algorithm and context of use, but it can be quite pronounced — in some instances, population-wide accuracy levels can obscure error rates that are far higher for some demographic groups. For example, the error rate for Black women can be 10 or even 100 times higher than that for white men.
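The way a population-wide accuracy figure can hide a far higher subgroup error rate is a weighted-average effect. The group shares and error rates below are hypothetical assumptions, not measurements, but they show the mechanism:

```python
# How an aggregate error rate can mask a much higher rate for a
# demographic subgroup. All figures are hypothetical illustrations.

groups = {
    # group: (share of population, false-match error rate)
    "majority group": (0.90, 0.001),
    "minority group": (0.10, 0.010),  # 10x the majority group's rate
}

# The population-wide rate is the share-weighted average of group rates.
overall = sum(share * rate for share, rate in groups.values())
print(f"population-wide error rate: {overall:.4f}")  # 0.0019
```

Here the headline figure (0.19 per cent) sits close to the majority group's rate, while the minority group experiences errors ten times as often — a disparity invisible in the aggregate number.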

The harms of racially biased facial recognition algorithms extend well beyond the immediate impact of an error; they rest in their capacity to perpetuate prejudices and negative stereotypes. A referral to enhanced security screening may be routine for many travellers, but for those who have experienced a lifetime of racial profiling, a biased referral compounds that indignity and deepens mistrust of state investigative agencies. Automating this bias does nothing to alleviate its harmful impact.

These persistent inaccuracies and racial biases are all the more troubling in light of the transformative nature of the change envisioned. The objective is to automate much of the border crossing journey, and facial recognition is central to this vision. For adherents of this vision — who include government agencies and industry groups around the world — facial recognition would displace travel documents as the primary border identifier (“your face will be your passport” is a common refrain) and would eventually become the means by which travellers interact with an array of “smart” security gates, airline check-ins, baggage drop-offs and airport-wide sensors.

Along with travel documents, human discretion may also be displaced in this brave new world. A host of algorithmic tools seeks to automate everything from security risk assessments to customs processing and asylum eligibility determinations. Once embedded with facial recognition, airport infrastructure can “know” a traveller without human intervention and can directly apply the outcomes of algorithmic assessments. Yet these algorithmic tools have their own accuracy and racial bias challenges, compounding those inherent in facial recognition systems.

Inserting some human oversight as a Band-Aid on this automated ecosystem is helpful but will not solve the problem. Biometric matching and other algorithmic tools are often difficult for a human reviewer to refute. Automated decision-making is frequently opaque, lacking the reasoning a human would need to challenge its outcomes. Algorithmic determinations are also subject to “automation bias,” which can instill powerful cognitive deference to technological outcomes over time.

A pernicious threat to privacy and civil liberties at the border and beyond

Even cured of its inaccuracies and racial biases, facial recognition still poses an insidious threat to human rights and civil liberties, and our research has shown that systems adopted at borders are frequently repurposed.

Facial recognition provides a powerful surveillance tool that can reveal the identity of an individual from any live or historical image. When used online, facial recognition can locate pseudonymous profiles and link these to a known traveller on the basis of facial images alone. If integrated into live CCTV camera feeds, facial recognition can locate, track and identify anonymous individuals in real time. While border crossings have always been characterized by high levels of surveillance, airports from the United States to Japan to India are being inundated with hundreds of “touchpoints” that pervasively track travellers in their myriad interactions with airlines and border agencies.

Too often, facial recognition capabilities created at border crossings are ultimately repurposed to achieve numerous other objectives. In the European Union, a controversial border control facial recognition system can now be used for general law enforcement purposes. In an Australian proposal, a border system is set to form a central backbone for a national facial recognition capability that can be used for anything from general law enforcement to traffic safety. If the proposal becomes law, the system will even be available to private companies and government agencies seeking to verify their customers’ identities, creating the building blocks for a de facto national ID system.

In 2019, Canada and the Netherlands launched a joint pilot program at Montreal’s Trudeau Airport, Toronto’s Pearson Airport and Schiphol in Amsterdam. Through this “Known Traveller Digital Identity” initiative, travellers build rich “trust rating” profiles by interacting with various state agencies and private companies, including approved airlines, which in turn “attest” to the traveller’s trustworthiness. These attestations, along with accompanying data such as educational credentials, vaccination records and credit ratings, are stored in a digital profile and compiled into a “trust score,” which is then linked to travellers using facial recognition. Travellers with higher trust scores can access expedited security processing at airports, but once broadly adopted, these profiles are intended to facilitate a range of health care, banking and voting-related objectives. Participation is voluntary, but the inherently coercive nature of border crossings, where travellers can routinely be subject to questioning and security checks they would not face in the course of daily life, provides a powerful incentive for enrollment.

Our outdated legal framework is not up to the coming challenges

In Canada, our legal framework is outdated and lacks the clarity necessary to curb abuse of facial recognition.

Our Privacy Act — the central statute that seeks to check excessive privacy incursions — was adopted over 40 years ago and has not been updated since, leaving it ill-equipped for the challenges posed by facial recognition technologies. Tasked with overseeing government conduct, the Privacy Commissioner is treated by the act as an “internal investigator” incapable of compelling the government to abide by the law. The act lacks several other features of a modern privacy law, like the express obligation to ensure privacy incursions are both necessary and proportionate.

The Canadian Charter of Rights and Freedoms may ultimately curb some of the more egregious harms, but this will take time and litigation, and will further be stymied by the excessively secretive stance that our border agencies have taken on key facial recognition metrics such as accuracy, effectiveness and racial bias rates. Ad hoc adoption of commercial facial recognition tools in the absence of any institutional safeguards compounds the accountability problem. In other jurisdictions, facial recognition systems are frequently accompanied by dedicated statutory regimes and general purpose protections for biometrics. In the U.S., some municipalities have taken the additional step of imposing a moratorium on the use of facial recognition.

It is fundamental that we put in place some limits before facial recognition becomes an indelible part of our border control environment and bleeds past our borders into our daily lives. In the meantime, we should follow the example of many cities and impose a moratorium on new adoption of facial recognition systems for border control objectives until we, as a society, have taken more time to grapple with the excesses and implications of this pernicious technology.


Tamir Israel
Tamir Israel is staff lawyer at the Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic, a technology law clinic based at the University of Ottawa, Faculty of Law.
