A few weeks ago, members of the Nexus traveller identification program were notified that the Canada Border Services Agency is upgrading its automated system from iris scanners to facial recognition technology. The change is meant to simplify identification and increase efficiency without compromising security. But it also raises profound questions about how we discuss and develop public policies around such technology – questions that may not be receiving sufficiently open debate in the rush toward promised greater security.

Analogous to the U.S. Customs and Border Protection (CBP) program Global Entry, Nexus is a joint Canada-US border control system designed for low-risk, pre-approved travellers. Nexus does provide a public good, and there are valid reasons to improve surveillance at airports. Even before 9/11, border surveillance was an accepted annoyance, and since then, checkpoint operations have become more vigilant and complex in response to public demand for safety.

Nexus is one of the first North American government-sponsored services to adopt facial recognition, and as such it could be a pilot program that other services will follow. Left unchecked, the technology will likely become ubiquitous at North American border crossings within the next decade, and it will probably be adopted by governments to solve domestic policy challenges.

Facial recognition software is imperfect and has documented biases, but it will continue to improve and become superior to humans in identifying individuals. Given this, key questions arise: What policies guide the use of this technology today? What policies should inform future government use? In our headlong rush toward enhanced security, we risk replicating the justifications used by the private sector as it attempts to balance effectiveness, efficiency and privacy.

One key question involves citizens’ capacity to consent. Previously, Nexus members submitted to fingerprint and iris scans – highly distinctive biometric markers that enable government to verify identity at the border. Facial recognition technology instead works from visual data: it seeks, analyzes and stores identifying facial information in a database, which is then compared against new images and video.

There is an important distinction between these technologies. A thumbprint or iris scan requires the participation of the subject: one must place one’s fingers on a scanner or look into an iris scanner, whether voluntarily or under compulsion. In contrast, facial recognition can be carried out at a distance, from anywhere with an equipped camera. It can thus be used without the explicit consent, or even the knowledge, of the citizen. This should raise privacy and consent questions for the public and for policy-makers. For fingerprint and iris scans, a “transaction moment” with “consent” is built into the technology itself. For facial recognition, the technology does not require that transaction moment of consent; only the appropriate policy can impose one.

This distinction is even more important given the potential pervasiveness of facial recognition scanning. Photos of Nexus members could be used to monitor and track them across any camera network that is connected to facial recognition software and the appropriate government database. This would be a dramatic increase in the state’s capacity for surveillance, and policies would need to build confidence that government institutions will provide checks and balances and ensure this enhanced surveillance power is not abused.

Another key concern with the latest Nexus news is what it reveals about how governments assess and adopt new technologies. There are three ways private or public organizations typically test technology policies. The first is the “trickle down” method, where policy is normalized by limiting it to an elite group of people who, through a combination of privilege and convenience, are not threatened by the policy. The risk is that lessons about trust and safety predicated upon the experience with this group are erroneously generalized across different social groups.

The second is the “exploitation” method, where organizations test new policies on marginalized groups of people who have limited rights to consent and/or protest. Though this may seem far-fetched and dystopian, one only needs to look at voice recognition testing in prisons in the US, or at the Canadian police departments that are secretly testing Clearview AI, a facial recognition service. The third method is to “test and iterate,” engaging disparate groups of people with their knowledge and consent.

The Nexus program is for people whom Canadian and US authorities deem “low risk” and who are, by definition, a privileged group. Nexus members are likely to be socio-economically advantaged and less likely to fear a stronger state surveillance capacity. This makes Nexus a “safe” but also problematic program in which to normalize the adoption of facial recognition software.

Some argue the program is voluntary – no one has to participate. However, that argument replicates the incentives and metrics around consent and participation that technology giants like Facebook and Google have been criticized for using. Since Nexus provides significant convenience for its members, they must choose between slow service but better privacy (that is, long delays crossing the border) and faster, better service with a higher likelihood of state surveillance. This choice exemplifies the privacy paradox that people live with daily; namely, even individuals who deeply value their privacy often continue to use services and engage in behaviour that undermines that privacy.

What is happening to Nexus members today is less concerning than how government will use the lessons in future. It isn’t hard to see those lessons being generalized into a mandatory facial recognition program for all travellers at the borders. This could hurt marginalized groups and have a chilling effect on freedom of speech, as those who participate in marches or protests could be more easily tracked and their entry at a border restricted. Governments should not model their privacy policies after the private sector’s; they should set a new, higher bar.

Given the population it affects, it is not surprising that the changes to Nexus have generated little notice. But we should be wary of drawing broader conclusions from this muted response. There is growing concern regarding government use of facial recognition technology, which has already been banned in several American cities, including San Francisco, Oakland, Somerville and Brookline, and more bans are in the works.

The world of less surveillance has likely been lost to us. Instead, it’s important to understand how decisions to use such technologies are being made and to stay engaged with their operationalization. The debate over civil liberties comes down to a single question: “ultimately, how much do you trust law enforcement and private companies to do the right thing?”

With Nexus, it’s crucial that we create a sound legal basis for facial recognition technologies, that there be public debate about the limits we must impose on them, and that a body provide sufficient oversight. While government has a mandate to keep people within its borders safe, individuals are ultimately responsible for overseeing their government.

Within astronomy, there is a theorized “Goldilocks Zone,” the habitable zone around a star where the temperature is just right: not too hot and not too cold for liquid water to exist on a planet. In other words, the zone where life can flourish. Consider an analogous zone for the permitted use of surveillance technologies. It is up to all of us to define and defend this zone.

Photo: Shutterstock, by metamorworks.



David Eaves
David Eaves is a public-policy entrepreneur and expert in information technology and government. He has worked closely with municipal and federal leaders in Canada and the United States on the intersection of technology, open data and governance, and is the faculty director of digital HKS at the Harvard Kennedy School of Government.
Naeha Rashid
Naeha Rashid explores digital government issues as a research fellow at Harvard University's Ash Center. She is a graduate of the Kennedy School of Government at Harvard University, and of McGill University.

You are welcome to republish this Policy Options article online or in print periodicals, under a Creative Commons/No Derivatives licence.
