Between 2011 and 2015, teachers in Houston had their job performance evaluated by a “data-driven” appraisal algorithm called the Education Value-Added Assessment System (EVAAS). The algorithm allowed the board of education to automate decisions about which teachers were awarded bonuses, sanctioned for poor scores or even fired. These automated decisions tightly controlled the fate of workers, in this case teachers. The teachers were unable to challenge the decisions or receive an explanation of them because the source code and other information underlying the algorithm are proprietary trade secrets owned by SAS, a third-party vendor.

A long civil lawsuit ensued, and in mid-2017 a US federal judge ruled that using the secret algorithm to evaluate workers’ performance without proper explanation denied the teachers their constitutional rights. The judge had to balance the vendor’s legitimate interest in safeguarding its trade secrets against the teachers’ constitutional right to due process, which protects Americans against substantively unfair or mistaken deprivations of life, liberty or property.

The ruling meant that the teachers and the Houston Federation of Teachers had to be able to independently verify and challenge evaluation decisions made by the algorithm. SAS, however, refused to explain the inner workings of EVAAS, and the school district no longer uses the algorithm.

The future of workers and fundamental rights  

This case is ground-breaking and offers important lessons on how we should think about protecting the fundamental rights of workers and citizens in the age of artificial intelligence. It sends a strong signal that workers must have sufficient information to meaningfully challenge job “terminations based on low EVAAS scores,” and it explains that “due process is designed to foster government decision-making that is both fair and accurate.” It challenges regulators and technologists to question the design of black boxes in artificial intelligence and algorithms. More precisely, it asks whether publicly procured or government-owned algorithms and their automated decisions would pass due process and algorithmic explainability litmus tests in Canada, New Zealand, the United States and elsewhere.

We need to scrutinize the growing use of automated decision systems to evaluate, hire and fire workers. Without the adequate inclusion of workers in the conception, design and deployment of these systems, the systems could become unfairly biased against them and put fundamental rights at risk. In the case of the Houston teachers, algorithmic evaluation not only denied them due process, it also failed to fully assess their lived realities, especially for those working in underprivileged and under-resourced neighbourhoods. Biases embedded in algorithms used to automate decisions could widen existing inequalities within and between our schools and communities.

In many ways, the future of work is about the future of poverty and privilege, and powerlessness and power. In most discussions about the future of work, there is very little focus on the future of workers, their families and our communities. We are told to believe in a technologically driven future of work, which has very little to do with the realities of most workers; we are encouraged to believe in data science and artificial intelligence as false gods — bringers of techno-solutions.

Instead, we should be asking how the application of technology and the changing nature of work can contribute to the general welfare and a good society.

Algorithms all around

The road to the future of work is, no doubt, paved with good intentions. In Houston, for example, the EVAAS algorithm was first used to determine teacher bonuses by evaluating student performance on standardized tests. But then the school board began to use similar algorithmic systems to sanction teachers for low student performance on those tests. Without transparency, good intentions can lead to bad results.
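To make the opacity problem concrete, consider a deliberately simplified, hypothetical value-added calculation in Python. This is not the proprietary SAS EVAAS model, whose inner workings remain a trade secret; the prediction function and its coefficients below are invented purely for illustration.

```python
# A hypothetical, toy "value-added" calculation -- NOT the proprietary
# SAS EVAAS model, whose actual method is a trade secret.
# It illustrates the kind of computation at issue: a teacher's score
# depends entirely on a statistical prediction the teacher cannot inspect.

def predicted_score(prior_score: float) -> float:
    """Predict this year's test score from last year's (toy linear model)."""
    return 0.9 * prior_score + 8.0  # invented coefficients, for illustration only

def value_added(students: list[tuple[float, float]]) -> float:
    """Average gap between actual and predicted scores for one teacher's class.

    Each student is a (prior_year_score, current_year_score) pair.
    """
    gaps = [actual - predicted_score(prior) for prior, actual in students]
    return sum(gaps) / len(gaps)

# Three students: (last year's score, this year's score).
classroom = [(70.0, 75.0), (80.0, 78.0), (65.0, 74.0)]
print(f"Value-added score: {value_added(classroom):+.1f}")  # prints +3.2
```

Even in this toy version, a teacher’s bonus or sanction turns entirely on the hidden prediction function. If that function cannot be inspected, as was the case with EVAAS, the resulting score cannot be meaningfully verified or challenged.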

Algorithms are all around us. In New Zealand, for example, a recent report revealed that 14 government agencies use 32 algorithms for purposes as varied as determining school bus routes, assessing an individual’s priority for access to health services and identifying young people at risk of long-term unemployment in order to offer them social services. The report showed that few, if any, agencies consult those affected by the algorithms, and that existing laws contain no clear standards for safeguards and assurances. Transparency and engagement with workers and citizens are just as low in other countries, including Canada.

We are more than the sum of our data

As Canada implements its Pan-Canadian Artificial Intelligence Strategy and launches the newly minted International Observatory on the Social Impacts of AI and Digital Technologies, we must adopt a strategic mix of hard law (regulation) and soft law (professional codes of conduct). This mix must ensure that we optimize not only the economic gains from these technologies but also fundamental rights and protections, such as procedural fairness, for workers and citizens.

The new strategy must include the voices of workers, strong support for human rights and the recognition that human “labour is not a commodity.” The Universal Declaration of Human Rights influenced the Charter of Fundamental Rights of the European Union, whose articles 7 and 8 protect private life and personal data. The charter in turn inspired the creation of the European Union’s General Data Protection Regulation (GDPR), which came into force in mid-2018.

The GDPR includes numerous protections like the “right to explanation” surrounding automated decision-making. It was created to complement the EU’s Digital Single Market strategy, a plan for expanding online opportunities across member states while also protecting people’s fundamental rights in an increasingly data-driven economy and society.

More needs to be done. Canadians deserve data protection as much as Europeans do. Norms and technical and professional codes of conduct are needed; organizations such as UNI Global Union, the IEEE and the Alan Turing Institute have laid the foundations with their work on principles and ethics.

The future of work is linked to the future of data protection, consent, human agency, collective action, freedom of association and self-determination. In the age of artificial intelligence, we must remember that we are not the sum of our data, and that we must treat all workers and people not as commodities but as unique individuals with fundamental rights.




Kai-Hsin Hung
Kai is a PhD candidate at HEC Montréal researching data, value chains and work. He is also a member of the Centre de recherche interuniversitaire sur la mondialisation et le travail.
Joy Liddicoat
Joy Liddicoat is the assistant research fellow on the Artificial Intelligence and the Law Project at the University of Otago, New Zealand. Prior to this, she was the assistant commissioner at the Office of the Privacy Commissioner and a human rights commissioner of New Zealand.

You may reproduce this Policy Options article online or in print under a Creative Commons Attribution licence.
