It is conventional wisdom, repeated by authoritative voices such as the former chief justice of Canada Beverley McLachlin, that Canadians face an access-to-justice (A2J) crisis. While artificial intelligence (AI) and algorithm-assisted automated decision-making could play a role in ameliorating the crisis, the contemporary consensus holds that the risks posed by AI mean its use in the justice system should be curtailed. The view is that the types of decisions that have historically been made by judges and state-sanctioned tribunals should be reserved exclusively for human adjudicators, or at the very least be subject to human oversight, even though this would limit the advantages of speed and lower cost that AI might deliver.

But we should be wary of prematurely precluding a role for AI in addressing at least some elements of the A2J crisis. Before we concede that robust deployment of AI in the civil and criminal justice systems is to be avoided, we need to take the public’s views into account. What they have to say may lead us to very different conclusions from those reached by lawyers, judges and scholars.

Though the prospect of walking into a courtroom and being confronted by a robot judge remains the stuff of science fiction, we have entered an era in which informed commentators confidently predict that the foreseeable future will include autonomous artificial intelligences passing bar exams, getting licensed to practice law and, in the words of Matthew and Jean-Gabriel Castel in their 2016 article “The Impact of Artificial Intelligence on Canadian Law and the Legal Profession,” “perform[ing] most of the routine or ‘dull’ work done by justices of the peace, small claims courts and administrative boards and tribunals.” Hundreds of thousands of Canadians are affected by such work every year.

Influential voices in the AI conversation have strongly cautioned against AI being used in legal proceedings. Where governments have addressed the matter, as in the EU’s General Data Protection Regulation or France’s Loi informatique et libertés, that precautionary approach has been rendered as a right to have a “human in the loop”: decisions that affect legal rights cannot be made solely by means of the automated processing of data.

Concerns about the accountability of AI — both generally and specifically in the context of legal decisions — should not be lightly dismissed. There are significant and potentially deleterious implications to replacing human adjudicators with AI. The risks posed by the deployment of AI in the delivery of legal services include a lack of transparency and uncertainty about where to locate liability for harms, as well as various forms of bias latent in the data relied on, in the way algorithms interact with those data and in the way users interact with those algorithms. Having AI replace human adjudicators may not even be technically possible: observers such as Frank Pasquale and Eric L. Talley have taken pains to point out that there is an irreducible complexity, dynamism and nonlinearity to law, legal reasoning and moral judgment, which means these matters may not lend themselves to automation.

Real as those technological constraints may be at the moment, they may also be real only for the moment. And while these constraints may apply to some (or even many) instances of adjudication, they don’t — or likely won’t continue to — apply to all of them. Law’s complexity runs along many axes: it governs many areas of human endeavour and touches many different aspects of our lives. That variety requires us to be careful not to treat all interactions with the justice system as equivalent for the purposes of AI policy. We might use algorithms to expeditiously resolve, for example, consumer protection complaints or breach of contract disputes, but not matters relating to child custody or criminal offences.

Whether and when we deploy AI in the civil and criminal justice systems are questions that should be answered only after taking into account the views of the people who would be subject to those decisions. The answer to the question of judicial AI doesn’t belong to judges or lawyers, or at least not only to them — it belongs, in large part, to the public. Maintaining public confidence in the institution of the judiciary is a paramount concern for any liberal democratic society. If the courts are creaking under the strain of too many demands, if resolutions to disputes are hobbled by lengthy delays and exorbitant costs, we should be open to the possibility of using AI and algorithms to optimize judicial resources. If and to the extent we can preserve or enhance confidence in the administration of justice through the use of AI, policy-makers should be prepared to do so.

We can reframe the issue as an inquiry into what people look for from judicial decision-making processes. What are the criteria that lead people who are subject to justice system decisions to conclude that the process was “fair” or “just”? As Jay Thornton has noted, scholars in the social psychology of procedural justice, such as Gerald Leventhal and Tom Tyler, have done empirical work that provides exactly this insight into people’s subjective views. People want their justice system to feature such characteristics as consistency, accuracy, correctability, bias suppression, representativeness and ethicality. In Tyler’s formulation, people want a chance to present their side of the story and have it be considered; they want to be assured of the neutrality and trustworthiness of the decision-maker; and they want to be treated in a respectful and dignified manner.

It is not obvious that judicial AI fails to meet those criteria — it is almost certainly the case that on some of the relevant measures, such as consistency, judicial AI might fare better than human adjudicators. (Research has indicated, for example, that judges render more punitive decisions the longer they go without a meal — in other words, a hungry judge is a harsher judge. Whatever else might be said about robot judges, they won’t get hungry.) When deciding between human adjudication and AI adjudication, we should also attend to the question of whether existing human-driven processes are performing adequately on the criteria identified by Leventhal and Tyler. That is not a theoretical inquiry but an empirical one: it should be assessed by reference to the subjective satisfaction of the parties who are involved in those processes.

There may be certain types or categories of judicial decisions that people would prefer be performed by AI if doing so would result in faster and cheaper decisions. We must also take fully into account the fact that we already calibrate adjudicative processes for solemnity, procedural rigour and cost to reflect conventional views of what kinds of claims or disputes “matter” and to what extent they do so. For example, the rules of evidence that apply in “regular” courts are significantly relaxed (or even obviated) in courts designated as “small claims” (which often aren’t so small: in Ontario, Small Claims Court hears disputes of up to $25,000). Some tribunals that make important decisions about the legal rights of parties — such as the Ontario Landlord and Tenant Board — do not require their adjudicators to have a law degree. We have been prepared to adjust judicial processes in an effort to make them more efficient, and where technology has been used to improve processes and facilitate dispute resolution, as has been the case with British Columbia’s online Civil Resolution Tribunal, the results appear to have been salutary. The use of AI in the judicial process should be viewed as a point farther down the road on that same journey.

The criminal and civil justice systems do not exist to provide jobs for judges or lawyers. They exist to deliver justice. If justice can be delivered by AI more quickly, at less cost and with no diminishment in public confidence, then the possibilities of judicial AI should be explored and implemented. It may ultimately be the case that confidence in the administration of justice would be compromised by the use of AI — but that is an empirical question, to be determined in consultation with the public. The questions of confidence in the justice system, and of whether to facilitate and deliver justice by means of AI (including the development of a taxonomy of the types of decisions that can or should be made using AI), can only be fully answered by those in whom that confidence resides: the public.

This article is part of the Ethical and Social Dimensions of AI special feature.




Bob Tarantino is an entertainment lawyer and a PhD student at Osgoode Hall Law School.
