The Mark Carney government has made “deploying AI at scale” a cornerstone of its attempt to make government more productive and to slash costs by cutting 28,000 jobs by 2029. The goal is to achieve savings of $60 billion over several years.

There are many reasons to be skeptical of the government’s AI strategy. Savings projections tied to digitalization should be taken with a grain of salt. For example, the Phoenix pay system, designed to automate the federal payroll, was supposed to save $70 million per year. Instead, unable to handle the complexity of paying hundreds of thousands of public servants, it has cost the federal government $4.34 billion and counting in attempts to fix it.

However, unrealized savings are the least of the concerns arising from the government’s wholehearted embrace of AI, including algorithmic tools. Deploying these technologies as cost-cutting measures will not only result in worse service for Canadians, it will put lives at risk, as has already happened here and in other countries.

If the federal government is intent on exploring the use of AI (however it is defined) in government, it should not do so as a cost-cutting measure, but only after careful, case-by-case deliberation that pays close attention to how the technology interacts with the people using it and those affected by it.

Two forms of AI

The problems begin with the technologies themselves. Simplifying greatly, consider two general forms of AI.

The first, “generative AI” such as ChatGPT, produces probabilistic output, predicting what the next word is likely to be in a sequence based on its training data. Its output looks like human thought, but the program is merely reproducing patterns in its data. As such, it is prone to producing “hallucinations,” which can involve presenting false information as true.

These are not technically incorrect outputs per se, because the program is simply doing what it is designed to do: provide probabilistically determined strings of words and sentences. The fact that this problem cannot be fixed means the output can never be fully trusted.
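To see why such output can read fluently while being untethered from facts, consider a toy sketch of next-word prediction. This is a deliberately minimal illustration in Python, with an invented “training text”; real systems use neural networks trained on billions of words, but the underlying principle of sampling the next word from a probability distribution is the same.

```python
import random

# Toy "language model": record which word follows which in the training text.
training_text = ("the file was lost and the client called and "
                 "the agent reviewed the file").split()

follows = {}
for prev, nxt in zip(training_text, training_text[1:]):
    follows.setdefault(prev, []).append(nxt)

def next_word(prev):
    # Choose the next word in proportion to how often it followed `prev`
    # in the training data: a probabilistic guess, not a checked fact.
    return random.choice(follows.get(prev, training_text))

word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # fluent-looking text, with no understanding behind it
```

Nothing in this procedure checks whether the sentence it assembles is true; it only asks which word is statistically likely to come next.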

The second form is “discriminative AI,” such as decision-making algorithms that produce options and targets by identifying patterns in data. Such programs are only as good as the data provided and their algorithms – both of which are created by humans and therefore share their biases and fallibilities.
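To make the data problem concrete, here is a deliberately simplified sketch; the regions, cases and majority-vote “learning” rule are invented for illustration, standing in for the statistical pattern-finding that real systems perform.

```python
from collections import Counter

# Invented toy data: past human decisions on benefit applications.
# If applicants from region "B" were historically denied more often,
# a pattern-finding program will faithfully automate that bias.
past_cases = [
    ("A", "approve"), ("A", "approve"), ("A", "deny"),
    ("B", "deny"), ("B", "deny"), ("B", "approve"),
]

def learn_rule(cases):
    # "Learning" here is just a majority vote per region, a stand-in for
    # statistical pattern-finding: the rule inherits whatever the data contain.
    counts = {}
    for region, decision in cases:
        counts.setdefault(region, Counter())[decision] += 1
    return {region: c.most_common(1)[0][0] for region, c in counts.items()}

print(learn_rule(past_cases))  # {'A': 'approve', 'B': 'deny'}
```

The program does exactly what it was built to do, yet the historical bias in the data has become an automated rule.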

Two recent Canadian cases reveal that governments’ experimentation with generative AI and algorithmic decision-making in delivering immigration and social-assistance programs has already resulted in serious negative consequences.

In the first case, Immigration, Refugees and Citizenship Canada (IRCC) acknowledged using generative AI to reject an application for permanent residence on the grounds that the applicant’s job duties did not match her claimed Canadian work experience. However, the AI tool had erroneously generated the applicant’s current job duties, which means the application was wrongly rejected.

In the second case, Quebec launched an AI-driven overhaul of its social-assistance system in 2025. Known as Project UNIR, it uses algorithms to help determine eligibility for financial assistance. The project eliminated the previous “assigned agents,” who worked with clients from the beginning to the end of their files, and now divides the tasks in each applicant’s file among officials working in different regions.

One man killed himself after staff gave him incorrect information saying he was ineligible for assistance, a mistake made worse by the absence of a human agent who knew the context of his file. Other people calling the system have expressed suicidal thoughts because of its administrative delays and lost documents.

What makes these two cases – as well as the three deaths from listeria in 2024 due to the Canadian Food Inspection Agency’s reliance on an algorithm and bad data to determine which manufacturing facilities to investigate – so frustrating is that they mirror tragic events earlier and elsewhere.

In 2016, Australia introduced an automated debt-recovery program to identify potential welfare fraud. The program, known as robodebt and based on an algorithm, was so riddled with errors that it wrongly identified 450,000 individuals as being involved in fraud. Robodebt sparked at least three suicides, police investigations, a royal commission and an agreement for the Australian government to pay AU$475 million in compensation to victims.

The Australian government’s ill-fated, costly experiment with algorithmic decision-making, together with the Canadian cases, holds significant lessons on the consequences of turning to algorithms to operate and manage public services while cutting back on frontline public servants.

Such debacles share important similarities, as we explored in our 2023 book The New Knowledge: Information, Data and the Remaking of Global Power.

The first involves the quality of the technology and the consequences of automating public services.

Reliance on generative AI to create actionable reports is itself a problem: it may be unable to deal with the complexity of real-world cases, while making it difficult, if not impossible, for human case workers to intervene and correct problems.

It’s therefore more difficult for clients to figure out what’s happening and why a decision was made – a problem that’s exacerbated as anxious clients are unable to reach human agents via jammed phone lines. The Quebec government has spent millions on a private firm to handle the extra phone calls.

The second regards the role of workers using or affected by these technologies. To mitigate the harms caused by hallucinations and algorithmic decision-making, governments have tended to embrace a “human in the loop” strategy, ensuring people participate in the operation and supervision of algorithm-driven systems.

However, a human-in-the-loop rule is not sufficient to guard against errors or prevent harms. The very presence of the technology affects how people do their jobs.

The shift to automated programs often constrains or even prevents frontline staff from using their experience and expertise to make decisions. Scholars refer to this as the rise of “screen-level bureaucracy”: bureaucracy does not disappear but changes form, becoming less accountable because algorithmic decisions are typically delivered opaquely via private-sector technology.

What’s more, people’s well-documented tendency to treat computer outputs as authoritative is supercharged when workers are asked to do more with less, giving them less time to perform due diligence on these algorithmically generated outputs.

Finally, researchers are increasingly concerned that reliance on AI technologies will lead to deskilling, eroding the very skills and expertise needed to catch AI errors. This is another reason why the human-in-the-loop strategy is deeply flawed: the more workers use these technologies, the worse their skills for checking them become.

Beyond the human-in-the-loop strategy, governments are attempting other mitigation measures.

For example, the IRCC says in its artificial-intelligence strategy that the department uses AI for administrative tasks such as summarizing and producing documents, but that those tools do not themselves reject, or recommend rejecting, applications. The IRCC also rates AI-driven document summarization and production as low risk, while rating AI that informs decision-makers as medium risk.

However, the IRCC’s distinction between low and medium risk is not useful if human decisions on files, which presumably will feed into future consequential decisions, are based on erroneous information from AI tools.

For governments, coming to terms with this reality means recognizing that making such technologies work requires human analysis and review at every step.

Far from enabling cuts to labour costs, these technologies can be adequately managed only by a well-resourced and skilled workforce. If workers are not given sufficient time and authority to review algorithmically generated outputs, then system breakdowns and worse service are all too likely.

The Carney government has placed its faith in AI to deliver low-cost government services. These technologies, used only in specific circumstances, may offer some benefits to Canadians.

However, not only does starting with a preferred solution foreclose other, potentially better options, but neither AI technology as it currently exists nor the many examples of what happens when such technologies are introduced augur anything but a slow-motion disaster for Canadians and the public service.


Natasha Tusikov

Natasha Tusikov is an associate professor in the Department of Social Science at York University. Her research examines the intersection of law, crime, technology and regulation. Bluesky: @ntusikov.bsky.social

Blayne Haggart

Blayne Haggart is a professor of political science at Brock University. He is also a senior fellow with the Centre for International Governance Innovation and the co-author (with Natasha Tusikov) of The New Knowledge: Information, Data and the Remaking of Global Power.
