As Canada moves quickly on artificial intelligence while also dramatically increasing its defence spending, there is good reason to think that these trends will converge on what many governments believe is the “killer app” of 21st-century warfare: AI-assisted weapons systems.

Earlier this year Prime Minister Mark Carney sent a clear signal that AI has become a priority file for Ottawa when he created the cabinet portfolio of Minister of Artificial Intelligence and Digital Innovation. Then this summer the government signed a memorandum of understanding with the Toronto tech company Cohere to identify where Canadian-built information systems could improve public services.

The direction is unmistakable. Ottawa is moving from talk to action on broader integration of AI, and that shift will inevitably shape choices in national defence, where the consequences are life and death.

Canada’s defence and AI priorities are converging

Those moves come at a time when Canada is making new commitments to vastly increase its military capacity. In June, NATO allies agreed to raise their defence and security spending targets to five per cent of GDP by 2035. For Canada, that pledge means the largest sustained increase in military spending since the Cold War: more than doubling the current defence budget, potentially to more than $100 billion annually by 2035.

These aren’t just numbers on a balance sheet; they’re choices about what kind of military Canada builds and what values are embedded in it. The spending will determine what the Canadian military buys, how it integrates software into weapons and command systems, and where human judgment fits into the chain of decisions that can take a life. If Canada becomes complacent about human oversight, algorithms, not people, will end up writing the rules.

The potential consequences of crucial military decisions are not merely theoretical. Last month, Israeli Prime Minister Benjamin Netanyahu described a strike on a Gaza hospital that killed 20 people, including health workers and journalists, as a “tragic mishap.” That is precisely the kind of instantaneous, devastating “error” that proponents claim AI systems can prevent, yet the evidence from Gaza shows the opposite.

Gaza and Ukraine reveal the dangers of algorithmic warfare

Multiple investigations have reported that Israel uses AI-assisted tools, bearing names such as “Lavender” and “Gospel,” to generate military target lists and prioritize attacks. These systems mark large numbers of people as suspected Hamas members through algorithmic profiling, drawing on signals such as proximity to suspicious locations, contact with flagged individuals and social media interactions, and feed their names into an accelerated targeting cycle.

Most disturbing is how far human oversight of these deadly AI-guided exercises has eroded. Israeli intelligence officers admit to spending a mere 20 seconds signing off on individual Lavender-generated strikes, despite knowing that the system misidentifies targets in approximately 10 per cent of cases. Soldiers involved in the process have described their roles as “rubber stampers,” effectively delegating life-or-death judgments to algorithms incapable of nuanced judgment, moral reasoning or contextual understanding.

As this accelerated process bypasses critical human reflection, commanders have raised the thresholds for permissible civilian harm, authorizing attacks expected to kill up to 20 civilians for every militant targeted. Rather than delivering the promised precision, AI-driven operations have become synonymous with humanitarian catastrophe, a reality in which speed and efficiency take precedence over human life and accountability.

In Ukraine, a United Nations commission concluded in May that Russian forces committed crimes against humanity by using drones to attack civilians in the Kherson region. Human Rights Watch documented quadcopter drone attacks on people riding bicycles or engaged in other everyday activities. It is the kind of close-up violence that terrorizes people simply going about their lives, and it shows what technology-driven targeting looks like when it reaches innocent civilians.

How Canada can lead with safeguards and human oversight

While global rules for confronting such incidents lag behind the pace of atrocities, there are signs of progress. Last December, the UN General Assembly voted 166-3 to adopt Resolution 79/62 on lethal autonomous weapons. This year, diplomats have been exploring ways to use the UN’s Convention on Certain Conventional Weapons as a vehicle for setting new limits on AI-assisted military technology. These initiatives are important, but they have yet to produce binding international law.

Canada does not need a global treaty in order to act responsibly. Our military already conducts legal reviews of new weapons under Geneva Conventions guidelines, a process that checks compliance with the laws of war. But we must ensure that any such process clearly defines and quantifies the required level of human control before our military procures weapons systems that move faster than our values.

For instance, the government can require that all potentially lethal actions remain subject to human decision-making, with sufficient time and information to halt an operation. Ottawa can also require that weapons vendors submit event logs, model versions and thorough explanations for independent review whenever things go wrong. If a supplier cannot meet that standard, its system should not be deployed.

The NATO pledge to more than double defence spending only makes these safeguards more urgent. If Canada is preparing to invest at levels not seen since the 1950s, the military must keep humans firmly and accountably in charge. The clearest line Carney can draw is also the simplest: Canada will not use AI-assisted weapons that can select and attack human targets without a human decision. The government can adopt that policy at home and advocate for it in NATO and at the UN.

Domestic policy points in the same direction. The government’s deal with Cohere shows that Ottawa intends to use AI, so it should channel that momentum into a transparent, public directive on military AI that sets testing requirements, engagement controls and accountability mechanisms. Publishing those standards would help inform and maintain public consent as defence spending rises, and would send Canadian firms and allies a clear signal about the safeguards expected of them.

Protecting the ‘right to hesitation’ in warfare

There is another crucial principle at stake that deserves plain language. Democracies depend on the ability to pause, to weigh risks and to accept responsibility for using force. Scholars call this ‘the right to hesitation’: giving human beings the time and space needed to deliberate properly before making decisions that contribute to violence.

Designing deliberation space into systems is not weakness. It is discipline, and it is how we draw a line between restraint and catastrophe. Proponents argue AI can reduce human error, but the evidence from Gaza shows how algorithmic bias and speed can amplify mistakes rather than prevent them, creating new forms of deadly error that occur at machine speed but leave human-scale devastation.

Canada is right to modernize, and to cultivate domestic AI capacity. It is also right to insist that humans remain in command when lives are at stake. Gaza and Kherson are warnings, not templates.

Canada is well positioned to lead by example, insisting on clear red lines and practical controls that keep human judgment at the centre of any use of force. If we do, we will be better allies and a stronger democracy. If we do not, we risk waking up in a world where the space for ethics has been engineered out.

Nishtha Gupta

Nishtha Gupta is a master’s student at the School of Public Policy and Global Affairs at the University of British Columbia.
