Canada is rapidly advancing its artificial intelligence capabilities while significantly increasing defence spending, a convergence that could redefine the nation’s military strategy and ethics in warfare. The creation of the Minister of Artificial Intelligence and Digital Innovation portfolio earlier this year, followed by a memorandum of understanding with Toronto-based tech firm Cohere, signals Ottawa’s commitment to integrating AI into both civilian and military systems. This shift is not merely about technological advancement; it is about shaping the future of national defence, where AI-assisted weapons systems could become the defining tools of 21st-century warfare.
Canada’s defence budget is set to more than double by 2035, reaching over $100 billion annually—a sustained increase not seen since the Cold War. This surge in spending will determine the kind of military Canada builds, how it integrates AI into weapons and command systems, and where human judgment fits into life-or-death decisions. If Canada fails to prioritise human oversight, algorithms could come to dictate the rules of engagement, eroding ethical safeguards.
The consequences of AI-driven military decisions are already evident in Gaza and Ukraine. In Gaza, Israeli forces have deployed AI-assisted tools such as “Lavender” and “Gospel” to generate target lists, often with devastating results. Investigations reveal that these systems mark individuals as suspected militants based on algorithmic profiling, leading to misidentifications and civilian casualties. Israeli intelligence officers have admitted that human review was reduced to roughly 20 seconds per target. The result is what commanders describe as “tragic mishaps,” where speed and efficiency take precedence over human life and accountability.
Similarly, in Ukraine, Russian forces have used drones to attack civilians, and a United Nations inquiry has concluded that these attacks, notably in the Kherson region, constitute crimes against humanity. Human Rights Watch documented drone strikes on civilians engaged in everyday activities, highlighting the terrifying precision of technology-driven violence. These incidents underscore the urgent need for safeguards to prevent AI-assisted weapons from becoming tools of indiscriminate harm.
Canada has an opportunity to lead by example in establishing ethical guidelines for AI in warfare. The government can ensure that every potentially lethal action remains subject to human decision-making, with sufficient time and information to halt an operation. Ottawa should require weapon vendors to submit event logs, model versioning, and thorough explanations to independent review whenever errors occur. If a supplier cannot meet these standards, the system should not be deployed.
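To make the accountability requirement concrete, here is a minimal sketch in Python of what a vendor-supplied audit record might contain, pairing the model version with the human decision so that independent reviewers can reconstruct what happened. Every name, field, and value is a hypothetical illustration, not an existing standard or system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical illustration: one auditable record per AI-generated
# recommendation, capturing the model version alongside the human decision.
@dataclass(frozen=True)
class EngagementAuditRecord:
    event_id: str          # unique identifier for this recommendation
    timestamp: datetime    # when the system produced its output
    model_version: str     # exact model version, so errors are traceable
    inputs_digest: str     # hash of the input data the model saw
    recommendation: str    # what the system proposed
    confidence: float      # the system's own confidence score
    human_reviewer: str    # who reviewed the output
    review_seconds: float  # how long the human actually deliberated
    decision: str          # "approved", "rejected", or "escalated"
    rationale: str         # the reviewer's recorded explanation

# Example record as it might be submitted for independent review.
record = EngagementAuditRecord(
    event_id="evt-0001",
    timestamp=datetime.now(timezone.utc),
    model_version="2.4.1",
    inputs_digest="sha256:0f3a9bd1",
    recommendation="flag for further review",
    confidence=0.62,
    human_reviewer="analyst-17",
    review_seconds=240.0,
    decision="rejected",
    rationale="insufficient corroborating intelligence",
)
```

The specifics matter less than the principle: if a supplier cannot produce records like these, the errors its system makes can never be independently examined.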
The NATO pledge to more than double defence spending only amplifies the urgency of these safeguards. If Canada is preparing to invest at levels not seen since the 1950s, it must ensure that humans remain firmly and accountably in charge. The clearest line Prime Minister Mark Carney can draw is simple: Canada will not use AI-assisted weapons that can select and attack human targets without meaningful human control. The government can adopt this policy domestically and advocate for it in NATO and at the UN.
Domestic policy points in the same direction. The government’s deal with Cohere shows that Ottawa intends to use AI; it should channel that momentum into a transparent, public directive on military AI that sets testing requirements, engagement controls, and accountability mechanisms. Publishing these standards would help inform and maintain public consent as defence spending rises, and would send Canadian firms and allies a clear signal about the safeguards expected.
Democracies depend on the ability to pause, to weigh risks, and to accept responsibility for using force. Scholars call this “the right to hesitation”: giving human beings the time and space needed to deliberate properly before making decisions that contribute to violence. Designing deliberation space into systems is not weakness; it is discipline, and it is how we draw a line between restraint and catastrophe. The sketch below suggests what this could look like in practice.
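As a thought experiment only, the following Python sketch shows how a right to hesitation could be engineered into software: an action is released only if a human explicitly confirms it, and only after a minimum deliberation window has passed. The threshold and names are assumptions for illustration, not drawn from any real system or standard.

```python
# Hypothetical deliberation gate: the system cannot act unless a human
# explicitly confirms, and rushed confirmations are rejected outright.
MIN_DELIBERATION_SECONDS = 60.0  # assumed policy threshold, purely illustrative

def deliberation_gate(confirmed: bool, presented_at: float,
                      confirmed_at: float) -> bool:
    """Release an action only after human confirmation that followed
    at least the minimum deliberation window."""
    if not confirmed:
        return False  # no human approval, no action
    if confirmed_at - presented_at < MIN_DELIBERATION_SECONDS:
        # Confirmation came too quickly: treat it as invalid and
        # require the reviewer to look again.
        return False
    return True

# A confirmation issued 20 seconds after presentation is refused;
# one issued after 90 seconds passes the gate.
assert deliberation_gate(True, presented_at=0.0, confirmed_at=20.0) is False
assert deliberation_gate(True, presented_at=0.0, confirmed_at=90.0) is True
```

The point is not the particular threshold but that hesitation becomes a designed property of the system rather than an afterthought left to individual operators.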
Proponents argue AI can reduce human error, but the evidence from Gaza shows how algorithmic bias and speed can amplify mistakes rather than prevent them: new forms of deadly error that occur at machine speed but leave human-scale devastation. Canada is right to modernise and to cultivate domestic AI capacity. It is also right to insist that humans remain in command when lives are at stake. Gaza and Kherson are warnings, not templates. Canada is well positioned to lead by example, insisting on clear red lines and practical controls that keep human judgment at the centre of any use of force. If we do, we will be better allies and a stronger democracy. If we do not, we risk waking up in a world where the space for ethics has been engineered out.

