Minister of AI and Digital Innovation Evan Solomon announces the Regional Defence Investment Initiative from Toronto’s Downsview Aerospace Innovation and Research hub.
Canadian AI styles itself as ethical but is reorienting towards military and dual-use technology.

Mélina Poulin is a PhD student in Society, Culture, and Digital Technologies at INRS, Québec. 

Nicolas Chartier-Edwards is a PhD candidate in Politics, Science, and Technology at INRS, Québec. Both are affiliated with Québec research chairs in Francophone digital technology and AI. 



Last month, Innovation, Science and Economic Development Canada (ISED) released the results of the 30-day national sprint aimed at renewing Canada’s AI strategy. ISED is promoting the sprint’s results as the fruit of public participation. We believe there are significant pitfalls that undermine the initiative’s credibility on both democratic and scientific grounds. Beyond those pitfalls, a certain silence within the Canadian AI ecosystem makes us question its stance on sensitive issues related to the automation of weapons and other military technologies. That silence suggests an attitude that departs from Canadian AI’s historical positioning as “responsible” and “ethical.”

The government is using this consultation, which should be a mechanism for democratic participation, as a way to absolve itself and the Canadian AI ecosystem of ethical commitments.

ISED’s sprint relied on 28 AI consultants and an online survey that received more than 11,000 responses from industry, academia, and civil society. The volume of responses was high, and ISED set a clear tone by delegating data analysis and synthesis to four large language models (LLMs). As professors Blair Attard-Frost and Jonathan Roberge have pointed out, ISED’s questionable methodology creates an illusion of consensus by primarily selecting industry-affiliated consultants and presenting superficial explanations generated by LLMs.

The report contains contradictory statements that follow one another without explicit acknowledgment; calls for regulation clash with calls for deregulation, for example. A statement from Joëlle Pineau, the leading figure in Montréal’s ethical AI ecosystem who left Meta to become director of AI at Cohere, embodies this ambiguity when she asserts that we “must aggressively tackle the negative public sentiment on AI.” She is most concerned with a decline in investment, but her statement seems to reflect a broader desire to bring public opinion in line with government and industry orientations (massive public and private funding of infrastructure) rather than to listen to all the concerns that have been expressed.

A deeper contradiction lies in the repeated “ethical” commitments of Canadian AI even as the ecosystem gradually reorganizes itself around dual-use technology spanning civilian and military industries. This shift, justified in the name of Canadian sovereignty, began with the introduction of the concept of dual use in the 2025 federal budget and the creation of the Bureau of Research, Engineering and Advanced Leadership in Science (BOREALIS) within National Defence. 

In the context of the national sprint, Sam Ramadori, VP and executive director of LawZero, has called on the government to increase BOREALIS’s capital investments in order to bring Canadian start-ups closer to defence needs. 

RELATED: We read every submission from Canada’s AI task force: here’s what they said

LawZero is a non-profit organization, founded and directed by leading AI expert Yoshua Bengio, that specializes in technical solutions for developing “ethical and secure” AI. Bengio has been central to the development of the Montréal ecosystem through his role as former scientific director of Mila, promoter of the Montréal Declaration, “whistleblower” on the existential risks posed by AI, and participant in the “campaign against killer robots.” LawZero’s surprising positioning, and its recent success in persuading Ottawa to subsidize its work, can be explained by the growing use of “sovereignty” rhetoric by AI industries to legitimize their interest in the military market. 

Another example is Cohere, the government’s latest darling, which has secured massive public funding and forged new links with the defence industry. The historically controversial convergence of these industries is all the more startling within an ecosystem that has long prided itself on its ethics.

We are not surprised that a national sprint, coordinated by figures mostly drawn from the Canadian AI industry, has produced a report that, among other things, encourages AI investment built on military-industrial ties. What does concern us is the ethical and democratic masquerade of Canadian AI, as evidenced by LawZero’s enthusiasm for dual-use technologies. 

The problem is that the government is using this consultation, which should be a mechanism for democratic participation, as a way to absolve itself and the Canadian AI ecosystem at large of their ethical commitments. Even more alarming is the “quiet militarization” of AI, which is taking place in a regulatory void, as Canada has no legislation on AI or autonomous weapons, and operates within an ethical vacuum that serves corporate opportunism and state complacency.

The opinions and analysis expressed in the above article are those of its author, and do not necessarily reflect the position of BetaKit or its editorial staff. It has been edited for clarity, length, and style.

Feature image courtesy Josh Scott for BetaKit.
