Canadian AI Minister Evan Solomon plans to meet with OpenAI co-founder and CEO Sam Altman next week to discuss the commitments the company made in the wake of the mass shooting in Tumbler Ridge, BC.
“We believe in innovation, but not at the expense of safety.”
AI Minister Evan Solomon
OpenAI has indicated that, since it banned the account of the alleged perpetrator of one of the country’s deadliest mass shootings, the company has tightened its protocols for referring ChatGPT accounts to law enforcement, and that under those protocols it would report the account to police if it were flagged today.
OpenAI revealed this, as well as the fact that the alleged shooter had bypassed its safeguards by creating a second ChatGPT account following the ban, in a Thursday letter the company sent to Solomon following a meeting between its executives and Canadian government officials earlier this week.
In a statement shared with BetaKit following the publication of this story, Solomon said, “while we note [OpenAI’s] willingness to strengthen law enforcement referral protocols, establish direct points of contact with Canadian authorities, and enhance safeguards, we have not yet seen a detailed plan for how these commitments will be implemented in practice.”
Solomon said he will meet with Altman next week “to seek further clarity and to ensure that the commitments made are translated into concrete action.”
The US AI firm initially banned suspect Jesse Van Rootselaar’s ChatGPT account eight months before the Feb. 10 shooting. While Van Rootselaar was booted for misusing OpenAI’s models “in furtherance of violent activities,” OpenAI did not inform law enforcement of its concerns about her messages. According to The Wall Street Journal, which was first to report this news, the messages included references to gun violence. OpenAI has not shared the contents of the conversation, including ChatGPT’s responses, with the public or the Canadian government. The incident is under investigation by the RCMP.
Solomon summoned senior OpenAI leaders to Ottawa on Tuesday to walk federal officials through the company’s safety and escalation protocols. Solomon told BetaKit yesterday during a media scrum that ministers left that meeting disappointed given the lack of “concrete changes” OpenAI presented. Justice Minister Sean Fraser has threatened new legislation unless OpenAI quickly changes its approach.
OpenAI now says some of those changes are either here or coming. In the letter, addressed to Solomon, OpenAI vice-president of global policy Ann O’Leary wrote that under the company’s new protocols for law enforcement reporting, OpenAI would refer Van Rootselaar’s chatbot messages to police if they were discovered today.
The letter says over the past several months, OpenAI has partnered with mental health, behavioural, and law enforcement experts to refine its criteria “for when conversations cross the line into an imminent and credible risk” that merits police referral. Today, OpenAI says mental health and behavioural professionals help review difficult cases and its protocols better account for users who may not discuss the target, means, or timing of planned violence.
OpenAI said it would implement additional measures “to help prevent tragedies like this in the future.” These steps include developing direct points of contact with Canadian law enforcement, expanding its commitment to directing users in need to localized support, and strengthening its system for detecting repeat policy violators like Van Rootselaar, who the company recently discovered had evaded its earlier ban with a second ChatGPT account.
As BetaKit has reported, this incident has raised thorny questions about how to weigh individual privacy against corporate and government surveillance in the interest of public safety.
Yesterday, Solomon told BetaKit during that media scrum that while OpenAI’s connection to the Tumbler Ridge tragedy has not changed the federal government’s approach to developing its new AI strategy or regulating AI, “the urgency has changed,” and Canadians want answers.
Solomon said he does not see the threat of legislation to mandate law enforcement reporting as conflicting with the “light, tight, right” vision he laid out last year for AI regulation, and does not expect it to stifle innovation. But he said his first priority is to protect Canadians. “We believe in innovation, but not at the expense of safety,” he added.
OpenAI is facing multiple lawsuits in the US filed on behalf of family members of people who died by suicide after conversations with ChatGPT. The suits, which accuse the company of creating a model that is both manipulative and overly supportive, have not yet been tried in court.
This latest incident comes as OpenAI announced it has raised another $110 billion USD in funding, bringing its post-money valuation to $840 billion USD.
“These chatbots are complex systems designed to be agreeable and persuasive, two potentially deadly combinations in the wrong hands,” Canadian tech media commentator Amber Mac said in a social media post, in response to BetaKit’s previous reporting on this story.
In its letter, OpenAI called these changes “the first step,” indicating that it plans to engage with federal and provincial governments, industry peers, and local stakeholders in the coming months as the company continually refines its approach.
“The tragedy in Tumbler Ridge has raised serious questions about how digital platforms respond when credible warning signs of violence emerge,” Solomon said in today’s statement. “Canadians deserve greater clarity about how human review decisions are made, how escalation thresholds are applied, and how privacy considerations are balanced with public safety.”
Solomon said yesterday that the feds have not yet met with other major AI chatbot providers like Google, Meta, and Anthropic regarding their escalation protocols but intend to do so.
“All options remain on the table as we assess what further steps may be necessary,” he added today.
UPDATE (02/27/26): This story has been updated to include AI Minister Evan Solomon’s response to the letter and note his upcoming meeting with OpenAI CEO Sam Altman.
Feature image courtesy TechCrunch under Creative Commons Attribution 2.0 Generic (CC BY 2.0).
