OpenAI’s decision not to alert law enforcement to its concerns about a user who went on to commit a mass shooting has raised thorny questions about how to balance individual privacy from corporate and government surveillance with the protection of public safety.
The US AI company confirmed a Wall Street Journal report to BetaKit this week that it had banned Tumbler Ridge, BC mass shooter Jesse Van Rootselaar’s ChatGPT account eight months before the Feb. 10 shooting. Van Rootselaar was kicked off the platform for misusing OpenAI’s models “in furtherance of violent activities,” but the company did not inform police of its concerns about her messages.
After learning of the decision, Canadian AI Minister Evan Solomon summoned senior OpenAI leaders to Ottawa Tuesday night to walk government officials through the American company’s safety protocols and escalation process. Solomon said he and other officials left that meeting disappointed; Justice Minister Sean Fraser has threatened to introduce new legislation unless OpenAI changes its approach.
“We’d be reluctant to say that we want Google to actively monitor emails and report on them.”
Michael Geist
Experts BetaKit spoke with said Canada should proceed cautiously when considering whether to introduce new laws that force technology companies to report disturbing but not illegal content to police. University of Ottawa professor Michael Geist, who specializes in internet law and privacy matters, worries “the rush to say ‘we need to require disclosures’ carries with it some pretty significant risks.”
Geist thinks enacting lower mandatory thresholds for reporting chatbot activity to law enforcement could harm individual privacy. With most of what humans write now passing through a digital intermediary, Geist also sees this as more than just an AI issue.
“We’d be reluctant to say that we want Google to actively monitor emails and report on them,” Geist told BetaKit in an interview. “I don’t see a significant difference between that and what takes place in an AI chatbot context.”
University of British Columbia associate professor of sociology Mike Zajko, who focuses on internet policy, also shared reservations about such a move. “Companies like OpenAI collect vast amounts of highly sensitive personal information, and have a great deal of discretion about what they do with it,” Zajko told BetaKit. “Privacy and surveillance concerns already exist, and mandating information sharing with law enforcement amplifies these concerns.”
The Wall Street Journal, which first broke the news of OpenAI’s handling of this situation, reported that Van Rootselaar’s messages included references to gun violence and were flagged by an automated review. OpenAI employees discussed whether they should alert Canadian police before deciding against it and banning the account in June 2025, the same month Solomon stressed that Canada needed “light, tight, [and] right” AI regulation.
“This was a devastating tragedy, and we are doing all we can to support the ongoing investigation,” an OpenAI spokesperson told BetaKit. A Royal Canadian Mounted Police (RCMP) spokesperson confirmed that OpenAI reached out to investigators following the incident and said the police service is reviewing the shooter’s online activities, but declined to provide further comment.
In a public statement following last night’s meeting with OpenAI, Solomon wrote that “internal review alone is not sufficient when public safety is at stake.” He said the ministers in attendance expressed disappointment that no substantial new safety measures were presented, adding that OpenAI indicated it will return shortly “with more concrete proposals tailored to the Canadian context.”
Fraser, who also met with OpenAI last night, indicated that legislative change could be on the way if the company does not respond accordingly. “The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they’re not forthcoming very quickly, the government’s going to be making changes,” Fraser said on Wednesday during a press scrum.
Solomon said the federal government is “reviewing broader measures to ensure that AI systems and platforms operating in Canada have clear standards and accountability,” promising that the government will have more to say on this front “in the coming weeks.”
Sharon Polsky, president of the Privacy and Access Council of Canada, told BetaKit in an interview that OpenAI was not necessarily obligated to proactively share information with the Canadian government—and that she was surprised that the company didn’t wait until it was legally compelled to do so, pointing to when Meta leadership ignored a summons to appear in front of Canadian Parliament in 2021.
RELATED: “Stay tuned”: OpenAI teases Canadian office
“On one hand, it’s good PR … they have to play nice,” Polsky said. But she added that it raises deeper questions about to what degree private companies “should be compelled to work on behalf of the state.”
For its part, OpenAI told BetaKit it regularly reviews its law enforcement referral criteria, and is currently doing so in relation to this case to see if any improvements can be made.
While Zajko thinks an update to Canada’s private-sector privacy laws is long overdue, he cautioned against acting too quickly and reactively to this incident, adding, “there is often a danger when policy is developed in response to a horrific crime that the policy may not actually be effective in preventing similar acts in the future, and may create additional risks and harms.”
Zajko noted that there is a vast amount of information flowing through platforms like ChatGPT “that could potentially suggest risks of illegality or some kind of harm,” much of which police lack the expertise to evaluate, including messages from people facing mental health challenges.
RELATED: Mila researchers target “AI psychosis” amid concerns about chatbots’ mental health impact
OpenAI noted that “overenforcement” can be distressing for young people and their families, and may raise privacy concerns.
At last night’s meeting, Solomon said the two parties unpacked how OpenAI identifies “imminent and credible risk,” moves cases from automated detection to human review, and handles referrals when young people may be involved. Solomon said they did not review details of the Tumbler Ridge case as it is an ongoing RCMP investigation.
When OpenAI detects users who appear to be planning to harm others, a small team of human reviewers trained on the company’s usage policies examines their conversations. If that team determines there is “an imminent and credible risk of serious physical harm to others,” the company said it reports those conversations to law enforcement.
“It shouldn’t take the AI Minister meeting with corporate executives to be able to understand what the safety policies of these companies are.”
OpenAI has not provided Van Rootselaar’s chat logs to the Canadian government or public. However, based on what he has heard about OpenAI’s process for reviewing this type of content and its high bar for reporting it to law enforcement, Geist said he thinks the AI firm’s approach “sounds quite reasonable.” While it would have been good, in hindsight, to report Van Rootselaar’s activity, “hindsight’s hindsight,” he said.
What the OpenAI-Tumbler Ridge example does highlight, Geist argued, is the need for “greater transparency” from tech companies about their standards. “It shouldn’t take the AI Minister meeting with corporate executives to be able to understand what the safety policies of these companies are and how they’re administered,” Geist said.
If those businesses are unwilling to be transparent about how they approach these situations or their standards fail to meet the needs of Canadians, additional legislation may be required to mandate this type of transparency or establish a national standard, Geist said.
With files from Alex Riehl.
Feature image courtesy Unsplash. Photo by Levart_Photographer.
