OpenAI sets new safety standards following Solomon meeting and pressure over Tumbler Ridge response

AI Minister Evan Solomon at SAAS NORTH 2025.
After meeting with Sam Altman, Canada’s AI minister says OpenAI is taking “immediate actions.”

Canadian AI minister Evan Solomon says that OpenAI CEO Sam Altman has agreed to a series of safety standards and systems changes in the wake of the mass shooting in Tumbler Ridge, BC.


The changes came after Solomon had a video call with Altman on Wednesday afternoon. The meeting was initially announced last week, after OpenAI revealed that the alleged perpetrator of the Tumbler Ridge shooting, Jesse Van Rootselaar, had been banned from its AI chatbot, ChatGPT.

Solomon set the meeting to “seek further clarity” about OpenAI’s commitments in the wake of the shooting, and to ensure that those commitments were “translated into concrete action.”

Solomon said in a statement following the call that Altman agreed in the meeting to establish a direct point of contact with the Royal Canadian Mounted Police (RCMP) and implement safety protocols that “direct individuals experiencing distress to appropriate local support services.”

The minister also said that he asked OpenAI to apply its new safety standards retroactively, which Altman agreed to do. Solomon explained that this means OpenAI will review previously flagged cases to determine if any incidents should be referred to law enforcement under its new standards, and report them to the RCMP if they are.

An OpenAI spokesperson told BetaKit in an email that Altman spoke with Solomon to discuss the steps the company is taking, including strengthening its law enforcement referral criteria and improving how its systems “account for country and community context.” They did not specifically confirm Solomon’s assertion that the company will apply its new standards retroactively.

“We remain committed to continuing this work with the Canadian government going forward,” the spokesperson said.

RELATED: Evan Solomon will meet Sam Altman as OpenAI faces pressure over Tumbler Ridge response

Solomon said the steps he and Altman agreed to represent “immediate actions to strengthen safety and accountability.” At the same time, the minister reinforced that AI presents an “enormous opportunity” for Canada.

“Canadians must be confident that these technologies operate under clear rules, strong safeguards, and real accountability when risks emerge,” Solomon said. 

OpenAI initially banned Van Rootselaar’s ChatGPT account eight months before the Feb. 10 shooting. While she was banned for misusing OpenAI’s models “in furtherance of violent activities,” OpenAI did not inform law enforcement of its concerns about her messages. According to The Wall Street Journal, which was first to report the news, the messages included references to gun violence.

OpenAI has not shared the contents of the conversation, including ChatGPT’s responses, with the public or the Canadian government. The incident is under investigation by the RCMP. 

After the incident, OpenAI said it tightened its protocols for referring ChatGPT accounts to law enforcement, and that it would have reported that account to police if it had been flagged today. 

Altman’s commitments appear to closely mirror measures his company already committed to in a letter sent to Solomon following a meeting between OpenAI executives and Canadian government officials last week.

Solomon says the government is examining “a range of measures to strengthen protections.”

The letter said that OpenAI has partnered with mental health, behavioural, and law enforcement experts to refine its criteria “for when conversations cross the line into an imminent and credible risk,” and that it would develop direct points of contact with Canadian law enforcement. OpenAI also updated its privacy policy on Wednesday night, informing users about its age prediction, safeguard, and parental tool features.

Solomon said in his Wednesday statement that the government is examining “a range of measures to strengthen protections,” including stronger privacy frameworks, enhanced protections against online harms, and new transparency expectations for AI systems operating in Canada.

Solomon is leading the development of Canada’s new AI strategy and upcoming privacy legislation. He has also said the government is working on new online harms legislation after the original Online Harms Act never became law. The AI minister said multiple times last week that “all options are on the table” following the tragedy.

“We believe in innovation, but not at the expense of safety,” Solomon said last week.

The Tumbler Ridge shooting, and its connection to AI, has raised thorny questions about how to protect individual privacy from corporate and government surveillance while also protecting public safety. University of Ottawa professor Michael Geist told BetaKit last month that he thinks enacting lower mandatory thresholds for reporting chatbot activity to law enforcement could harm individual privacy.

“We’d be reluctant to say that we want Google to actively monitor emails and report on them,” Geist told BetaKit in an interview. “I don’t see a significant difference between that and what takes place in an AI chatbot context.”

However, Canadian tech media commentator Amber Mac doesn’t think the comparison is accurate.

“These chatbots are complex systems designed to be agreeable and persuasive, two potentially deadly combinations in the wrong hands,” Mac wrote in a social media response to BetaKit’s story.

In his statement on Wednesday, Solomon noted some other commitments from his meeting with Altman. These include OpenAI assessing how it can include Canadian privacy, mental health, and law enforcement experts in identifying and reviewing “high-risk cases” involving Canadian users. OpenAI will also provide a report outlining its new systems for identifying “high-risk offenders and repeat policy violators,” which Solomon said he will ask the Canadian AI Safety Institute to examine and advise his office on.

With files from Josh Scott.

Feature image courtesy SAAS NORTH.
