OpenAI says that since banning the account of the perpetrator of a mass shooting in Tumbler Ridge, BC, it has tightened its protocols for referring ChatGPT accounts to law enforcement, and that under those protocols it would have reported the account to police had it been flagged today.
“We believe in innovation, but not at the expense of safety.”
AI Minister Evan Solomon
That, along with the news that the shooter had created a second ChatGPT account following the initial ban, was among the revelations included in a Thursday letter sent to AI Minister Evan Solomon following OpenAI’s meeting with Canadian government officials earlier this week.
The US AI firm banned suspect Jesse Van Rootselaar’s ChatGPT account eight months before the Feb. 10 tragedy. While Van Rootselaar was booted for misusing OpenAI’s models “in furtherance of violent activities,” OpenAI did not inform law enforcement of its concerns about her messages. According to The Wall Street Journal, which was first to report this news, the messages included references to gun violence. OpenAI has not shared the contents of the conversation, including ChatGPT’s responses, with the public or the Canadian government. The incident is under investigation by the RCMP.
Canadian AI Minister Evan Solomon summoned senior OpenAI leaders to Ottawa on Tuesday to walk federal officials through its safety and escalation protocols. Solomon told BetaKit yesterday during a media scrum that ministers left that meeting disappointed given the lack of “concrete changes” OpenAI presented. Justice Minister Sean Fraser has threatened new legislation unless OpenAI quickly changes its approach.
OpenAI now says some of those changes are either here or coming. In the letter, addressed to Solomon, OpenAI vice-president of global policy Ann O’Leary wrote that under the company’s new protocols for law enforcement reporting, OpenAI would refer Van Rootselaar’s chatbot messages to police if they were discovered today.
The letter says that, over the past several months, OpenAI has partnered with mental health, behavioural, and law enforcement experts to refine its criteria “for when conversations cross the line into an imminent and credible risk” that merits police referral. Today, OpenAI says mental health and behavioural professionals help review difficult cases, and its protocols better account for users who may not discuss the target, means, or timing of planned violence.
OpenAI said it would implement additional measures “to help prevent tragedies like this in the future.” These steps include developing direct points of contact with Canadian law enforcement, expanding its commitment to directing users in need to localized support, and strengthening its system for detecting repeat policy violators like Van Rootselaar, who, the company recently discovered, evaded its earlier ban with a second ChatGPT account.
BetaKit has reached out to Solomon for reaction to the changes outlined in the OpenAI letter.
As BetaKit has reported, this incident has raised some thorny questions about how to balance individual privacy from corporate and government surveillance against the need to protect public safety.
Yesterday, Solomon told BetaKit during that media scrum that while OpenAI’s connection to the Tumbler Ridge tragedy has not changed the federal government’s approach to developing its new AI strategy or regulating AI, “the urgency has changed,” and Canadians want answers.
Solomon said he does not see the threat of legislation to mandate law enforcement reporting as conflicting with the “light, tight, right” vision he laid out last year for AI regulation, and does not expect it to stifle innovation. But he said his first priority is to protect Canadians. “We believe in innovation, but not at the expense of safety,” he added.
OpenAI is facing multiple lawsuits in the US filed on behalf of family members of people who died by suicide after conversations with ChatGPT. The plaintiffs accuse the company of creating a model that is both manipulative and overly supportive. These suits have not yet gone to trial.
This latest incident comes as OpenAI has announced it raised another $110 billion USD in funding, bringing its post-money valuation to $840 billion USD.
“These chatbots are complex systems designed to be agreeable and persuasive, two potentially deadly combinations in the wrong hands,” Canadian tech media commentator Amber Mac said in a social media post, in response to BetaKit’s previous reporting on this story.
In its letter, OpenAI called these changes “the first step,” indicating that it plans to engage with federal and provincial governments, industry peers, and local stakeholders in the coming months as the company continually refines its approach.
Solomon said the feds have not yet met with other major AI chatbot providers like Google, Meta, and Anthropic regarding their escalation protocols but intend to do so.
Feature image courtesy Pexels. Photo by Matheus Bertelli.
