Mila researchers target “AI psychosis” amid concerns about chatbots’ mental health impact

Etienne Brisson at the Mila AI Policy Conference.
Research hub’s AI Safety Studio looks to limit chatbot outputs that feed users’ delusions.

Québec research institute Mila says it is making mental health safeguards for AI chatbots a top research priority as cases of psychosis driven by chatbots mount worldwide.  

During pre-conference sessions ahead of the Mila AI Policy Conference in Montréal this week, researchers and policy experts discussed the rising number of mental health crises and suicides associated with AI chatbot use, as well as the work Mila researchers are doing to address the problem. Through its AI Safety Studio, the research hub is developing independent metrics and guardrails to limit chatbot outputs that have fed users’ delusions and have even allegedly led to deaths by suicide.

An LLM is a “raw mirror without a moral compass, not bound to truthfulness,” and without deep understanding and reasoning.

Simona Gandrabur, Mila

Simona Gandrabur, head of Mila’s AI Safety Studio, told BetaKit on Wednesday that when she joined the research institute over the summer, she proposed pivoting its research direction toward the problem of AI chatbots eroding users’ mental health. “I couldn’t not work on this problem,” she said.

Gandrabur put these edge cases in context: with 800 million weekly active users, 10 percent of Earth’s population uses ChatGPT every week, according to OpenAI. The number one use of generative AI is companionship or therapy, she said, and a fifth of students report that they or their friends have had romantic relationships with AI.

These increasingly emotionally intimate interactions have sometimes led to what’s dubbed “AI psychosis,” where prolonged interactions with a chatbot validate delusions and lead people to detach from reality and harm themselves or others. 

Mila’s Safety Studio team is building infrastructure for open-source, independent guardrails, as well as reliability tests and risk-assessment tools. The biggest challenge, however, is “having real-world data that reflects how conversations with chatbots drift toward psychosis,” Gandrabur said. Often, users will have extended, months-long conversations with chatbots before anything goes awry. 

Lack of education making matters worse

Etienne Brisson, founder of The Human Line Project, based in Trois-Rivières, Que., explained at Mila on Wednesday how his grassroots organization connects affected people and their families through support groups. The project is also compiling data and working with prominent universities to track the disturbing trend.

“We need people to understand and break the stigma about who it affects,” Brisson said, adding that many people experiencing AI psychosis had no prior mental health concerns. 

Mila, one of Canada’s three main AI research hubs, brings together more than 1,000 AI researchers from various universities for academic study of machine learning and AI. Its work spans a number of streams, including climate, safety, and policy.

A lack of education about AI chatbot limitations is making matters worse, speakers said. During a talk, Gandrabur explained that a large language model (LLM) is a “raw mirror without a moral compass, not bound to truthfulness,” without deep understanding and reasoning, and trained on all of the internet to predict the next word. Then, there’s the reinforcement learning layer, where some models are optimized to promote user engagement, which can create “sycophancy and [an] echo-chamber.” 

Alignment, fine-tuning, and external filters known as “guardrails” set system rules and flag potentially harmful outputs, but these measures have limitations, and there are ways to “jailbreak” chatbots into producing dangerous information.

Canada needs a “recalibration” of existing frameworks

One central problem speakers at the AI Policy Conference touched on was AI companies’ incentive to optimize for user engagement. Google and AI companion startup Character.AI last week settled lawsuits alleging that their chatbots led to a teenager’s death by suicide. OpenAI has now been sued multiple times over its chatbot allegedly encouraging users to attempt suicide. Just this week, a lawsuit filed in California alleged that ChatGPT had acted as a “suicide coach” for a Colorado man who died of a self-inflicted gunshot wound in November.

To better understand the issue and develop solutions, Mila-affiliated researchers are also looking at people’s everyday experiences.

“[Chatbots] are tutors, companions, and voices that sound calm, certain, and reassuring, and endlessly available when few others are,” said Helen Hayes, associate director of policy at the McGill University Centre for Media, Technology, and Democracy and Mila AI Policy Fellow. 

Simona Gandrabur at the Mila AI Policy Conference.

Because chatbots are transforming from tools of information into tools of relationships, Canada needs a “recalibration” of its existing frameworks, Hayes said. This should include obligations for companies to design safety into their AI models, institutional oversight capable of evaluating AI chatbots before users interact with them, and youth participation in AI governance.

Canada doesn’t yet have legislation regulating AI models. The Artificial Intelligence and Data Act (AIDA), introduced as part of the broader privacy reform Bill C-27, sought to regulate high-impact AI systems but died in January 2025 when Parliament was prorogued, as did the Online Harms Act. AI and digital innovation minister Evan Solomon has said the federal government will table legislation this year specifically tackling deepfakes and data privacy, and will release a refreshed AI strategy.

Meanwhile, other jurisdictions have explored regulations. Hayes said European Union regulators are moving towards systemic risk assessments, and Australian regulators are classifying AI companions as high-risk technology.

If you or someone you love is experiencing a mental health crisis, please call or text Canada’s 24/7 suicide crisis helpline at 988.

All images courtesy Mila, via LinkedIn
