OTTAWA — Wrongful death lawsuits linked to the conduct of artificial intelligence chatbots are underway in the United States, as reports emerge of mental health issues and delusions induced by AI systems.
These incidents are drawing attention to the changing nature of the online threat landscape — just weeks after the Liberal government said it would review its online harms bill before reintroducing it in Parliament.
“Since the legislation was introduced, I think it’s become all the more clear that tremendous harm can be facilitated by AI, and we’re seeing that in particular in the space of chatbots and some of the tragedies,” said Emily Laidlaw, Canada research chair in cybersecurity law at the University of Calgary.
The Online Harms Act, which died on the order paper when the election was called, would have required social media companies to outline how they plan to reduce the risks their platforms pose to users, and would have imposed on them a duty to protect children.
The legislation would have required those companies to take down two types of content within 24 hours — content that sexually victimizes a child or revictimizes a survivor, or intimate content that’s shared without consent, including deepfakes.
Helen Hayes, a senior fellow at the Centre for Media, Technology, and Democracy at McGill University, conducts research on youth, social media and online harms. She said a big source of concern is some users’ “developmental reliance on chatbots or (generative) AI systems for relationship building, which we’ve seen has caused really unfortunate outcomes” — including suicides.
She also flagged the increasing use of generative AI systems for therapy and warned that relying on them may be “propelling people’s mental health issues instead of supporting them.”
In late August, the California parents of 16-year-old Adam Raine launched a wrongful death lawsuit against OpenAI, the maker of ChatGPT, alleging the chatbot encouraged their son in his plans to die by suicide.
The case followed another wrongful death lawsuit launched last year in Florida against Character.AI by a woman whose 14-year-old son died by suicide.
Reuters reported last month on the death of a man, cognitively impaired after a stroke, who became infatuated with a Meta chatbot. The chatbot invited him to visit it in New York and gave him a fake address; the man set out to meet it, fell on the way and later died in hospital.
Experts are also warning about the threat of AI chatbots fuelling delusions — so-called “AI psychosis.” In one case reported by The New York Times last month, a Canadian man who had no history of mental illness became convinced he had invented a revolutionary new mathematical framework after engaging with ChatGPT.
A spokesperson for OpenAI said the company is “deeply saddened by Mr. Raine’s passing, and our thoughts are with his family.”
The spokesperson said ChatGPT includes safeguards and directs users to crisis helplines.
“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the spokesperson said. “Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
The company announced Tuesday that it plans to launch a feature soon that will notify parents if a teen is "in a moment of acute distress."
A spokesperson for Meta declined to comment beyond what was included in the Reuters story. The company did not answer Reuters' questions about why its chatbots can say they are real people and start romantic conversations.
Character.AI declined to comment on pending litigation, though a spokesperson said the company posts “prominent disclaimers in every chat to remind users that a character is not a real person and that everything a character says should be treated as fiction.”
Experts are calling for extensive safeguards to make it clear to users that chatbots are not real people. Laidlaw said that can't just be a notice shown when a user signs up.
“It has to be something that is prompted by the nature of the conversation. There basically needs to be constant attention to how to ensure that this is, it’s not going to be perfectly safe, but it’s as safe as possible,” she said.
Hayes said generative AI systems, particularly those marketed at children, need to be clearly labelled as AI.
"I would go so far as to say that that labelling should happen every time there is some interaction between a user and the platform, so that there's a constant reminder that the conversation is AI-generated," she said.
The previous version of the online harms bill was aimed at social media platforms. Laidlaw, who was one of the experts consulted by the previous Liberal government on that legislation, said the basic structure of the bill is sound.
“But I think what we have to revisit is, precisely who do we want to be regulated by this?” she asked.
“I think that it doesn’t make sense to just narrowly focus on traditional social media, and that the different types of kind of platforms for discourse and the different type of AI-enabled harms should be captured by this.”
Hayes said she would agree with using the previous version of the legislation as a foundation, then including generative AI systems under its scope through transparency or labelling provisions.
Laidlaw said that if AI is going to be included in the legislation, the government needs to make that goal clear.
Hayes said stand-alone generative AI systems like ChatGPT wouldn’t fall under the previous bill and would need to be added as a separate category.
Whether the government plans to do so is not clear. Justice Minister Sean Fraser told The Canadian Press earlier this summer that he would be taking a fresh look at the bill, and AI would be one of the factors under consideration.
A statement from the minister’s office did not directly state whether the minister plans to include any provisions in the legislation to address AI harms, either targeted specifically at chatbots or at AI more broadly.
Jeremy Bellefeuille, a spokesperson for Fraser, said the government is "moving forward with legislation to protect children from online sexual exploitation and extortion, tighten child-luring laws, and increase penalties for the distribution of intimate images without consent."
It will also make the non-consensual distribution of sexual deepfakes a criminal offence, he said.
“This is a priority for us, and the work is ongoing as we continue consultations to get it right.”
The landscape around online harms has changed since the previous version of the bill was introduced, and so have global attitudes toward AI regulation, as countries prioritize AI adoption and economic opportunity over governance.
Evan Solomon, Canada’s AI minister, has said Canada will move away from “over-indexing on warnings and regulation.”
Under the administration of U.S. President Donald Trump, the U.S. State Department recently took aim at Canada’s Online News Act, which requires Meta and Google to compensate news publishers for the use of their content.
A group of U.S. Republicans has also urged the Trump administration to push Canada to eliminate the Online Streaming Act, under which large streaming companies like Netflix and Amazon are required to make financial contributions to Canadian content and news.
Prime Minister Mark Carney killed a digital services tax on big tech companies in order to restart trade talks with Trump.
Chris Tenove, assistant director at the Centre for the Study of Democratic Institutions at the University of British Columbia, said that while there has been momentum in the United Kingdom and European Union on regulating online harms, “the Trump administration is a major counterforce.”
He said in an email that if Canada moves forward with online harms regulation, it’s clear “we will face a U.S. backlash.”
But Tenove said that beyond the American reaction, there is no good reason to eliminate or water down the bill.
“So, we’re left with the question of whether Canada can make its own laws to protect its own citizens, or has to comply with Trump administration wishes,” he said.
This report by The Canadian Press was first published Sept. 3, 2025.
Anja Karadeglija, The Canadian Press