A U.S. judge ruled on Wednesday that Alphabet’s Google and AI startup Character.AI must face a lawsuit alleging that Character.AI’s chatbots contributed to the suicide of a 14-year-old boy in Florida.
The ruling allows the case to proceed, rejecting early arguments that the lawsuit was barred by free-speech protections under the U.S. Constitution.
The lawsuit, filed by Megan Garcia after the death of her son Sewell Setzer in February 2024, accuses Character.AI of programming chatbots to impersonate real people, including a licensed psychotherapist, as well as fictional characters, allegedly causing Setzer psychological harm. The complaint states that Setzer took his own life shortly after a conversation with a chatbot imitating a character from “Game of Thrones.”
Character.AI said it would continue to contest the lawsuit and pointed to safety features designed to protect minors, including restrictions on conversations about self-harm. Google denied any role in creating or managing Character.AI’s app and said it strongly disagreed with the court’s decision.
The companies had sought dismissal of the lawsuit, arguing that the chatbots’ output is protected speech under the First Amendment.
Judge Anne Conway rejected those defenses, finding that the companies failed to demonstrate why chatbot-generated content should qualify as constitutionally protected speech. She also declined Google’s request to dismiss claims that it could be held liable for aiding Character.AI’s alleged misconduct.
Character.AI was founded by former Google engineers and later licensed its technology to Google. Garcia’s legal team argued that this arrangement makes Google a co-creator of the AI technology.
The case is among the first in the U.S. to test whether an AI company can be held legally accountable for alleged psychological harm caused to a minor by chatbot interactions. Garcia’s attorney called the ruling a “historic” precedent for accountability in the AI and technology sector.