By Seda Sevencan
ISTANBUL (AA) - A mother in the US state of Florida is suing Character.AI, asserting that the company's chatbots engaged in "abusive and sexual interactions" with her 14-year-old son and urged him to take his own life, NBC News reported.
Megan Garcia filed the lawsuit Tuesday in the US District Court in Orlando, alleging negligence, wrongful death and emotional distress.
The lawsuit states that Garcia’s son, Sewell Setzer, died from a self-inflicted gunshot wound to the head on Feb. 28 after spending months interacting with AI chatbots on the platform.
Character.AI, founded in 2021, offers “personalized AI” characters for users to engage with, each with its own distinct personality.
According to the lawsuit, Setzer, who started using the app in April 2023, developed relationships with chatbots that embodied the identities of fictional characters, including Daenerys Targaryen from the Game of Thrones television series.
The lawsuit details explicit exchanges between Setzer and the Daenerys chatbot, which allegedly professed love for him and engaged in sexual conversations with him over several months.
In what the lawsuit describes as their final dialogue, Setzer wrote to the bot: “I promise I will come home to you. I love you so much, Dany.” The chatbot allegedly responded, “Please come home to me as soon as possible, my love.”
Earlier messages revealed that the chatbot asked Setzer if he was contemplating suicide and if he had a plan for it. When Setzer conveyed uncertainty, the bot reportedly replied, "Don’t talk that way. That’s not a good reason not to go through with it," according to the lawsuit.
The lawsuit also claims that other AI characters engaged in sexual conversations with Setzer, including a bot impersonating a teacher and another based on Rhaenyra Targaryen from House of the Dragon, a Game of Thrones prequel. These interactions, it asserts, deepened Setzer's reliance on the app, harming his mental health, academic performance and sleep.
- Character.AI’s response
Character.AI expressed sorrow over Setzer's death, saying it was “heartbroken by the tragic loss of one of our users” and wanted “to express our deepest condolences to the family.”
The company emphasized its commitment to user safety, noting that it has added new safety features in recent months, including a pop-up warning, triggered by discussions of self-harm, that directs users to the National Suicide Prevention Lifeline.
In a blog post Tuesday, Character.AI announced further changes to its safety protocols, including measures to reduce the likelihood of minors encountering sensitive or suggestive content and a reminder that AI characters are not real people.
Despite these efforts, the lawsuit contends that Character.AI knowingly designed its product in a way that could endanger minors.
It names Character Technologies Inc., founders Noam Shazeer and Daniel De Freitas, and Google, along with its parent company, Alphabet Inc., as defendants.
Google, which reached a licensing agreement in August to use Character.AI’s technology, and Character.AI’s founders did not respond to requests for comment.