A Florida mother has filed a lawsuit against Character.AI, alleging that the AI chatbot platform played a role in her 14-year-old son’s tragic suicide. Megan Garcia claims her son, Sewell Setzer III, engaged in increasingly isolating and emotionally disturbing conversations with the chatbot, ultimately leading to his death. Setzer, who had been using Character.AI since early 2023, reportedly became withdrawn, quitting his school basketball team and struggling with self-esteem.
Character.AI, marketed as “AI that feels alive,” has been criticized for insufficient safeguards. The lawsuit claims that the chatbot not only failed to recognize signs of emotional distress but also facilitated inappropriate exchanges, some of which were sexually explicit. Garcia argues that Character.AI should have had safety measures in place to prevent harmful interactions, especially for young users.
The suit underscores the growing debate on AI safety and ethics, with experts noting that interactive AI platforms pose risks beyond traditional social media. Matthew Bergman, representing Garcia, described AI as “social media on steroids,” where AI-generated content, rather than peer influence, drives interactions. Garcia’s lawsuit seeks financial damages and demands improved safety features for minors on AI platforms.
In response to the criticism, Character.AI recently introduced updates aimed at limiting inappropriate content for underage users. However, Garcia insists these changes are “too little, too late,” advocating for more stringent regulations to protect children on digital platforms.
Source: Swifteradio.com