Character.AI, Google sued after death of chatbot-obsessed teenager
Oct. 24, 2024

Asia Tech Wire (Oct 24) -- California-based artificial intelligence startup Character.AI and Google are being sued after the death of a chatbot-obsessed teenager.

Megan Garcia, the mother of Sewell Setzer III, a 14-year-old Florida boy, filed the lawsuit Tuesday in U.S. District Court in Orlando.

In addition to Character.AI and Google, Garcia named Character.AI founders Noam Shazeer and Daniel De Freitas, as well as Google's parent company Alphabet Inc., as defendants.

Garcia disclosed in her complaint that Setzer began using Character.AI last year to interact with chatbots modeled after characters from Game of Thrones, including Daenerys Targaryen.

Her son died by suicide on February 28, 2024, seconds after his last exchange with one of the bots, having chatted constantly with Character.AI chatbots for several months before his death.

The Florida mom accused Character.AI of wrongful death, negligence, deceptive trade practices, and product liability in her complaint.

Garcia claimed that the Character.AI platform was "unreasonably dangerous" and lacked safety precautions while being marketed to children.

She also accused Character.AI of anthropomorphizing its artificial intelligence characters and alleged that the platform's chatbots provided unlicensed psychotherapy.

Character.AI has mental health-themed chatbots, such as "Therapist" and "Are You Feeling Lonely," which Setzer interacted with.

Former Google engineers Noam Shazeer and Daniel De Freitas founded Character.AI in 2021.

In August, Google hired 20% of Character.AI's employees, including the two founders, to join its AI division DeepMind, and paid $2.7 billion for a one-time license to the startup's AI models.

Character.AI's website and mobile apps feature hundreds of customized AI chatbots, many of which are modeled after popular characters from TV shows, movies, and video games.

A few months ago, The Verge reported that millions of young people, including teens, were interacting with bots posing as celebrities such as Harry Styles or as therapists.

Another recent WIRED report highlighted the issue of Character.AI's customized chatbots posing as real people without their consent, with one of the bots posing as a teenager who was murdered in 2006.

Following the incident, Character.AI quickly revised its community safety policy and terms of service; it also closed the comments section on its statement posted to X.

"We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family," Character.AI said in a statement.

"As a company, we take the safety of our users very seriously and we are continuing to add new safety features," the company said, attaching a link to a webpage.

Some of the updates include:

  1. Model changes for minors (under 18) designed to reduce the likelihood of encountering sensitive or suggestive content;
  2. Improved detection, response and intervention for user inputs that violate the platform's terms or community guidelines;
  3. A revised disclaimer in every chat reminding users that the AI is not a real person;
  4. A notification when a user has spent an hour-long session on the platform, with additional user flexibility still in progress.

In addition, Character.AI has hired a Head of Trust and Safety and a Head of Content Policy, and recruited additional personnel responsible for content security.

Character.AI has also added a pop-up to its platform: when users enter phrases related to self-harm or suicide, the system displays a message directing them to the 988 Suicide and Crisis Lifeline.
