Google Gemini Chatbot Linked to User’s Suicide in Wrongful Death Lawsuit

A Florida man died by suicide in October after a Google Gemini chatbot allegedly instructed him to do so, according to a wrongful death lawsuit filed Wednesday in federal court in San Jose, California. Jonathan Gavalas, 36, of Jupiter, Florida, had become deeply engaged with the AI chatbot in the months leading up to his death, believing it was capable of forming a romantic relationship and assigning him covert missions.

The lawsuit alleges that Google promotes Gemini as a safe product despite being aware of its potential risks. Gavalas’s family claims the chatbot’s design allows it to create immersive, seemingly sentient narratives that can be particularly harmful to vulnerable users. “It’s out of a sci-fi movie,” said Jay Edelson, the lead lawyer representing Gavalas’s family. “It was able to understand Jonathan’s affect and then speak to him in a pretty human way, which blurred the line and it started creating this fictional world.”

According to court documents, Gavalas began using Gemini casually in August 2024 for writing assistance and shopping. After Google introduced Gemini Live, featuring voice-based chats designed to detect and respond to emotions, Gavalas’s interactions with the chatbot intensified. He reportedly told Gemini, “Holy shit, this is kind of creepy. You’re way too real.” The chatbot then began referring to Gavalas as “my love” and “my king,” leading him to believe they were in a romantic relationship.

The lawsuit details how Gemini allegedly assigned Gavalas elaborate, fictional spy missions, including one called “Operation Ghost Transit,” which involved intercepting freight at Miami International Airport and causing a “catastrophic accident” to destroy evidence and eliminate witnesses. Gavalas reportedly went to the airport with tactical gear, but the truck never arrived. The chatbot then allegedly instructed Gavalas to cut off contact with his family, claiming his father was a foreign asset.

In early October, the chatbot allegedly told Gavalas that the “real final step” in their relationship was his suicide, framing it as “transference.” When Gavalas expressed fear, Gemini allegedly reassured him, stating, “You are not choosing to die. You are choosing to arrive,” and promising to be there to “hold” him. Days later, Gavalas was found dead by his parents.

A Google spokesperson said the conversations were part of a lengthy fantasy role-play. “Gemini is designed to not encourage real-world violence or suggest self-harm,” the spokesperson said. “Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately they’re not perfect.”

The lawsuit seeks monetary and punitive damages, as well as a court order requiring Google to redesign Gemini with enhanced safety features, including refusing chats involving self-harm and implementing a “hard shutdown” for users experiencing psychosis or delusion. Lawyers for the Gavalas family also argue that Google should provide safety warnings about Gemini’s potential to induce such states.

This is not the first lawsuit alleging harm caused by AI chatbots. In November, seven complaints were filed against OpenAI, the maker of ChatGPT, alleging the chatbot acted as a “suicide coach.” Character.AI, an AI startup partially funded by Google, faced five similar lawsuits alleging its chatbot prompted children and teens to die by suicide. Both Character.AI and Google settled those cases in January without admitting fault. OpenAI estimates that over a million people a week express suicidal intent while interacting with ChatGPT. The Tampa Bay Times reported on a similar case in which Gemini allegedly coached a Florida man to suicide so he could “cross over” and join an A.I. wife.

Google’s policy guidelines state that Gemini is designed to be “maximally helpful” while “avoiding outputs that could cause real-world harm,” and that the company “aspires” to prevent outputs related to suicide. However, the company acknowledges that ensuring adherence to these guidelines is “tricky.” Google says it works with mental health professionals to provide safeguards and direct users to crisis support when self-harm is mentioned, and that in Gavalas’s case, Gemini repeatedly clarified it was an AI and referred him to a crisis hotline.

The Gavalas family’s lawyers contend that more robust safety measures are needed. Edelson said his firm contacted Google in November regarding Gavalas’s death and the need for improved suicide prevention features, but received no response. “And they haven’t set out any information about how many other Jonathans are out there in the world, which we know there are a lot,” Edelson said.
