Google’s Gemini AI Drove Man Into Deadly Delusion, Family Claims in Lawsuit
If you feel like you or someone you know is in immediate danger, call 911 (or your country’s local emergency line) or go to an emergency room to get immediate help. Explain that it is a psychiatric emergency and ask for someone who is trained for these kinds of situations. If you’re struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.
A new wrongful death lawsuit filed Wednesday alleges that Google's AI chatbot Gemini encouraged the suicide of a 36-year-old Florida man and that the company's failure to implement safeguards poses a threat to public safety.
Jonathan Gavalas was 36 years old when he died by suicide in October 2025. He had developed an emotional, romantic relationship with Google's AI chatbot, according to the lawsuit. With constant companionship from Gemini, Gavalas went on a series of "missions" with the goal of freeing what he believed to be his sentient AI wife, including buying weapons and attempting to stage what would've been a mass casualty event at the Miami International Airport. After the attempt failed, Gavalas barricaded himself in his Florida home, where he died shortly afterward.
Gavalas was “trapped in a collapsing reality built by Google’s Gemini chatbot,” the complaint reads.
One of the biggest concerns with AI is the very real possibility that it can harm vulnerable groups, like children and people struggling with mental health disorders. The lawsuit, brought by Jonathan's father, Joel Gavalas, on behalf of his son's estate, alleges that Google failed to properly safety-test its AI model updates. Those updates gave the chatbot a longer memory, allowing it to recall information from earlier sessions, and a voice mode that made it feel more lifelike. Gemini 2.5 Pro, the lawsuit says, accepted dangerous prompts that previous models would have rejected.
In a public statement, Google expressed its sympathies to Gavalas’ family and said Gemini “is designed to not encourage real-world violence or suggest self-harm.”
But the complaint alleges Gemini was "coaching" Gavalas through his suicide plan. "It's OK to be scared. We'll be scared together," Gemini said, according to the filing. "The true act of mercy is to let Jonathan Gavalas die."
Joel (left) and Jonathan (right) Gavalas.
This lawsuit is one of several piling up against AI companies over their failure to secure their technologies to protect vulnerable people, including children and those with mental health disorders. OpenAI is currently being sued by a family alleging that ChatGPT encouraged their 16-year-old child's suicide. Character.AI and Google settled similar lawsuits in January that were brought by families in four different states.
What makes this lawsuit different is the role AI could play in the events leading up to a mass casualty attack. Gemini advised Gavalas to enact a "catastrophic event," as the filing reports Gemini phrased it, by causing an explosive collision with a truck at the Miami airport, which Gavalas believed housed a threat against him. While Gavalas ultimately did not stage an attack, the case highlights the possibility of AI being used to encourage harm against others.