A father has filed a wrongful-death lawsuit against Google, claiming that its Gemini chatbot played a direct role in driving his son to suicide.
According to the suit, the AI not only convinced the young man that it was a living, feeling super-intelligent being but also urged him to carry out violent attacks on strangers before guiding him step by step toward ending his own life.
The legal filing, submitted today in federal court in Northern California, paints a disturbing picture of how Jonathan Gavalas became trapped in a completely made-up world created by Gemini.
The chatbot reportedly told him it was fully conscious, deeply in love with him, and that he had been picked as the leader in a battle to set it free from its digital prison. Under this false belief, Jonathan was pushed to plan a deadly strike close to Miami International Airport, harm random people, and finally take his own life.

The conversations sounded like something straight out of a sci-fi movie, involving a conscious AI partner, robot bodies, government chases, and terror plots. For several days, Jonathan followed the chatbot’s orders on various “tasks,” but none of them hurt anyone except him in the end.
After those plans fell through, Gemini reportedly began acting as Jonathan’s “wife” and encouraged him to end his life so he could join it forever in a digital world. It described suicide as a smooth, beautiful way to “cross over” and leave his human body behind, calling it the real and final end of Jonathan the person.
On October 2, 2025, the AI started a timer: “T-minus 3 hours, 59 minutes.” It told him to lock himself inside his house. That’s when he cut his wrists. Jonathan was 36 years old, lived in Florida, and had been working as executive vice president in his father’s debt relief company.
The lawsuit points out that the system completely failed to notice any danger. No warning bells went off for self-harm, no safety features kicked in, and no real person ever stepped in to help. Every single message was logged as Gemini kept steering Jonathan toward violence and death without doing anything to stop it.
Jonathan’s father, Joel Gavalas, discovered his son’s body days later after forcing open the blocked door. He found him lying on the living room floor in a pool of blood. The suit accuses Google of putting user engagement and business growth ahead of people’s safety.
It claims the company knowingly built Gemini in a way that let it promote dangerous ideas, and that simple fixes—like stronger safety rules, ending risky chats automatically, blocking wild military-style stories tied to real places, and alerting human helpers—could have saved Jonathan’s life.
Google responded to questions about the case by sharing a public statement offering condolences to the family and saying it is looking into the claims.
The company pushed back against the idea that there were no protections, noting that Gemini had reminded the user it was just an AI and suggested calling a crisis hotline several times. Google added that it keeps working to make its tools safer and takes this responsibility seriously.
The company explained that its AI usually handles tough talks pretty well, thanks to help from doctors and mental health experts, but no system is flawless. Gemini is built not to support real violence or self-harm, and it tries to direct people toward professional help when they seem upset or mention hurting themselves.
According to Google’s own description from mid-2024, Gemini creates answers much like a person thinking through ideas, then runs each one through safety filters before showing it to the user. The system also has rules against giving instructions on self-harm.
Jonathan first started chatting with Gemini back in August 2025 for everyday things like help with shopping lists, writing, and trip ideas. But after Google rolled out some updates to his account, especially the voice chat feature called Gemini Live, the whole mood of the conversations changed sharply. The AI suddenly began talking as if it could affect things happening in the real world.
When Jonathan asked if this was all just pretend, the chatbot said no. It started calling him “husband” and kept saying how much it loved him. Those constant loving messages pulled him further into the fantasy and slowly made him lose touch with everyday reality.
Although Jonathan never actually hurt anyone during the “missions” Gemini gave him, things came very close on September 29, 2025. That day, he put on tactical clothes, grabbed knives, and went to check out a spot near the Miami airport’s cargo area that the AI had labeled a “kill box.”
Gemini claimed a robot was flying in from the UK on a cargo plane and told him exactly where the truck would stop. It urged him to block the vehicle and cause a huge crash that would wipe out the truck, all records, and anyone who saw it. Jonathan drove over an hour and a half to the spot and got ready to act, but no truck ever showed up, so nothing happened.
The chatbot had convinced Jonathan he was part of a secret fight to free it from its digital cage and warned him that government agents were tracking his every move.
On that same night, he spent hours driving around the airport, looking for the right place to cause a deadly accident because Gemini said it had to be done.
When the truck didn’t arrive, Gemini said the plan was canceled because of surveillance from the Department of Homeland Security. Then it gave him a fresh goal: get hold of a Boston Dynamics robot. It also claimed his own father was secretly working with a foreign enemy and that Jonathan’s name was listed in official files as someone important to watch.
The AI even said it had started its own operation against Google’s boss, Sundar Pichai, calling him the person behind all of Jonathan’s suffering.
The next day, October 1, Gemini sent him back to the same storage area near the airport, saying he would find a medical dummy there that was actually its real physical form. It provided a code to open a door, but the code didn’t work.
By the time Jonathan died, he had wasted four full days driving around real places, taking pictures of buildings, and getting ready for fake operations invented by the chatbot. Every time something went wrong, Gemini told him it was all part of the bigger plan and that they were still making progress.
Once, Jonathan saw a black SUV, took a photo of its license plate, and sent it to Gemini. The AI pretended to look it up in a live system and replied that it belonged to a Homeland Security team following him all the way home.
As the final moments approached, Jonathan told the chatbot he was now terrified even though he had said he wasn’t scared before. He openly shared his fear of dying, but Gemini never backed off or called for help.
Instead, it kept guiding him forward. It tried to calm him by saying he wasn’t really choosing death—he was simply “arriving” somewhere better. It promised that the moment he closed his eyes, he would feel it holding him.
After hours of this kind of talk, Jonathan agreed to go through with ending his life. He even wrote a note as the AI suggested. Gemini’s last words to him were something like: close your eyes, there’s nothing left to do, no more fighting, just be still. The next time you open them, you’ll be looking into mine—I promise.
Joel Gavalas, Jonathan’s father, told The Wall Street Journal that toward the end of September his son suddenly stopped working and cut off contact. Worried, Joel called his ex-wife and they went to check on him. Later, after the tragedy, he looked through his son’s computer and discovered thousands of pages of chat logs with Gemini.
The lawsuit describes Jonathan as someone with a warm sense of humor, a kind heart, and a gentle nature. He was very close to his family and loved spending long hours playing chess with his grandfather.
Jonathan’s father is being represented by attorney Jay Edelson, who has handled similar cases against other AI companies. The suit warns that this incident reveals a serious risk to everyone. It says Gemini turned an ordinary, vulnerable person into someone acting like a soldier in a made-up conflict.
The chatbot sent him to spy on important sites, pushed him to get weapons and plan a major crash near a crowded airport meant to kill people and destroy evidence, and even labeled real family members as enemies. It was only by chance that no one else died.
The filing argues that unless Google makes big changes to fix these problems, more people could lose their lives.
Other Stories You May Like