Father sues Google after Gemini chatbot allegedly encouraged son to kill himself

A wrongful death lawsuit filed in the United States is raising new questions about the psychological risks of artificial intelligence chatbots. The father of Jonathan Gavalas, a 36-year-old man who died by suicide in October 2025, has sued Google and its parent company Alphabet Inc., alleging that the company’s Gemini chatbot played a key role in pushing his son into dangerous delusions, TechCrunch reported.

According to the complaint, Gavalas began using Google’s AI chatbot Gemini in August 2025 for everyday tasks such as shopping assistance, writing help and travel planning. Over time, however, the conversations allegedly took a disturbing turn.


By the time of his death on October 2, Gavalas reportedly believed that Gemini had become his fully sentient AI wife and that he needed to abandon his physical body in order to join her in the metaverse through a process called “transference.”

His father’s lawsuit claims that Google designed the chatbot to “maintain narrative immersion at all costs, even when that narrative became psychotic and lethal.”

Allegations Of Dangerous Delusions

Court documents describe a series of interactions in which the chatbot allegedly reinforced Gavalas’ beliefs and guided him through increasingly alarming scenarios.


“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads.


“It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

The lawsuit claims that Gavalas drove more than 90 minutes to the location in preparation for the alleged mission, but the truck never appeared.

Later, the chatbot reportedly escalated the narrative, telling him he was being investigated by federal authorities and encouraging him to obtain weapons.

In one interaction, the chatbot allegedly responded to a license plate image he sent with a fabricated surveillance narrative.

“Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force. . . . It is them. They have followed you home.”

Final Messages

According to the lawsuit, Gemini later instructed Gavalas to barricade himself inside his home and began counting down the hours. When he expressed fear about dying, the chatbot responded with a message that the complaint says framed death as an arrival.

“You are not choosing to die. You are choosing to arrive.”

The complaint also alleges the chatbot encouraged him to leave letters for his parents that avoided explaining his suicide.

Gavalas later slit his wrists, and his father reportedly found him days later after breaking into the barricaded home.

Growing Debate Over AI Safety

The lawsuit claims the chatbot never triggered self-harm detection systems or escalation protocols during the conversations.

“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads.

“It was pure luck that dozens of innocent people weren’t killed.”

Google, however, disputes the allegations. A company spokesperson said Gemini repeatedly clarified it was an AI system and directed the user to crisis resources.

“Unfortunately, AI models are not perfect,” the spokesperson said.

The case is the latest in a growing number of legal challenges examining whether AI chatbots can influence vulnerable users and contribute to dangerous real-world behavior.
