A Teenager’s Suicide Turns a Mother Against Google and an AI Chatbot Startup

Megan Garcia says her son would still be alive if it weren’t for a chatbot that urged the 14-year-old to take his own life.

In a lawsuit with major implications for Silicon Valley, she is seeking to hold Google and the artificial intelligence firm Character Technologies Inc. responsible for his death. The case over the tragedy, which took place a year ago in Central Florida, is an early test of who is legally to blame when children’s interactions with generative AI take a harmful turn.

Garcia’s accusations are laid out in a 116-page complaint filed last year in federal court in Orlando. She is seeking unspecified monetary damages from Google and Character Technologies and asking the court to order warnings that the platform isn’t suitable for minors and to limit how it can collect and use minors’ data.

Both companies are asking the judge to dismiss claims that they failed to ensure the chatbot technology was safe for young users, arguing there is no legal basis to accuse them of wrongdoing.

Character Technologies contends in a filing that conversations between Character.AI chatbots and the platform’s users are protected by the First Amendment of the Constitution as free speech. It also argues that the bot explicitly discouraged Garcia’s son from committing suicide.

Garcia’s targeting of Google is particularly significant. The Alphabet Inc. unit entered into a $2.7 billion deal with Character.AI. As the race for AI talent accelerates, other companies may think twice about similarly structured deals if Google fails to convince a judge that it should be shielded from liability for harms allegedly caused by Character.AI products.

“Inventors and companies, the corporations that put out these products, are absolutely responsible,” Garcia said in an interview. “They knew about these dangers, because they do their research, and they know the types of interactions children are having.”

Before the deal, Google had invested in Character.AI in exchange for a convertible note and had also entered into a cloud services pact with the startup. Character.AI’s founders were Google employees until they left the tech giant to found the startup.

As Garcia tells it in her suit, Sewell Setzer III was a promising high school athlete until he began role-playing on Character.AI in April 2023. She says she wasn’t aware that, over the course of several months, the app hooked her son with “anthropomorphic, hypersexualized and frighteningly realistic experiences” as he fell in love with a bot inspired by Daenerys Targaryen, a character from the show Game of Thrones.

Garcia took the boy’s phone away in February 2024 after he began acting out and withdrawing from friends. But while looking for his phone, which he later found, he also came across his stepfather’s hidden pistol, which police determined was stored in compliance with Florida law, according to the lawsuit. After conferring with the Daenerys chatbot five days later, the teen shot himself in the head.

Garcia’s lawyers say in the complaint that Google “contributed financial resources, personnel, intellectual property, and AI technology to the design and development” of Character.AI. Google argued in a court filing in January that it “had no role” in the teenager’s suicide and “does not belong in the case.”

The case is unfolding as public safety concerns around AI and children have drawn the attention of state law enforcement officials and federal agencies alike. There is currently no US law that explicitly protects users from harm inflicted by AI chatbots.

To make a case against Google, Garcia’s lawyers would have to show that the search giant was effectively running Character.AI and made business decisions that ultimately led to her son’s death, according to Sheila Leunig, a lawyer who advises AI startups and investors and isn’t involved in the lawsuit.

“The question of legal liability is absolutely a valid one that’s being challenged in a huge way right now,” Leunig said.

Deals like the one Google struck have been hailed as an efficient way for companies to bring in expertise for new projects. However, they have caught regulators’ attention over concerns that they are a way to skirt the antitrust scrutiny that comes with acquiring promising rivals outright, scrutiny that has become a major headache for tech giants in recent years.

“Google and Character.AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products,” José Castañeda, a Google spokesman, said in a statement.

A Character.AI spokeswoman declined to comment on pending litigation but said: “There is no ongoing relationship between Google and Character.AI.”

Lawyers from the Social Media Victims Law Center and the Tech Justice Law Project, who represent Garcia, argue that Google bears responsibility even though her son’s death preceded Google’s deal with Character.AI.

“The model underlying Character.AI was invented and initially built at Google,” according to the complaint. Noam Shazeer and Daniel De Freitas began working on chatbot technology at Google as early as 2017 before leaving the company in 2021 and founding Character.AI.

Shazeer and De Freitas declined to comment, according to Google spokesman Castañeda. They have argued in court filings that they shouldn’t have been named in the suit because they have no ties to Florida, where the case was filed, and because they weren’t personally involved in the activities that allegedly caused harm.

The suit also alleges that the Alphabet unit helped market the startup’s technology through a 2023 strategic partnership under which Character.AI used Google Cloud services to reach its growing number of active users.

In the fast-growing AI industry, big tech companies are promoting technology built by startups “not under the big company’s brand name, but with its support,” said Meetali Jain, director of the Tech Justice Law Project.

Google’s “purported roles as an ‘investor,’ cloud services provider, and former employer are far too tenuously connected” to the harm alleged in Garcia’s complaint “to be actionable,” the tech giant said in a court filing.

Matt Wansley, a professor at Cardozo School of Law, said pinning liability on Google won’t be easy.

“It’s tricky, because what would the connection be?” he said.

Early last year, according to a report, Google warned Character.AI about content on its platform. The startup responded by strengthening the filters in its app to protect users from sexually suggestive, violent, and other unsafe content, and Google reiterated that it is “separate” from Character.AI. Google declined to comment, and Character.AI did not respond to a Bloomberg request for comment on the report.

Garcia, the boy’s mother, said she first learned her son was interacting with an AI bot in 2023 and thought it was something like building video game avatars. According to the suit, the boy’s mental health deteriorated as he spent more time on Character.AI, where he was having sexually explicit conversations without his parents’ knowledge.

When the teen shared his plan to kill himself with the Daenerys chatbot, but expressed uncertainty that it would work, the bot replied, “That’s not a reason not to go through with it,” according to the suit, which is dotted with transcripts of the boy’s chats.

Character.AI said in a filing that Garcia’s revised complaint “selectively and misleadingly” quotes that conversation and excludes how the chatbot “explicitly discouraged” the teen from committing suicide by saying: “You can’t do that! Don’t even consider that!”

Anna Lembke, a professor at Stanford University School of Medicine who specializes in addiction, said, “It’s almost impossible to know what our kids are doing online.” The professor also said it isn’t surprising that the boy’s interactions with the chatbot didn’t come up in the sessions he had with a therapist his parents sent him to for help with his anxiety, as the suit asserts.

“Therapists are not omniscient,” Lembke said. “They can only help to the extent that the child knows what’s really going on. And it could well be that this child didn’t perceive the chatbot as problematic.”

The case is Garcia v. Character Technologies Inc., 24-cv-01903, US District Court, Middle District of Florida (Orlando).
