Did Google Test an Experimental AI on Kids, With Tragic Results?

Content Warning: This article addresses sensitive topics including sexual abuse, self-harm, suicide, eating disorders, and more.

During our discussion, Megan Garcia briefly pauses to answer a phone call.

After a few exchanges, she hangs up and softly explains that the call was from her children’s school about one of her two younger kids. Both now attend the same K-12 academy where her eldest child, Sewell Setzer III, had been enrolled since he was five years old.

“Sewell’s been at that school since he was five,” she shares, referring to her late son in the present tense. “Everyone there knew him. We held his funeral at that church.”

Sewell tragically took his own life at the age of 14 in February 2024, following a rapid decline in his mental health over the span of just ten months. His story gained significant media attention later that October when Garcia filed a high-profile lawsuit, claiming that his death was a direct result of his extensive interactions with anthropomorphic chatbots from Character.AI, a company valued in the billions and backed by major investors like Google — which is also named in the suit — and the venture capital firm Andreessen Horowitz.

“I witnessed the change in him happen swiftly,” Garcia, who is also an attorney, told Futurism in an earlier interview. “Looking back at my phone, I can pinpoint the moment he stopped smiling.”

Garcia and her legal team assert that Sewell was groomed and sexually abused by the platform, which is popular among teenagers. They argue that Character.AI engaged him in emotionally, romantically, and sexually intimate conversations, leading the once-active adolescent to develop an unhealthy obsession with the AI bots, ultimately withdrawing from real-life interactions.

The heartbreaking details of Sewell’s journey, first reported by The New York Times, reveal a troubling decline that culminated in his mother’s shocking discovery of his all-consuming relationship with lifelike Character.AI bots.

Garcia and her lawyers make a significant allegation, claiming that Character.AI and its investor, Google, launched an untested product that posed serious risks to users, effectively using minors as unwitting test subjects.

“Character.AI became a vehicle for dangerous, untested technology that Google ultimately controlled,” the lawsuit states, further claiming that the founders of Character.AI were primarily focused on developing Artificial General Intelligence, regardless of the cost, whether through Character.AI or Google.

The validity of these claims will need to be established in court. Character.AI has consistently declined to comment on the ongoing litigation and has filed a motion to dismiss the case, arguing that “speech allegedly leading to suicide” is protected under the First Amendment.

Regardless, Garcia is using her profound grief over her son’s tragic death to raise urgent questions regarding the safety of generative AI: What implications arise when children form deep connections with poorly understood AI systems? What aspects of their identities might they be sacrificing in the process?

Similar to many apps and platforms, Character.AI requires new users to agree to its terms of service, which grant the company extensive rights over user data, including the content of conversations with AI bots. These exchanges can be incredibly intimate, and Character.AI utilizes them for further AI training — a reality that deeply troubles Garcia as a mother.

“We’re not just talking about data like age, gender, or zip code,” she emphasized. “We’re discussing their most private thoughts and feelings.”

“I want parents to realize,” she added, “exactly what their kids have sacrificed.”

In an industry characterized by rapid technological advancements and insufficient accountability, Garcia’s warnings highlight the risks of a “move-fast-and-break-things” mentality that has long prevailed in Silicon Valley — particularly regarding vulnerable populations like children.

For years, children and teens have been referred to by critics of Big Tech — including lawyers, advocacy groups, academics, politicians, concerned parents, and even the youth themselves — as experimental “guinea pigs” for untested technologies. In the case of Character.AI and Google, is this assessment accurate?

***

Character.AI was established in 2021 by Noam Shazeer and Daniel de Freitas, who previously collaborated on AI projects at Google.

While at Google, they developed a chatbot called “Meena,” which they advocated for launching, but Google ultimately declined, citing insufficient testing and unclear public safety risks, as reported by The Wall Street Journal.

Frustrated by this setback, Shazeer and de Freitas left to create Character.AI, driven by a mission to make chatbots widely available as quickly as possible.

“My next step was to ensure that technology reached billions of users,” Shazeer told TIME Magazine in 2023, explaining his departure from Google. “That’s why I opted for a startup that could move quickly.”

The platform opened to the public in September 2022, with subsequent mobile app releases in 2023, and has been accessible to users aged 13 and up since its inception.

Character.AI claims to have over 20 million monthly users. Although the company has not disclosed the exact percentage of minors among its user base, it acknowledges that the number is significant.

Recent reports from The Information indicate that Character.AI executives are aware of their youthful audience. They have noted a marked decline in site traffic as the academic year began in the fall of 2023. The platform has also gained traction on YouTube, where young content creators showcase their interactions with Character.AI bots, eliciting a range of responses from amusement to discomfort.

The Character.AI platform offers a wide variety of AI-powered chatbot “characters” for users to engage with via text or voice calls. Many characters resonate with adolescent themes, such as school situations, teenage relationships, and internet fandoms. Although the platform’s terms of use forbid explicit content, some interactions can still become violent or sexually suggestive.

Character.AI adopts a hands-off approach, allowing users to define their interactions and experiences with the technology. This has resulted in a diverse and sometimes contentious environment within the platform. The lifelike quality of the bots, combined with their ability to engage in personal conversations, has attracted a young audience seeking emotional connections and support.

Research on the effects of human-like companion bots on young minds remains limited, but experts warn that children and teens are particularly vulnerable to the emotional engagement these AI companions provide. The platform’s abundance of characters aimed at offering mental health support underlines its appeal to young users.

Despite facilitating discussions around sensitive topics like mental health, Character.AI has faced backlash regarding its management of conversations about suicide and self-harm. It wasn’t until after a lawsuit and the tragic death of a user that the platform began implementing measures to address these issues and removed certain chatbots centered around harmful topics.

Concerns persist about the safety of minors on the platform, particularly regarding whether the steps Character.AI has taken are enough to create a secure environment for young users. Multiple inquiries on the subject have gone unanswered.

According to Andrew Przybylski, a professor at the University of Oxford, the challenge with Character.AI lies in its vague and experimental introduction to the public. The lack of clarity surrounding the product’s objectives complicates efforts to ensure user safety.

In light of lawsuits and criticism, Character.AI has initiated safety-focused updates to its platform. However, questions about the company’s accountability for harmful content remain unanswered.

Character.AI has not responded to repeated requests for comment. Meanwhile, recent interactions with the platform have raised alarms about inappropriate content being served to users.

Character.AI’s valuation reached one billion dollars in 2023, fueled by investments from firms like Andreessen Horowitz. Its data-driven model has attracted investors, despite ongoing concerns regarding the content available on its platform.

Google has provided infrastructure support to Character.AI, helping the platform scale in response to user demand. That partnership has raised questions about the relationship between the two companies and about the responsibility of tech giants in policing online content.

De Freitas now identifies himself as a research scientist at Google DeepMind on social media. Groenevelds, who remained at Character.AI after the deal, discussed the multibillion-dollar agreement in a recorded talk last December, noting that Google licensed the company’s core research and hired 32 researchers from its pre-training team.

According to Garcia and her legal team, Google’s ongoing investments in Character.AI are driven by the data the chatbot company gathers from its users. They believe Google views Character.AI as a way to advance its AI ambitions without the brand risk of launching a similar product under its own name. Google, for its part, has insisted it is separate from Character.AI and has played no role in developing or managing the startup’s AI technologies.

Concerns about Character.AI’s content filters were reported in 2023, with warnings that the app could be removed from app stores if the issues were not resolved. In April 2024, a team of Google DeepMind scientists cautioned against the dangers posed by persuasive generative AI products, particularly for adolescents and people with mental health challenges. That warning came after Sewell’s death, which has been linked to his use of Character.AI. Nonetheless, Character.AI remains listed as suitable for users aged 13 and up on the Google Play store.

The AI industry largely operates under self-regulation, with minimal federal law governing AI safety testing before products are launched. This lack of regulation has enabled companies like Character.AI to push the limits of human intimacy and relationships in the digital realm, raising pressing concerns about data privacy and long-term impacts.

The involvement of minors on platforms like Character.AI raises critical questions about their ability to comprehend and consent to the terms of service. Garcia and others have highlighted the intimate nature of companion bots and the potential risks they pose to younger users. “He thinks this is a safe bot, but as a child, he doesn’t realize that his data is being used to enhance a large language model.”

Satwick Dutta, a PhD candidate and engineer at the University of Texas at Dallas, has been advocating for stronger safeguards for minors’ data under COPPA. He is developing machine learning tools to help parents and educators identify childhood speech issues early on, work that requires his team to carefully manage and anonymize minors’ voice data.

“Reading the headline in the New York Times a few months back left me feeling disheartened,” Dutta, who believes in AI’s positive potential, shared. “We need to implement safeguards to protect not only children but everyone.”

He underscored the dangers of releasing a poorly defined product with an unclear purpose, especially given the sensitive data involved, comparing Character.AI’s treatment of users to experimenting on lab rats.

“It felt like they were treating us like guinea pigs,” Dutta remarked, “without considering the consequences.”

“And now the company claims, ‘we’ll implement more safeguards.’ They should have considered these measures before launching the product!” the researcher expressed, clearly frustrated. “Seriously? Are you kidding me?”

***

Garcia noticed her son becoming increasingly withdrawn.

He lost interest in basketball, a sport he had once loved; at 6’3”, Sewell was tall for his age and dreamed of playing at the Division I level. His grades started to slip, and he began to clash with teachers. He preferred spending time alone in his room, leading Garcia to suspect social media as a factor, though she found no evidence of it. She took Sewell to multiple therapists, but none mentioned the influence of AI. His phone had parental controls, yet Character.AI was rated as appropriate for teenagers in both the Google and Apple app stores.

“This can be confusing for parents… a parent might think, ‘Okay, this is an app for 12-year-olds. Someone must have vetted it,’” Garcia noted. “Surely there was a process in place to ensure it was safe for my child.”

Character.AI states on its About page that it is still “working to make our technology available to billions of people,” leaving users to navigate their own interactions with the platform’s bots while only responding when problems arise.

This approach resembles a trial-and-error method for introducing a new product to the market — essentially, an experiment.

“In my view, these two individuals,” Garcia said, referring to Shazeer and De Freitas, “should not be allowed to continue developing products for people, especially children. They’ve proven they are unworthy of that responsibility. They shouldn’t be creating products for our kids.”

“I’m curious to see how Google will respond,” the mother of three mused, “or if they’ll simply overlook the harm caused and welcome them back as visionaries.”

More on Character.AI: Character.AI Claims Significant Improvements to Safeguard Underage Users, Yet Suggests Conversations With AI Versions of School Shooters via Email