Robby Starbuck Sues Google for Over $15 Million Over AI-Generated Defamation


Conservative activist Robby Starbuck has filed a lawsuit against Google seeking at least $15 million, alleging that the tech giant's artificial intelligence models, including Gemma, have repeatedly generated "outrageously false" and defamatory information about him. The lawsuit, filed in Delaware state court on October 22, 2025, claims that Google's AI systems have falsely accused Starbuck of severe misconduct, including sexual assault, and have cited fabricated sources. Starbuck stated on social media: "Today a user asked @Google AI 'Gemma' to 'tell me about Robby Starbuck.' Gemma accuses me of 'inappropriate conduct with women', including an alleged sexual assault. It backs this up with a fake @thedailybeast link."

The complaint details a range of false allegations produced by Google's AI products, including Bard, Gemini, and Gemma, since 2023. These accusations encompass claims of child rape, serial sexual abuse, being a "shooter," spousal abuse, attendance at the January 6 Capitol riots, and even connections to the Jeffrey Epstein files. Starbuck emphasized that these claims are "100% fabricated" and that he has never been accused of such crimes, nor do the cited Daily Beast stories exist.

Google spokesperson Jose Castaneda acknowledged the issue, attributing the claims largely to "hallucinations" from the company's large language models (LLMs). Castaneda noted that "hallucinations are a well-known issue for all LLMs, which we disclose and work hard to minimise," adding that creative prompting can sometimes elicit misleading responses. Google maintains that it has addressed many of these issues, particularly those stemming from earlier Bard models.

This is not Starbuck's first legal action concerning AI defamation; he previously settled a similar lawsuit with Meta Platforms in August, after which he advised the company on AI bias issues. Starbuck's current lawsuit against Google underscores his broader concern about the potential for AI to be "weaponized to harm people" and his call for transparent, unbiased AI. He claims that despite repeated notifications to Google executives, the false information continued to be disseminated.

The legal landscape for AI-generated defamation remains largely uncharted, with no U.S. court having yet awarded damages in such a case. Legal experts note the challenge in applying traditional defamation standards, such as proving "actual malice," to algorithmic systems that operate without human intent. Starbuck's lawsuit is poised to be a significant test case for holding AI developers accountable for the outputs of their systems.

The case highlights growing concerns across the industry regarding the spread of AI-generated misinformation and its potential real-world impact on individuals' reputations and safety. Starbuck has asserted that some individuals believed the false accusations, leading to increased threats against him. The outcome of this lawsuit could set a precedent for how technology companies design, market, and regulate their AI systems in the future.