Activist sues Google over AI-generated false claims in second tech lawsuit
Conservative activist Robby Starbuck files $15 million defamation suit against Google over AI hallucinations, marking his second case against a major tech firm.

Conservative activist Robby Starbuck filed a $15 million defamation lawsuit against Google on October 22, 2025, alleging the company's artificial intelligence tools falsely linked him to sexual assault allegations and white nationalist Richard Spencer. The suit, filed in Delaware Superior Court, represents Starbuck's second legal action against a major technology company over AI-generated false information.
The Wall Street Journal broke the news on October 22, reporting that Starbuck claims Google's AI search tools produced defamatory content about him. Google spokesperson José Castañeda responded the same day through the company's official social media account, stating the issues "mostly deal with claims related to hallucinations in Bard that we addressed in 2023."
"We know LLMs aren't perfect, and hallucinations are a known issue, which we disclose and work hard to minimize," Castañeda wrote. The statement, posted at 8:02 PM on October 22, 2025, acknowledged the technical challenges inherent in large language models while defending Google's approach to addressing them.
The complaint follows a similar pattern established earlier this year. In April 2025, Starbuck sued Meta, claiming its AI falsely insisted he participated in the January 6th attack on the Capitol and had been arrested for a misdemeanor. Meta settled that case by hiring Starbuck as an advisor to combat "ideological and political bias" in its chatbot. The exact terms of the Meta settlement remain undisclosed.
Google's defense strategy centers on the technical realities of AI systems. "But it's also true that if you're creative enough, you can prompt a chatbot to say something misleading," Castañeda stated in the October 22 response. The company also pointed to an independent study that it says shows Google has "the least biased LLM" among competitors.
The legal landscape for AI defamation remains largely uncharted. No court in the United States has awarded damages in a defamation suit involving an AI chatbot, according to The Wall Street Journal. Conservative radio host Mark Walters sued OpenAI in 2023, claiming ChatGPT defamed him by linking him to fraud and embezzlement accusations. The court ruled in favor of OpenAI, determining Walters failed to prove "actual malice."
Starbuck has built a public profile through online campaigns targeting corporate diversity initiatives. His social media presence focuses on pressuring companies to modify or eliminate diversity, equity, and inclusion programs. The activist's legal strategy appears aimed at securing influence within technology companies rather than solely pursuing financial compensation.
Google attempted to resolve the matter before litigation. "We did try to work with the complainant's lawyers to address their concerns," Castañeda noted in the October 22 statement. The company indicated it would "review the complaint" once formally received.
The timing coincides with growing scrutiny of how AI systems handle personal information and generate content. Penske Media Corporation filed a 101-page federal antitrust lawsuit against Google on September 12, 2025, alleging the search giant systematically coerces publishers into providing content for AI systems without compensation. That complaint examines Bard, Gemini, Search Generative Experience, AI Overviews, and AI Mode—the same suite of products implicated in Starbuck's defamation claims.
Technical analysis of AI hallucinations published by OpenAI researchers on September 4, 2025, reveals fundamental statistical causes behind false but convincing AI-generated information. The research demonstrates that language models hallucinate because they function like students taking exams—rewarded for guessing when uncertain rather than admitting ignorance. Even with perfect training data, current optimization methods produce errors due to inherent statistical limitations.
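The incentive can be stated as simple expected-value arithmetic. The sketch below is illustrative only, built on an assumed 0/1 grading scheme rather than anything taken from the paper: if abstaining earns zero and a correct answer earns one point, a model maximizes its expected score by guessing whenever it has any confidence at all.

```python
# Toy illustration of the exam-taking incentive described above.
# Assumption (not from the OpenAI paper): grading awards 1 point for a
# correct answer and 0 points for a wrong answer or for abstaining.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score on one question under 0/1 accuracy grading."""
    if abstain:
        return 0.0        # admitting uncertainty earns nothing
    return p_correct      # a guess earns 1 with probability p_correct

# Even at 30% confidence, guessing beats abstaining on average,
# so optimization pressure favors confident fabrication.
print(expected_score(0.30, abstain=False))  # 0.3
print(expected_score(0.30, abstain=True))   # 0.0
```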
The complaint's focus on Bard carries particular significance. Google's response attributes most of the claims to Bard hallucinations the company says it addressed in 2023. This timeline suggests the alleged defamatory content originated from an earlier iteration of Google's AI systems rather than current implementations.
Starbuck's public persona centers on opposing what he characterizes as politically biased corporate policies. He has targeted major corporations with social media campaigns designed to pressure them into abandoning or modifying diversity and inclusion initiatives.
The Delaware Superior Court filing seeks $15 million in damages. However, the Meta precedent suggests Starbuck may prioritize securing an advisory position at Google over monetary compensation. That settlement granted him influence over Meta's AI development processes, particularly regarding alleged ideological and political bias.
Multiple technology companies face mounting legal pressure over AI systems. A federal jury in San Francisco delivered a $425.7 million verdict against Google on September 3, 2025, finding the tech giant violated privacy rights of nearly 100 million users who disabled data tracking. While that case involved data collection practices rather than AI-generated content, it demonstrates courts' willingness to impose substantial penalties for technology companies' handling of user information.
Google's statement emphasized ongoing efforts to prevent bias in AI systems. "We work hard to make sure our LLMs aren't biased. In fact, an independent study shows we have the least biased LLM, and we'll continue to prioritize this important work," according to the October 22 response. The company did not identify which independent study it referenced.
The defamation claim arrives as legal frameworks struggle to address AI-generated misinformation. An Arizona federal court issued extensive sanctions against an attorney on August 14, 2025, after finding her brief contained multiple AI-generated citations to non-existent cases. The sanctions included revocation of pro hac vice status and mandatory notification to state bar authorities, demonstrating courts' increasing attention to AI hallucination issues.
Hallucinations represent a well-documented challenge across all large language model implementations. The phenomenon occurs when AI systems generate plausible-sounding but factually incorrect information. Technical explanations describe this as inherent to how language models predict text based on statistical patterns rather than factual verification.
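A minimal sketch makes that mechanism concrete. The token probabilities below are invented for illustration and do not come from any real model; the point is that greedy decoding selects the statistically likeliest continuation, and nothing in that step checks the resulting claim against a source of truth.

```python
# Hypothetical next-token probabilities for the prompt below.
# Values are invented for illustration, not drawn from a real model.
next_token_probs = {
    "Sydney": 0.48,    # common in text about Australia, but wrong here
    "Canberra": 0.41,  # the factually correct answer
    "Melbourne": 0.11,
}

prompt = "The capital of Australia is"

# Greedy decoding: pick the likeliest token. No factual verification
# occurs, so a plausible-sounding falsehood can win.
prediction = max(next_token_probs, key=next_token_probs.get)
print(f"{prompt} {prediction}.")  # -> "The capital of Australia is Sydney."
```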
Publishers have demanded greater control over how AI systems use their content. A tense exchange between a content creator and Google VP of Product Robby Stein on October 10, 2025, exposed critical gaps in how publishers can control whether their content appears in AI-powered search results. Travel website founder Nate Hake challenged Stein about plans to give publishers specific opt-out controls for AI features while maintaining presence in traditional search.
Legal precedent for AI defamation remains sparse. Courts must balance defamation law principles developed for human publishers against the technical realities of AI systems that generate content through statistical prediction rather than intentional communication. The actual malice standard established in New York Times Co. v. Sullivan requires proving defendants knew information was false or acted with reckless disregard for truth—a challenging threshold when applied to algorithmic systems.
Starbuck's lawsuit strategy mirrors his approach with Meta. Rather than pursuing litigation through trial, the activist appears focused on leveraging legal pressure to secure advisory roles within technology companies. This approach grants influence over AI development processes while avoiding the uncertainty and cost of extended court battles.
The case tests whether technology companies can be held liable for false information generated by AI systems marketed as providing accurate answers. Google's defense emphasizes user manipulation possibilities—arguing that determined users can prompt chatbots to produce misleading content regardless of safeguards.
Federal courts have begun addressing AI systems' role in information accuracy. A Florida magistrate judge ordered 10 hours of community service from former Bang Energy CEO John H. Owoc on August 14, 2025, after he acknowledged AI generated eleven fake legal citations in his court filings. These cases demonstrate judicial systems grappling with accountability when AI produces false information.
The Delaware Superior Court will need to determine whether existing defamation frameworks adequately address AI-generated content. Courts in other jurisdictions have found plaintiffs failed to meet the actual malice standard when suing AI companies, but each case presents unique factual circumstances.
Starbuck's pressure campaigns typically follow a consistent pattern: publicizing corporate policies he opposes, then mobilizing followers to contact the companies and demand changes.
Google's October 22 response noted the company attempted working with Starbuck's lawyers before litigation. The pre-suit negotiations apparently failed to reach resolution, leading to the Delaware filing. Neither party has disclosed what settlement terms, if any, were discussed during those conversations.
The complaint's timing coincides with broader debates about AI training on copyrighted content. Penske Media's September 12, 2025, lawsuit alleged Google possesses monopoly power with 89.2% overall market share in general search services, rising to 94.9% on mobile devices. That dominance creates what Penske characterized as a "monopsony" position in which Google controls publisher access to search referral traffic.
The $15 million damages figure in Starbuck's complaint represents a substantial claim but falls below other recent technology litigation settlements. Meta agreed to pay Texas $1.4 billion in July 2024 for unlawfully collecting and using facial recognition data, the largest settlement ever obtained from an action brought by a single state.
Technology companies face increasing scrutiny over AI systems' accuracy and potential harms. Forty-four state Attorneys General signed a formal letter dated August 25, 2025, addressed to 13 major AI companies, demanding enhanced protection of children from predatory AI products. The bipartisan coalition specifically targeted Meta, Anthropic, Apple, Chai AI, Character Technologies Inc., Google, Luka Inc., Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika, and xAI.
Google's statement defending its AI systems emphasized both technical limitations and comparative advantages. The company acknowledged hallucinations as "a known issue" while asserting it has "the least biased LLM" according to unspecified independent research. This dual messaging attempts to manage expectations about AI capabilities while defending Google's competitive position.
The case's resolution will likely influence how other technology companies approach similar disputes. If Starbuck secures an advisory position at Google similar to his Meta arrangement, it could establish a precedent for activists leveraging defamation threats to gain influence over AI development. Alternatively, if Google successfully defends the suit, it might discourage future plaintiffs from pursuing AI defamation claims.
Delaware Superior Court provides the venue for this dispute. Delaware courts frequently handle complex corporate litigation due to the state's business-friendly legal framework and specialized judiciary. However, AI defamation represents relatively novel territory even for Delaware's experienced judges.
The marketing community monitors these developments closely, as AI-generated content increasingly appears in search results and advertising contexts. Brand safety concerns have expanded to encompass risks that advertisements might appear alongside inappropriate AI-generated content. Third-party verification companies including Adloox, DoubleVerify, and Scope3 developed specialized tools to help advertisers monitor brand exposure on platforms using AI systems.
Starbuck's complaint follows growing awareness of AI systems' potential to generate false personal information. Unlike factual errors about public events or general knowledge, alleged defamation involving specific individuals raises distinct legal and ethical concerns. Courts must weigh free expression principles against individuals' reputational interests when AI systems produce false claims about named persons.
Google's business model depends heavily on users trusting search results and AI-generated answers. Defamation lawsuits threaten this trust by highlighting instances where AI systems produced false information. However, the company's defense emphasizes that determined users can manipulate any AI system into generating misleading content through carefully crafted prompts.
The complaint's outcome remains uncertain. What little precedent exists favors technology companies, with courts finding plaintiffs failed to meet the actual malice standard. However, each case presents unique facts, and judges may develop new legal frameworks as AI systems become more sophisticated and widely deployed.
Timeline
- April 2025: Robby Starbuck sues Meta over AI-generated false claims about January 6th participation; Meta settles by hiring him as advisor
- August 14, 2025: Arizona federal court sanctions attorney for AI-generated false legal citations
- August 25, 2025: 44 state Attorneys General demand AI companies enhance child protection measures
- September 3, 2025: Federal jury awards $425.7 million against Google in privacy violation case
- September 4, 2025: OpenAI researchers publish findings explaining statistical causes of AI hallucinations
- September 12, 2025: Penske Media files 101-page antitrust lawsuit against Google over AI content practices
- October 10, 2025: Publisher confronts Google VP over lack of AI search opt-out controls
- October 22, 2025: Robby Starbuck files $15 million defamation lawsuit against Google in Delaware Superior Court
- October 22, 2025: Google responds that claims mostly involve Bard hallucinations addressed in 2023
Summary
Who: Conservative activist Robby Starbuck filed a defamation lawsuit against Google LLC and its parent company Alphabet. Google spokesperson José Castañeda responded on behalf of the company. This represents Starbuck's second lawsuit against a major technology company over AI-generated false information.
What: The lawsuit alleges Google's artificial intelligence tools—specifically Bard—falsely linked Starbuck to sexual assault allegations and white nationalist Richard Spencer. Starbuck seeks $15 million in damages. Google defended its AI systems by noting hallucinations are a known issue in large language models and stating the company has the least biased LLM according to an independent study.
When: The lawsuit was filed on October 22, 2025, in Delaware Superior Court. The Wall Street Journal broke the news on October 22, with Google responding the same day at 8:02 PM. The alleged defamatory content originated from Bard, with Google stating the claims involve hallucinations the company addressed in 2023.
Where: The case was filed in Delaware Superior Court. The alleged AI-generated false information appeared in Google's search products and AI tools accessible to users globally. Starbuck's previous lawsuit against Meta resulted in an advisory position at that company following an April 2025 settlement.
Why: This matters for the marketing community because it demonstrates how AI hallucinations can generate false personal information with legal consequences. The case tests whether technology companies can be held liable for defamatory content produced by AI systems. No U.S. court has yet awarded damages in an AI chatbot defamation case. The outcome could influence how marketers use AI-generated content and how platforms manage accuracy in AI search features that increasingly appear alongside advertisements. Brand safety concerns now extend to whether AI systems produce false or defamatory content that could appear near advertising placements.