Minnesota solar company sues Google over false AI-generated claims

Wolf River Electric filed a defamation lawsuit in March 2025 after Google's Gemini produced fabricated statements about a Minnesota Attorney General lawsuit that never existed.

Golden retriever holding worn Google logo toy representing AI system reliability concerns in defamation lawsuit.

Solar contractor Wolf River Electric filed a defamation lawsuit against Google on March 11, 2025, alleging the tech giant's artificial intelligence systems generated and published false claims that the Minnesota company faced a lawsuit from the state attorney general. The case marks one of at least six AI defamation lawsuits filed in the United States over the past two years.

Sales representatives for the Isanti-based solar contractor noticed an uptick in canceled contracts in late 2024. According to the complaint filed in Ramsey County District Court, former customers told Wolf River they had backed out after discovering through Google searches that the company settled a lawsuit with the Minnesota Attorney General over deceptive sales practices. Wolf River had never been sued by the government.

When company executives checked Google themselves on September 2, 2024, they found the fabricated claims displayed prominently in search results generated by Gemini, Google's artificial intelligence technology. "According to recent news reports, Wolf River Electric is currently facing a lawsuit from the Minnesota Attorney General due to allegations of deceptive sales practices regarding their solar panel installations," the AI-generated text stated.

The complaint documents how Google's AI produced extensive false claims about a nonexistent lawsuit. The system alleged Wolf River engaged in "misleading customers about cost savings, using high-pressure tactics, and tricking homeowners into signing binding contracts with hidden fees." Google cited four sources to support its claims: articles from the Minnesota Star Tribune and KROC News, a Minnesota Attorney General press release from April 2022, and consumer reviews from Angie's List and the Better Business Bureau.

None of those sources mentioned Wolf River Electric. The Attorney General's April 2022 announcement named 10 solar companies and lenders in actual litigation—Brio Energy, Bello Solar Energy, Avolta Power, Sunny Solar Utah, GoodLeap, Sunlight Financial, Corning Credit Union Services, and others. Wolf River appeared nowhere in the document. According to the complaint, Google's AI fabricated the entire narrative by conflating unrelated legal actions against different companies with Wolf River's business operations.

Justin Nielsen founded Wolf River in 2014 with three friends. The company grew into Minnesota's largest solar contractor before Google's AI-generated claims began appearing in search results. "We put a lot of time and energy into building up a good name," Nielsen told The New York Times. "When customers see a red flag like that, it's damn near impossible to win them back."

Google's search algorithms compounded the problem through autocomplete suggestions. When users typed "Wolf River Electric" into the search box, Google automatically suggested searches for "Wolf River Electric lawsuit," "Wolf River Electric lawsuit reddit," "Wolf River Electric lawsuit update Minnesota," and "Wolf River Electric lawsuit Minnesota settlement." These suggestions appeared despite no actual lawsuit existing.

The AI-generated false information proliferated across multiple search variations. When users searched "Wolf River Electric lawsuit Minnesota Settlement," Google claimed the Minnesota Attorney General's Office "has sued solar lending companies, including Wolf River Electric, over hidden fees and other practices" and that the office "has obtained consent judgments against solar installers and lenders, returning over $300,000 to consumers." Wolf River was never party to any such judgment.

Google's systems produced different versions of the false narrative depending on search terms. One AI-generated response claimed "the Minnesota Attorney General filed a lawsuit against Wolf River Electric in 2022" for "deceptive sales practices including misrepresenting cost savings, using high-pressure tactics, and tricking consumers into signing binding contracts, leading to many homeowners facing significant financial burdens from poorly installed solar panels." Another stated Attorney General Keith Ellison filed suit against Wolf River in March 2024 for "deceptive business practices" and violations of "Minnesota laws against deceptive trade practices and lending."

The complaint documents specific business losses from the false claims. On March 3, 2025, a customer identified as contract KSKAU-MYAVG-KA658-5KUFQ terminated a $39,680 solar system purchase, referencing lawsuits that appeared when he searched Google for Wolf River. A March 4 customer met with a Wolf River sales representative but refused to proceed after online research showed the company being sued by the Attorney General. On March 5, a customer terminated a $150,000 contract for a 44.2 kilowatt solar system after reading Google's claims about the company "misleading customers about cost savings, using high-pressure tactics, and tricking homeowners into signing binding contracts with hidden fees."

Vladimir Marchenko, Wolf River's chief executive, said competitors brought up the fabricated Attorney General claims in consultations with potential clients to dissuade them from using Wolf River. The company documented posts on Reddit citing the false Google results, with one user calling Wolf River a "possible devil corp." According to correspondence included in the lawsuit, Wolf River claimed it lost $25 million in sales during 2024 and seeks damages exceeding $110 million.

The case presents distinct legal questions about AI-generated defamation. Traditional libel suits focus on proving a human publisher acted with negligence or "reckless disregard" for truth. AI systems generate text through statistical prediction rather than intentional decision-making, creating challenges for establishing fault under existing defamation frameworks.

Eugene Volokh, a First Amendment scholar at the University of California, Los Angeles, said AI models can publish damaging assertions. "The question is who is responsible for that?" Volokh dedicated an entire 2023 issue of his publication, Journal of Free Speech Law, to AI defamation questions.

Google acknowledged in a statement that "with any new technology, mistakes can happen," noting the company "acted quickly to fix it" after learning about the problem. However, as recently as November 11, 2025, a Google search of "wolf river electric complaint" produced results stating "the company is also facing a lawsuit from the Minnesota attorney general related to its sales practices."

The Wolf River case differs from earlier AI defamation suits that courts dismissed. A Georgia court rejected radio host Mark Walters' defamation claim against OpenAI in May 2025 after determining that the journalist who received the false information from ChatGPT did not believe it. Judge Tracie Cason ruled that "if the individual who reads a challenged statement does not subjectively believe it to be factual, then the statement is not defamatory."

Wolf River documented multiple customers who did believe Google's AI-generated claims and acted on that belief by canceling contracts. The company also faces additional harm from instructions Google's AI provided to users. Several AI-generated responses concluded by advising people to "file a complaint with the Minnesota Attorney General's office" against Wolf River. According to the complaint, Wolf River received notification that several customers filed such complaints with the Attorney General after following Google's recommendations.

Conservative activist Robby Starbuck settled an AI defamation case against Meta in August 2025 after the company's Llama chatbot generated false claims that he participated in the January 6, 2021 Capitol riot and had ties to QAnon conspiracy theories. Meta brought Starbuck on as an adviser focused on policing Meta's AI following the settlement. Starbuck filed a separate $15 million defamation lawsuit against Google in October 2025.

Irish broadcaster Dave Fanning sued Microsoft and an Indian news outlet in Irish court after an AI-generated article falsely claimed he faced trial for sexual misconduct. The article, featuring Fanning's photograph, appeared on Microsoft's MSN portal after an Indian publication used an AI chatbot to produce the content. "What Microsoft did was traumatizing, and the trauma turned to anger," Fanning said.

Publishers have raised broader concerns about Google's AI systems misrepresenting their content. Penske Media Corporation filed a federal antitrust lawsuit against Google on September 12, 2025, alleging the search giant coerces publishers into providing content for AI training while reducing website traffic through AI-generated summaries. Research from Ahrefs found AI Overviews reduced organic clicks by 34.5% for top-ranking websites when comparing March 2024 with March 2025 results.

Google displays a small disclaimer at the bottom of some AI-generated content stating "Generative AI is experimental. For legal advice, consult a professional." Users can only see this notice by clicking a "Show more" option. The complaint argues this concealed disclaimer "says nothing about the publications being untrue, false, utterly fabricated" and provides no guidance on whether users should trust Google's statements.

Nina Brown, a Syracuse University professor specializing in media law, said she expected few AI defamation cases would reach trial. A verdict finding companies liable for AI output could trigger extensive litigation from others discovering falsehoods about themselves. "I suspect that if there is an A.I. defamation lawsuit where the defendant is vulnerable, it's going to go away — the companies will settle that," Brown said.

Legal experts identified the Wolf River case as particularly strong because the company documented specific financial losses from the false claims. Wolf River cited $388,000 in terminated contracts and provided customer names, contract numbers, and correspondence showing direct causation between Google's AI-generated claims and contract cancellations.

The company also benefits from not being categorized as a public figure. Private plaintiffs in defamation cases must prove only negligence rather than "actual malice"—the higher standard requiring proof that defendants knew information was false or acted with reckless disregard for truth. The actual malice standard established in New York Times Co. v. Sullivan applies to public figures but presents challenges when applied to algorithmic systems that generate content through statistical processes.

Wolf River's complaint includes five counts: defamation, defamation per se, defamation by implication, violation of the Minnesota Deceptive Trade Practices Act, and declaratory relief. The case, initially filed in Ramsey County District Court, was removed to federal court, where Judge John M. Braun is weighing whether to keep the matter in federal jurisdiction or return it to state court.

Nicholas Kasprowicz, Wolf River's general counsel, assured potential customers that the claims were fabricated. Despite these assurances, customers terminated relationships because of information found through Google searches. A non-profit organization informed Wolf River on March 11, 2025, that it was "pulling the plug" on their business relationship because of "several lawsuits in the last year" with the "Attorney General's Office," terminating $174,044.12 in solar and lighting projects.

The marketing community faces increasing challenges as AI-generated content appears in search results without clear labeling or verification mechanisms. Zero-click searches increased from 56% to 69% of Google queries since AI Overviews launched in May 2024. Research explaining why language models hallucinate reveals fundamental statistical causes—AI systems function like students rewarded for guessing when uncertain rather than admitting ignorance.

Google's AI systems continue facing scrutiny over hallucinations and false information. The company implemented technical improvements including the Gemini 1.5 Flash model's expanded context window and features aimed at reducing AI hallucinations by displaying links to related content for fact-seeking prompts. However, these measures operate alongside persistent issues with AI systems generating convincing but false information.

Marchenko immigrated to Minnesota from Ukraine as a child and played junior hockey with Nielsen before co-founding Wolf River. "There's no Plan B for us," Marchenko said. "We started this from the ground up. We have our reputation, and that's it."

The case, on hold pending the federal court's jurisdictional determination, represents a test of whether technology companies can be held liable for defamatory content their AI systems produce and continue distributing even after notification of errors.

Timeline

  • April 26, 2022: Minnesota Attorney General sues four solar companies—none named Wolf River Electric
  • September 2, 2024: Wolf River executives discover false AI-generated claims on Google
  • Late 2024: Wolf River experiences uptick in canceled contracts
  • March 3, 2025: Customer terminates $39,680 contract after Google search
  • March 4, 2025: Prospective customer refuses business after researching company
  • March 5, 2025: Customer cancels $150,000 contract citing Google's false claims
  • March 11, 2025: Wolf River files defamation lawsuit in Ramsey County District Court
  • March 11, 2025: Non-profit terminates $174,044 in projects citing Attorney General lawsuits
  • March 12, 2025: Complaint formally served on Google
  • June 12, 2025: Case removed to federal court
  • October 22, 2025: Robby Starbuck files separate AI defamation suit against Google
  • November 11, 2025: False claims still appearing in some Google search results

Summary

Who: LTL LED, LLC doing business as Wolf River Electric, a Minnesota solar contractor founded in 2014, sued Google LLC over AI-generated false claims. Executives Justin Nielsen, Vladimir Marchenko, and Luka Bozek lead the company, which employed sales representatives across Minnesota before the incident.

What: Google's Gemini artificial intelligence system generated and published false statements claiming Wolf River faced a lawsuit from the Minnesota Attorney General for deceptive sales practices, hidden fees, high-pressure tactics, and misleading customers. The AI fabricated these claims by misattributing information from actual Attorney General cases against unrelated companies. Google's autocomplete suggestions reinforced the false narrative by prompting users to search for nonexistent lawsuits and settlements involving Wolf River.

When: Wolf River employees discovered the false AI-generated claims on September 2, 2024. Contract cancellations began accumulating through late 2024, with documented losses occurring on March 3, 4, 5, and 11, 2025. The company filed its defamation lawsuit on March 11, 2025, in Ramsey County District Court. Google removed the case to federal court on June 12, 2025, where proceedings remain on hold pending jurisdictional determination.

Where: The false claims appeared in Google search results accessible globally but primarily affected Wolf River's Minnesota business operations. The company maintains its headquarters at 101 Isanti Parkway Northeast Suite G in Isanti, Minnesota. The lawsuit initially filed in Ramsey County District Court now proceeds in U.S. District Court for the District of Minnesota. Customers across Minnesota encountered the false information when searching for Wolf River Electric.

Why: This matters for the marketing community because AI-generated search content increasingly influences consumer decisions without human editorial oversight or fact-checking. The case tests whether technology companies bear legal responsibility for defamatory content their AI systems produce, particularly when systems continue distributing false information after receiving notice of errors. The outcome could establish precedents affecting how brands protect their reputations against AI hallucinations, how search engines verify information before displaying AI-generated summaries, and whether businesses maintain legal recourse when algorithms damage their market position through fabricated claims.