Two juries have handed Meta significant defeats in cases involving harm to children and teens. The company responded with a public statement from VP of Communications Andy Stone that critics and members of the U.S. Senate have described as evasive, legally dubious, and counter-productive.

The verdicts

On March 24, 2026, a New Mexico jury ordered Meta to pay $375 million for misleading consumers about the safety of its platforms and endangering children. The next day, a California jury ordered Meta to pay $4.2 million and Google $1.8 million for designing social media platforms harmful to young people. The back-to-back verdicts, arriving within 24 hours, underscored the scale of civil litigation building against the company.

Taken alone, the financial penalties are modest relative to Meta's resources. According to Jonathan Bellack, a former trust-and-safety executive writing on his Platformocracy newsletter on May 8, 2026, Meta could "pay the damages of fifty New Mexico-sized cases a quarter just out of their profits, without dipping into their $80 billion bank account." The real exposure lies elsewhere. According to Bellack, over 2,400 lawsuits against Meta, TikTok, and other social media platforms are currently pending, and the two March verdicts have given those cases precedent-backed momentum. A company that can absorb individual verdicts of this size without difficulty cannot absorb thousands of them simultaneously.

What Meta's communications VP said - and what critics say it means

When Axios broke the story in early April 2026 that Meta had been removing advertisements posted by class action lawyers seeking new plaintiffs, Meta VP of Communications Andy Stone issued the following statement: "We're actively defending ourselves against these lawsuits and are removing ads that attempt to recruit plaintiffs for them. We will not allow trial lawyers to profit from our platforms while simultaneously claiming they are harmful."

Bellack dissected the statement at length. His central objection is structural: class action lawyers who run advertisements on Meta's platforms are paying Meta for the privilege. "Meta is the only one profiting here, not the lawyers! The lawyers are giving Meta money to run their ads," Bellack wrote. The lawyers operate on contingency, meaning they only receive payment if they win. If Meta is confident it will prevail in the underlying litigation, the argument that the lawyers will profit from the platforms is, in Bellack's framing, internally incoherent.

The decision to pull the ads was itself legally complicated. Axios reported that Meta's apparent justification was a clause in its own Terms of Service allowing the company to remove content to "avoid or mitigate misuse of our services or adverse legal or regulatory impacts to Meta." According to Bellack, this represents a blurring of the line between content that harms users and content that creates legal risk for Meta. That distinction matters enormously in trust-and-safety work, where the credibility of enforcement depends on users believing the company is acting in good faith rather than in corporate self-interest.

Meta's decision to suppress plaintiff recruitment advertising drew a bipartisan response in the U.S. Senate. According to Bellack, senators Marsha Blackburn and Amy Klobuchar jointly wrote to criticize the ads takedown, repeating in their letter a series of recent revelations about what they described as ill-gotten gains and bad-faith enforcement. The letter, signed by a Republican and a Democrat, illustrated how the issue has cut across conventional political lines.

The history of the Shadow Censorship Clause

The Terms of Service clause Meta invoked to justify removing the plaintiff recruitment ads has a specific origin. According to Bellack's research, using the Applied Social Media Lab's Transparency Hub, Meta first established this provision between October 2020 and April 2021. The Federal Trade Commission filed its antitrust case against Meta in December 2020, during that same window. Bellack draws the connection explicitly: when the government moved in, Meta quietly updated its Terms of Service to give itself broader removal powers. That timing, and the fact that the clause is not part of Meta's publicly visible advertising standards, is the basis for Bellack's description of it as a "shadow censorship system."

The practical effect is that Meta has a contractual mechanism to remove content that creates legal exposure, independent of whether that content violates any of the community guidelines or advertising policies visible to users. This creates what Bellack calls a situation where "people can never be sure whether the company is acting in good faith or selfishly," which he argues "discredits the entire notion of trust and safety."

Why the PR choice matters

Bellack does not focus only on the substance of the statement - he focuses on why a senior communications executive at one of the world's most profitable companies made the choices he did. The statement carried no acknowledgment of child safety concerns, no reflection on the free speech tensions involved in removing legal advertising, and no citation of the specific policy the lawyers were alleged to have violated. It framed the situation entirely in terms of Meta's adversarial posture toward the plaintiffs.

According to Bellack, Stone could have served the company's interests better while saying nothing substantively different: "He could have restated Meta's commitment to child safety. Reflected on the free speech implications of the issue. Cited the policy that the lawyers violated. Issued a bland statement of interest in dialogue versus confrontation." None of that happened. Instead, in Bellack's words, the statement Meta did issue "earned from it a bipartisan letter from senators."

Bellack's reading is that Stone is communicating to internal audiences - Meta executives who are in litigation mode, focused on avoiding large financial payouts, and possibly contemptuous of the jury verdicts. A message calibrated for that audience may be entirely miscalibrated for the press, regulators, and the general public.

The child harm verdicts and the plaintiff advertising controversy sit within a substantially larger pattern of legal and regulatory action against Meta. PPC Land has tracked this pattern extensively.

On April 10, 2026, the Massachusetts Supreme Judicial Court ruled that Section 230 of the Communications Decency Act does not protect Meta from the Commonwealth's claims that Instagram was deliberately designed to addict children. The court affirmed a lower court's denial of Meta's motion to dismiss. That ruling came on top of a January 6, 2026 oral argument at the Ninth Circuit in the federal multidistrict litigation raising similar child addiction claims. Both proceedings are now advancing toward possible trial.

The California court's $50 million injunction against Meta over Facebook data controls, entered March 3, 2026, added another layer to the legal picture. That case involved allegations that Meta misled consumers about their ability to control who could access their personal data shared via Facebook's developer platform - a distinct set of claims but one rooted in the same pattern of alleged public misrepresentation about safety and transparency.

In Europe, German courts have consistently ruled against Meta's cross-site tracking infrastructure, with four Higher Regional Courts across Germany finding in favor of claimants. Damages have ranged from €250 to €3,000 per plaintiff. The Dresden Higher Regional Court delivered final rulings on February 3, 2026 ordering Meta to pay four users €1,500 each - rulings that Meta cannot appeal to Germany's Federal Court of Justice.

A Madrid court ordered Meta to pay €479 million to Spanish digital publishers in November 2025, on the grounds that unlawful data processing gave Meta a competitive advantage that publishers could not replicate. The European Commission found Meta in preliminary breach of Digital Services Act transparency obligations in October 2025, potentially exposing the company to fines of up to 6% of worldwide annual turnover. And Meta's shareholders settled a seven-year derivative lawsuit for $190 million in November 2025 over board failures connected to the Cambridge Analytica scandal.

The scam advertising front has added yet another dimension. Internal Meta documents revealed by Reuters in November 2025 showed that the company's platforms expose users to an estimated 15 billion "higher risk" scam advertisements per day, and that Meta internally projected approximately 10% of its 2024 revenue - roughly $16 billion - came from advertising promoting scams or banned goods. PPC Land covered that internal document disclosure when it emerged. In April 2026, the Consumer Federation of America filed a class action in the District of Columbia alleging Meta systematically misled users about advertising safety while profiting from fraud. PPC Land reported on that lawsuit.

Why this matters for the advertising industry

For advertisers and the broader marketing community, the significance of these legal developments is not just regulatory - it is structural. Meta generated $58.1 billion in advertising revenue in the fourth quarter of 2025 alone, according to earnings reported in January 2026. The platform's scale means that legal actions touching its advertising practices have downstream effects on campaigns, targeting capabilities, data availability, and the terms under which businesses can reach audiences.

The child harm litigation raises specific questions about platform design. If courts find that features such as infinite scroll, algorithmic amplification, and notification systems were deliberately engineered to increase compulsive use in adolescents, the remedies in those cases could require changes to core platform mechanics. Changes to how Meta's platforms function - whether required by courts, regulators, or reputational pressure - would affect the advertising environment that businesses rely on. Reduced engagement time, changes to recommendation systems, and restrictions on data collection from younger users would all have measurable consequences for campaign performance.

The parallel thread in Bellack's analysis - that Meta's communications function is not operating effectively - also has implications for how the company navigates these pressures. A company that consistently issues defiant or evasive public statements when facing serious legal scrutiny tends to generate more regulatory attention, not less. Bipartisan Senate letters, escalating class action filings, and European enforcement actions all create an environment in which the advertising ecosystem Meta hosts becomes harder to predict and plan around.

The FTC has already ordered Meta and six other technology companies to provide detailed information about their child safety measures as part of a September 2025 inquiry. PPC Land covered the FTC's child safety information orders. The intersection of advertising revenue, children's data, and platform design is now a primary focus of federal regulators, not a secondary concern.

What the March verdicts and the Stone statement together illustrate is a company that has decided to contest every claim rather than negotiate any of them. With over 2,400 lawsuits pending, that litigation strategy carries substantial financial and reputational risk - and, according to Bellack, is being accompanied by communications choices that appear designed to signal resolve internally while doing little to reduce external pressure.

Summary

Who: Meta Platforms, Inc., specifically VP of Communications Andy Stone; plaintiffs in two civil jury cases in New Mexico and California; senators Marsha Blackburn and Amy Klobuchar; Jonathan Bellack, author of the Platformocracy newsletter and former trust-and-safety executive.

What: Two U.S. jury verdicts in March 2026 found Meta liable for conduct harmful to children and teenagers, resulting in combined awards of $379.2 million against the company. Meta responded by removing plaintiff recruitment advertisements from its platforms and issuing a public statement defending that decision. The statement, issued by Andy Stone, described the removal as preventing trial lawyers from profiting from Meta's platforms - an argument that critics, including a bipartisan pair of U.S. senators, rejected as misleading. Analysis published May 8, 2026 by Jonathan Bellack identified the Terms of Service provision Meta used to justify the ad removal as having been inserted between October 2020 and April 2021, coinciding with the FTC's December 2020 antitrust filing against the company.

When: The New Mexico jury verdict was delivered March 24, 2026. The California verdict followed on March 25. Meta removed the plaintiff recruitment ads in early April 2026. The Axios story reporting the removal was published in early April 2026. Bellack's analysis appeared May 8, 2026.

Where: The New Mexico verdict originated in a New Mexico state court. The California verdict came from a California civil jury. The plaintiff recruitment ads were placed on Facebook and Instagram. Meta's Terms of Service modification was enacted within the company's global platform policies. The bipartisan Senate letter was addressed to Meta's leadership. Bellack's analysis was published on Substack under the Platformocracy publication.

Why: The verdicts and the communications response around them matter because they illustrate the scale and trajectory of civil litigation against Meta over child safety. With more than 2,400 pending lawsuits against Meta, TikTok, and other platforms, and with the Massachusetts Supreme Judicial Court having ruled in April 2026 that Section 230 does not bar such claims, the legal exposure facing Meta from child harm litigation is substantial. The decision to remove plaintiff recruitment ads rather than allow the advertising ecosystem to function normally, and to justify that decision with a statement that has drawn bipartisan political criticism, adds to the reputational and regulatory pressure on the company. For the advertising industry, the design changes that may eventually be required by courts or regulators could materially alter the platform mechanics that campaigns depend on.
