Meta filed multiple lawsuits on February 26, 2026 against deceptive advertisers operating across three countries, targeting operators who used celebrity impersonation and cloaking techniques to defraud users on Facebook and Instagram. The legal action, announced by the company through its official newsroom, covers defendants based in Brazil, China, and Vietnam. Separately, cease and desist letters went out to eight former Meta Business Partners accused of selling services to evade the platform's enforcement systems.

The lawsuits arrive at a fraught moment for Meta's relationship with the advertising fraud problem. Internal documents published in November 2025 showed that Meta projected roughly $16 billion in 2024 revenue from advertisements promoting scams and banned goods - approximately 10% of its total annual revenue. Those documents, reviewed by Reuters, indicated the company's platforms were exposing users to an estimated 15 billion higher-risk scam advertisements every day. The February 26 legal actions represent one component of what Meta describes as a multi-layered enforcement strategy.

Celeb-bait operations in Brazil and China

The practice of celeb-bait - using altered or fabricated images of well-known public figures to lend false credibility to fraudulent advertisements - forms the core of four out of the five lawsuits. According to the company announcement, Meta currently operates a protection program covering more than 500,000 celebrities and public figures worldwide whose likenesses are repeatedly targeted by scam operators.

The first Brazilian defendants, Vitor Lourenço de Souza and Milena Luciani Sanchez, according to Meta, used altered images and voices of celebrities to promote fraudulent healthcare products. The second Brazilian case names a corporate network: B&B Suplementos e Cosméticos Ltda. (trading as Brites Corp), Brites Academia de Treinamento Ltda., Daniel de Brites Macieira Cordeiro, and José Victor de Brites Chaves de Araújo. According to the announcement, this group used deepfakes of a prominent physician to advertise healthcare products without regulatory approval, and went further still - selling courses that taught the same deceptive tactics to others.

The China-based defendant, Shenzhen Yunzheng Technology Co., Ltd, operated on a different model. According to Meta, this entity used celeb-bait advertisements to target people in the United States and Japan, among other countries, as part of a broader fraud scheme that lured targets into so-called investment groups. Investment group scams - sometimes called "pig butchering" in law enforcement circles - typically involve sustained social engineering over weeks or months before requesting transfers of funds.

Cloaking and subscription fraud in Vietnam

The fifth lawsuit targets Vietnam-based Lý Văn Lâm for a distinct but equally damaging form of deception: cloaking. According to Meta, this technique impairs the ad review process by showing one version of a webpage's content to the company's automated review systems and a completely different version to actual users. A seemingly legitimate advertisement passes through review, then delivers harmful content to the people who click on it.
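In general terms, the mechanism is simple: the cloaking server inspects each incoming request and decides which page variant to serve. The following Python sketch illustrates the basic pattern; it is not code from the case, and the function name and bot signatures are hypothetical values chosen for illustration (real operators key on many more signals):

```python
# Illustrative sketch of server-side cloaking in general terms.
# Not code from the lawsuit; names and signatures are hypothetical.

# User-agent substrings a crude cloaker might associate with
# ad-review crawlers (illustrative values, not a real crawler list).
REVIEW_BOT_SIGNATURES = ("facebookexternalhit", "adsbot", "crawler")

def select_page(user_agent: str) -> str:
    """Return which page variant a cloaking server would serve."""
    ua = user_agent.lower()
    if any(sig in ua for sig in REVIEW_BOT_SIGNATURES):
        # Automated reviewers see a benign page, so the ad passes review.
        return "benign_landing_page.html"
    # Ordinary visitors are routed to the harmful content.
    return "scam_landing_page.html"
```

In practice, cloakers also discriminate on IP ranges, geolocation, and request timing rather than user-agent strings alone, which is one reason detection increasingly relies on behavioral and AI-based signals instead of static rules.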

Lý Văn Lâm's scheme, according to the announcement, used scam ads offering deeply discounted items from recognizable brands, including Longchamp, in exchange for completing a survey. Users who engaged were redirected to websites requesting credit card information to purchase items they ultimately never received. Beyond that initial deception, their credit cards incurred unauthorized recurring fees - a practice known as subscription fraud.

Maison Longchamp provided a statement that appears in the announcement: "Longchamp has a zero tolerance policy and invests a fair amount of resources in combating illicit activities - such as counterfeiting or fraud using our brand - offline and online. For this fight to be efficient, we need to rely on active cooperation between all stakeholders, including intermediaries. We are happy that Meta takes action and demonstrates such cooperation."

The cloaking technique has a documented history on Meta's platforms. Facebook filed a lawsuit in January 2023 against Basant Gajjar, an Indian national operating under the name LeadCloak, who provided software and services designed to circumvent automated ad review systems. The new Vietnam case suggests the practice has continued to evolve since that earlier enforcement action.

According to the February 26 announcement, Meta's latest tools use AI to analyze cloaking attempts and more quickly reject ads that redirect to harmful websites. The AI-based detection systems also accelerate the platform's response when users report suspected malicious ads.

Technical enforcement measures

Across all the defendants named in the February 26 lawsuits, Meta states it has taken a range of technical enforcement actions. These include suspending payment methods linked to the scam operations, disabling related accounts across Facebook and Instagram, blocking the domain names of websites used for fraud, and sharing information with industry partners so those domains and accounts can be blocked elsewhere.

The cross-industry sharing component connects to Meta's broader intelligence infrastructure. In December 2025, Meta reported removing more than 134 million scam advertisements across its platforms throughout 2025 and disclosed that its Fraud Intelligence Reciprocal Exchange program shares data with more than 50 financial institutions worldwide. The company also participates in the Global Signal Exchange alongside Microsoft and Google, tracking more than 380 million threat signals in real time.

Prior to the February 26 lawsuits, Meta worked with law enforcement in the United Kingdom and Nigeria to help take down a scam center, resulting in seven arrests, according to the announcement. The company did not name the specific scam center or provide additional details about the UK-Nigeria operation.

Cease and desist letters to former Business Partners

Beyond the courtroom actions, Meta issued cease and desist letters to eight former Meta Business Partners. According to the announcement, these entities were offering what the company characterizes as abusive services: phony account unbanning or restoration services, and the rental of access to trusted accounts that helped clients evade enforcement. The letters indicate Meta is prepared to escalate to litigation if the parties fail to comply.

The letters point to a structural vulnerability. Verified Business Partners - companies that receive a degree of platform trust - can potentially exploit that status to offer workarounds to other advertisers facing enforcement. Meta says it is now reviewing its Business Partner ecosystem and actively working to enhance vetting methods for approving these partnerships.

The question of account restoration services has particular resonance for the advertising community. Reports from January 2026 documented how Meta's legitimate support systems repeatedly fail advertisers facing account issues, leaving businesses unable to reach human representatives who can review their cases. That context makes the market for illicit restoration services easier to understand, even as Meta moves to shut them down.

Context: pressure from multiple directions

The February 26 actions come as Meta faces scrutiny over fraud enforcement from regulators and journalists, while also defending against separate allegations. A November 2025 analysis documented Meta's "penalty bid" program, through which the platform charged suspected fraudsters higher advertising rates rather than blocking them outright. A 2024 internal strategy document, according to those reports, showed Meta would only act against suspect advertisers in response to impending regulatory action.

The enforcement team responsible for vetting questionable advertisers was, according to a February 2025 internal document cited in Reuters reporting, constrained to taking actions costing no more than 0.15% of total revenue in the first half of 2025 - approximately $135 million out of $90 billion generated. Meta spokesman Andy Stone disputed that figure represented a hard ceiling, describing it instead as a revenue projection.

An April 2025 internal review of online communities where fraudsters discuss their methods concluded that running scam advertisements on Meta's platforms was easier than on Google's. A May 2025 presentation estimated Meta platforms were involved in one-third of all successful scams in the United States.

The company has previously expanded advertiser verification requirements in markets including Thailand and India as part of efforts to reduce abuse, requiring advertisers who meet specific risk criteria to verify the individuals or organizations that pay for and benefit from advertisements.

Meta's global anti-scam measures announced in February 2025 included the takedown of more than 408,000 accounts engaged in romance scams during 2024, primarily originating from West African countries. The company has now filed more than 60 lawsuits in recent years against those who abuse its platforms with various schemes, including brand impersonation, account takeovers, and bulk messaging, according to the December 2025 Global Anti-Scam Summit announcement.

Why this matters for the advertising industry

For marketing professionals who operate legitimate advertising accounts on Meta platforms, the enforcement actions carry several practical implications. The targeting of account restoration services signals that the unofficial market for circumventing Meta's enforcement - which has grown partly because legitimate support channels have proven unreliable - is now itself a litigation target.

The cloaking lawsuits and the AI tools Meta describes as detecting cloaking attempts suggest the company is investing in pre-approval screening rather than relying solely on post-approval removal. If those tools improve accuracy, legitimate advertisers may experience fewer false positives during review - or may face additional scrutiny if their landing pages show characteristics that trigger cloaking detection.

The celeb-bait protection program, covering more than 500,000 public figures, continues to expand. Brands whose executives, spokespeople, or affiliated public figures have been used in unauthorized ad content may find value in understanding whether those individuals are enrolled in the program and what protections it provides in practice.

Summary

Who: Meta Platforms, along with named defendants including Brazil-based individuals Vitor Lourenço de Souza and Milena Luciani Sanchez; Brazilian corporate network B&B Suplementos e Cosméticos Ltda. and related parties; China-based Shenzhen Yunzheng Technology Co., Ltd; Vietnam-based Lý Văn Lâm; and eight unnamed former Meta Business Partners who received cease and desist letters.

What: Meta filed multiple civil lawsuits against scam advertisers for using celeb-bait and cloaking techniques to defraud users, while issuing cease and desist letters to entities offering illicit account restoration and enforcement evasion services. The company also announced AI-powered cloaking detection tools, technical enforcement actions including account suspension and domain blocking, and a review of its Business Partner vetting procedures.

When: The lawsuits and cease and desist letters were announced on February 26, 2026. The enforcement actions against defendants' accounts, payment methods, and domains were taken in parallel with the legal filings. Prior law enforcement cooperation referenced in the announcement, including the UK and Nigeria scam center takedown resulting in seven arrests, predated the February 26 announcement by an unspecified period.

Where: The defendants operated from Brazil, China, and Vietnam. Scam campaigns targeted users in multiple countries including the United States, Japan, and others. The lawsuits were filed in jurisdictions not specified in the public announcement. Meta's platform enforcement spans Facebook and Instagram globally.

Why: According to Meta, scam advertising undermines user trust and violates platform policies. The legal actions are intended to deter future fraud and hold bad actors accountable. The timing also reflects external pressure: internal documents published in November 2025 drew attention to the gap between Meta's public enforcement stance and its internal approach to managing scam advertising revenue, making visible enforcement actions both operationally significant and reputationally important for the company.
