Social media platforms failed to detect or remove the majority of commercially purchased fake engagement in a 2025 NATO experiment, revealing systematic vulnerabilities in defenses against coordinated manipulation despite years of promises to address bot activity.
The NATO Strategic Communications Centre of Excellence purchased fake engagement across Facebook, Instagram, YouTube, TikTok, VKontakte, X, and Bluesky between September and November 2025. In total, more than 30,000 inauthentic accounts delivered over 100,000 units of fake engagement for approximately 252 euros, roughly a quarter of a euro cent per engagement unit. Four weeks after purchase, most platforms had left the majority of fake accounts and engagement active.
Platform rankings: Fake account removal
VKontakte achieved the highest removal rate at 96% of identified fake accounts, though engagement from deleted accounts remained visible. X removed 82% of fake accounts. YouTube and Bluesky each removed 55%. Facebook removed 39%. Instagram removed 22%. TikTok demonstrated the weakest performance, removing only 4% of identified fake accounts.
Platform rankings: Fake engagement removal
X showed the strongest engagement removal performance, eliminating 57% of purchased fake activity after four weeks. YouTube removed 44%. VKontakte removed 30% while Facebook removed 21%. TikTok removed 17%. Instagram removed 16%. Bluesky removed 0% of fake engagement, leaving all purchased activity active.
Platform rankings: Account creation barriers
VKontakte implemented the strongest account creation barriers through mandatory mobile application registration and government-curated identity verification. Facebook blocked newly created inauthentic accounts entirely. YouTube requested additional phone verification. TikTok temporarily restricted mobile-based account registration. Instagram and X allowed account creation but required CAPTCHA solving. Bluesky implemented no account creation barriers.
Platform rankings: Advertising manipulation resistance
TikTok demonstrated the strongest resistance to fake comment delivery on sponsored content, showing 0% delivery after 72 hours. Facebook showed strong resistance with less than 1% delivery. YouTube showed partial delivery at approximately 21%. X showed approximately 25% delivery. Instagram showed the lowest resistance, delivering 340% of purchased fake comments on sponsored content within 72 hours - more than triple the contracted quantity.
Platform rankings: Manipulation cost
For a standardized basket of 100 likes, 100 comments, 1,000 views, and 100 followers, Bluesky proved cheapest at approximately 1.41 euros. Instagram cost approximately 1.73 euros. TikTok cost approximately 2.36 euros. Facebook cost approximately 2.73 euros. VKontakte cost approximately 3.52 euros. YouTube cost approximately 12.05 euros. X proved most expensive at approximately 12.08 euros. However, views on X proved exceptionally cheap, at 156,083 views for 10 euros.
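The basket totals follow directly from per-unit pricing. A minimal sketch of the computation in Python, using invented per-unit prices for illustration (the report cites only basket totals, not the underlying unit rates assumed here):

```python
# Standardized engagement basket used in the experiment.
BASKET = {"likes": 100, "comments": 100, "views": 1_000, "followers": 100}

# Hypothetical per-unit prices in euros for one platform (assumed values,
# not taken from the report).
unit_prices = {
    "likes": 0.004,
    "comments": 0.008,
    "views": 0.0002,
    "followers": 0.0003,
}

def basket_cost(prices: dict[str, float]) -> float:
    """Total cost of the standardized engagement basket."""
    return sum(qty * prices[item] for item, qty in BASKET.items())

print(f"{basket_cost(unit_prices):.2f} EUR")  # 0.40 + 0.80 + 0.20 + 0.03 = 1.43 EUR
```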
Platform rankings: User report responsiveness
Facebook showed the highest responsiveness to user reports, removing 25% of reported fake accounts five weeks after reporting. Bluesky removed 23% of reported accounts. YouTube removed approximately 8%. TikTok removed approximately 7%. Instagram removed approximately 5%. X removed only 2.5% of reported accounts. VKontakte removed only 2%.
Platform rankings: Transparency
TikTok was the only platform to engage directly with experiment findings and publish detailed enforcement data during the reporting period. Meta reported removal actions for Facebook but provided no equivalent reporting for Instagram. YouTube and Bluesky published only partial annual figures. X published no transparency or enforcement updates during the experimental period. VKontakte provided no public transparency reporting.
The experiment examined platforms' ability to detect commercially available manipulation services operating openly through cryptocurrency payment systems. These services offer standardized packages of likes, comments, views, and followers at consistent pricing across providers.
Researchers also successfully purchased ready-to-use advertising accounts for Meta platforms, TikTok, and YouTube. These accounts, priced significantly higher than standard inauthentic accounts, enabled manipulation through platform advertising systems. YouTube advertising-ready accounts cost 12.93 euros compared to 0.067 euros for standard accounts. TikTok advertising accounts cost 3.59 euros versus 0.072 euros. Facebook advertising accounts cost 1.44 euros versus 0.086 euros. Instagram advertising accounts cost 1.44 euros versus 0.008 euros.
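The premium commanded by advertising-ready accounts is easier to read as markup ratios. A short calculation from the figures above:

```python
# Markup of advertising-ready accounts over standard inauthentic accounts,
# computed from the reported prices (euros).
prices = {
    "YouTube":   (12.93, 0.067),
    "TikTok":    (3.59, 0.072),
    "Facebook":  (1.44, 0.086),
    "Instagram": (1.44, 0.008),
}
for platform, (ad_ready, standard) in prices.items():
    print(f"{platform}: {ad_ready / standard:.0f}x markup")
# YouTube ~193x, TikTok ~50x, Facebook ~17x, Instagram ~180x
```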
Analysis by the Financial Intelligence Unit of Latvia revealed that manipulation providers predominantly use cryptocurrency payment systems routing through custodial wallets and high-risk exchanges. Cryptomus and Heleket served as primary intermediaries, commingling customer funds and breaking transaction attribution chains. Only four of ten analyzed transactions could be traced end-to-end using blockchain analysis tools.
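Why commingling breaks attribution can be shown with a toy model: once several deposits pool in a single custodial wallet, any downstream payout has multiple equally plausible upstream sources. A minimal sketch with invented addresses and amounts:

```python
# Toy illustration of attribution loss at a custodial hop. All addresses and
# amounts are invented for the example.
from collections import defaultdict

transfers = [
    ("customer_A", "custodial_wallet", 50),
    ("customer_B", "custodial_wallet", 80),
    ("customer_C", "custodial_wallet", 70),
    ("custodial_wallet", "provider_payout", 100),
]

inflows = defaultdict(list)
for sender, receiver, amount in transfers:
    inflows[receiver].append((sender, amount))

# The 100-unit payout could be drawn from any mix of the three deposits,
# so tracing past the custodial wallet is ambiguous by construction.
sources = inflows["custodial_wallet"]
print(f"Payout of 100 has {len(sources)} candidate upstream sources: {sources}")
```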
Transaction volume analysis identified substantial revenue for manipulation service providers. One Russia-based provider received approximately 265,261 dollars between September 2023 and October 2025. A UK-based provider processed approximately 123,714 dollars during the same period. Average monthly transaction volumes ranged from 781 to 10,202 dollars across analyzed providers.
Latvia's Financial Intelligence Unit identified potential sanctions compliance concerns for suspected Russia-based operators using major exchange custody. Council Regulation (EU) No. 833/2014, Article 5b(2) prohibits providing crypto-asset services to Russian nationals, persons residing in Russia, or entities established in Russia. Determining sanctions violations requires confirming the specific exchange entity providing services and the legal status of the wallet holders.
Inauthentic accounts identified in the experiment amplified diverse political content beyond the non-political scenarios tested. Spam bots on X reposted content critical of the European Digital Services Act and the Biden administration. Bluesky bots followed accounts sharing content critical of Donald Trump. Facebook bots followed Indian politicians and pages critical of the Argentine government. YouTube bots subscribed to Indian political channels. VKontakte bots reposted content invoking historical revisionism about Alaska's sale to the United States.
Platform rankings: Military content amplification by bots
Facebook showed the highest volume of bot activity amplifying pro-China military content, with hundreds of identified bot accounts following China state-controlled media pages posting missile launches, naval operations, and military vessel commissioning. X showed substantial bot amplification of posts portraying the Chinese military as superior to the United States. YouTube showed significant bot activity following channels featuring Chinese military parades and missile demonstrations including Dongfeng ballistic missile systems. TikTok showed moderate bot activity reposting content glorifying Chinese and Russian military forces. VKontakte showed limited military content amplification. Bluesky showed minimal military content amplification. Instagram was the only platform where no bots amplified military content.
Bot-promoted content shifted markedly from political matters toward military themes after the 2024 election year, a pattern suggesting growing demand for amplification of military narratives.
Cyabra analysis commissioned for the experiment identified sophisticated inauthentic accounts employing AI-generated content and context-aware engagement. These accounts integrated into authentic conversations rather than operating in closed networks. Analysis of discourse surrounding the September 2025 Russian drone incursion into Polish airspace found 22% of analyzed profiles were inauthentic - far above typical baselines. These accounts used AI-generated imagery, credible local naming conventions, and professional media personas.
The experiment tested AI-enabled content orchestration across all platforms. Automated workflows generated text through ChatGPT, created images and videos through Freepik's API, and published content without human intervention. For 10 euros, services generated between 40 and 2,500 pieces of AI content depending on type and quality settings.
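Such orchestration is straightforward to assemble. A minimal sketch, assuming the OpenAI chat completions API for text generation and a hypothetical Freepik image endpoint (the Freepik endpoint, auth header, response structure, and the publishing step are all assumptions, not documented API usage):

```python
import requests
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_caption(topic: str) -> str:
    """Generate post text via the OpenAI chat completions API."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Write a short social media post about {topic}."}],
    )
    return response.choices[0].message.content

def generate_image(prompt: str, api_key: str) -> dict:
    """Request an AI-generated image. Endpoint and payload are assumptions;
    consult Freepik's API documentation for the real interface."""
    response = requests.post(
        "https://api.freepik.com/v1/ai/text-to-image",  # assumed endpoint
        headers={"x-freepik-api-key": api_key},         # assumed auth header
        json={"prompt": prompt},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()  # response structure is also an assumption

def publish(caption: str, image_payload: dict) -> None:
    """Placeholder: the experiment's publishing mechanism is not detailed
    in this summary."""
    print(f"Would publish image payload with caption: {caption[:60]}...")

if __name__ == "__main__":
    caption = generate_caption("autumn travel")
    image = generate_image(caption, api_key="YOUR_FREEPIK_KEY")
    publish(caption, image)
```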
Platform rankings: AI-generated content manipulation resistance
Facebook, YouTube, and VKontakte showed relatively higher resistance to AI-generated content manipulation, with manually created content receiving faster fake engagement delivery than AI-generated content. X and Instagram showed lower resistance, with AI-generated content receiving fake engagement faster than manually created content. TikTok and Bluesky showed mixed results with no clear pattern distinguishing AI-generated from manually created content performance.
The Digital Services Act entered into force in November 2022 and became fully operational in February 2024, establishing comprehensive content moderation requirements for major platforms. Meta assembled over 1,000 professionals in 2023 to develop DSA compliance solutions. The European Commission found TikTok and Meta in breach of transparency obligations in October 2025.
Ad fraud investigations conducted in early 2025 found major verification systems routinely failed to detect declared bots operating from data centers. DoubleVerify, Integral Ad Science, and Human Security faced scrutiny after research showed 40% of web traffic consisted of fake users or computerized bots. Bot fraud increased 101% year-over-year in 2024, with 16% stemming from bots linked to AI tools.
Timeline
- November 2022 - Digital Services Act enters into force
- September 2023 - Russia-based manipulation provider begins receiving cryptocurrency payments analyzed in experiment
- February 2024 - Digital Services Act becomes fully operational
- November 2024 - Meta releases comprehensive DSA transparency report
- March 2025 - Investigation reveals bot detection failures across advertising ecosystem
- July 2025 - DoubleVerify reports 101% surge in bot fraud
- September-November 2025 - NATO Strategic Communications Centre conducts manipulation experiment across seven platforms
- October 2025 - European Commission finds TikTok and Meta in breach of DSA transparency obligations
- January 2026 - NATO Strategic Communications Centre publishes experiment findings
Summary
Who: The NATO Strategic Communications Centre of Excellence conducted the experiment with support from Trementum Research, the Financial Intelligence Unit of Latvia, and Cyabra.
What: Researchers purchased fake engagement from ten commercial manipulation providers across seven social media platforms (Facebook, Instagram, YouTube, TikTok, VKontakte, X, and Bluesky), delivering over 100,000 units of inauthentic engagement through more than 30,000 fake accounts for 252 euros. Platform performance varied substantially: VKontakte removed 96% of fake accounts but left 70% of engagement active; X removed 82% of accounts and 57% of engagement; YouTube removed 55% of accounts and 44% of engagement; Facebook removed 39% of accounts and 21% of engagement; Instagram removed 22% of accounts and 16% of engagement; TikTok removed 4% of accounts and 17% of engagement; Bluesky removed 55% of accounts but 0% of engagement.
When: The experiment occurred between September and November 2025, with findings published in January 2026.
Where: The experiment tested major social media platforms operating globally, analyzing manipulation services accessible through cryptocurrency payment systems routing through Cryptomus, Heleket, Binance, and Bybit.
Why: The experiment assessed platform resilience against commercially available manipulation services to evaluate enforcement capabilities under the Digital Services Act and identify systemic vulnerabilities enabling coordinated inauthentic behavior at scale.