Microsoft Advertising removes over one billion ads in 2024 trust and safety drive

Microsoft's comprehensive enforcement efforts target deepfake scams and financial fraud while strengthening advertiser protections across its platform.

Microsoft exec reviews AI-powered ad fraud detection systems targeting deepfake scams and financial threats.

Microsoft Advertising revealed comprehensive enforcement actions in its annual trust and safety review published on June 26, 2025. The platform removed or restricted over one billion advertisements that violated advertising policies during 2024, marking a substantial escalation in content moderation efforts amid rising threats from generative artificial intelligence and sophisticated financial scams.

According to the report by Neha Garg, Principal Program Manager at Microsoft Advertising, the most common infractions involved misleading content, brand infringement, and gambling violations. The enforcement actions extended beyond individual advertisements, with more than 475,000 accounts suspended for abusing the ad network and endangering consumers.

Summary

Who: Microsoft Advertising platform, led by Principal Program Manager Neha Garg, affecting advertisers, publishers, and consumers across the Microsoft advertising ecosystem.

What: Comprehensive trust and safety enforcement removing over one billion policy-violating advertisements, suspending 475,000 accounts, and implementing AI-powered detection systems targeting deepfake scams and financial fraud.

When: Throughout 2024, with policy revisions in October 2024 and annual review published June 26, 2025.

Where: Across Microsoft Advertising platform globally, impacting Microsoft Audience Network and publisher pages worldwide.

Why: Rising threats from generative AI-powered deepfake content, sophisticated financial scams, and the need for advanced countermeasures beyond conventional detection methods to maintain platform integrity and user trust.

Microsoft observed a notable rise in financial scam attempts throughout 2024, with increasing risks that bad actors could leverage generative AI to create deepfake content. These scams often involve sophisticated imagery and video manipulation techniques, such as fake celebrity endorsements and meticulously constructed phishing sites. The sophistication and scale of these tactics posed a significant challenge that surpassed the capabilities of conventional detection methods, creating an urgent need for advanced, AI-powered countermeasures.

In October 2024, Microsoft proactively revised its policies to specifically address deepfake technology, ensuring that fraudulent endorsements and deceptive claims are quickly identified and removed. The platform improved its moderation capabilities by replacing opaque, black-box risk signals with transparent, LLM-generated explanations. This gives reviewers clear, actionable insights, enabling them to tackle emerging threats with precision.
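The report does not describe how these explanations are generated, but the general pattern can be sketched: prompt a large language model to return a structured verdict together with a one-sentence rationale a reviewer can act on. The code below is a minimal, hypothetical illustration; the prompt, JSON fields, and policy labels are assumptions rather than Microsoft's actual moderation schema, and the LLM call is stubbed so the example runs standalone.

```python
import json
from typing import Callable

# Hypothetical sketch: ask an LLM for a structured, human-readable verdict
# instead of an opaque risk score. The prompt, fields, and policy names are
# illustrative only, not Microsoft's actual moderation pipeline.

PROMPT_TEMPLATE = """You are an ad-policy reviewer. Assess the ad below.
Return JSON with keys: "verdict" ("approve" | "reject"), "policy"
(e.g. "misleading_content", "deepfake_endorsement"), and "explanation"
(one sentence a human reviewer can act on).

Ad title: {title}
Ad text: {body}
Landing page domain: {domain}
"""

def explain_moderation(ad: dict, llm: Callable[[str], str]) -> dict:
    """Return a transparent verdict plus explanation for one ad creative."""
    prompt = PROMPT_TEMPLATE.format(
        title=ad["title"], body=ad["body"], domain=ad["domain"]
    )
    raw = llm(prompt)        # any chat-completion backend could sit here
    return json.loads(raw)   # {"verdict": ..., "policy": ..., "explanation": ...}

if __name__ == "__main__":
    # Stubbed LLM so the sketch runs without network access.
    fake_llm = lambda _prompt: json.dumps({
        "verdict": "reject",
        "policy": "deepfake_endorsement",
        "explanation": "Ad claims a celebrity endorsement with no verifiable source.",
    })
    ad = {"title": "Celebrity-backed crypto windfall",
          "body": "As seen on TV - guaranteed 10x returns",
          "domain": "example-invest.test"}
    print(explain_moderation(ad, fake_llm))
```

The point of the structured output is that the "explanation" field travels with the decision, so a human reviewer sees why an ad was flagged rather than just a score.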

The company's investment in advanced agentic workflows strengthens its ability to detect phishing attacks and new scam patterns. LLMs quickly decipher context and nuance to pinpoint novel tactics ranging from get-rich-quick schemes disguised as celebrity endorsements to zero-day phishing attacks. A key focus area has been combatting advertiser fraud using early signals, such as suspicious business credentials, impersonation patterns, and anomalous payment activity.
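Taken together, those early signals lend themselves to a simple scoring pass before an account ever serves an ad. The sketch below is purely illustrative: the signal names mirror those mentioned in the report, but the weights, threshold, and toy brand list are invented for the example.

```python
# Illustrative early-signal check for new advertiser accounts. The signals
# mirror those named in the report (business credentials, impersonation,
# payment anomalies); the weights and threshold are invented for this sketch.

KNOWN_BRANDS = {"microsoft", "outlook", "xbox"}  # toy impersonation watchlist

def risk_score(account: dict) -> float:
    score = 0.0
    if not account.get("verified_business_registration"):
        score += 0.4                                   # weak business credentials
    name = account.get("display_name", "").lower()
    if any(brand in name for brand in KNOWN_BRANDS) and not account.get("is_brand_owner"):
        score += 0.35                                  # impersonation pattern
    cards = account.get("distinct_payment_cards", 1)
    declines = account.get("payment_declines_7d", 0)
    if cards > 3 or declines > 2:
        score += 0.25                                  # anomalous payment activity
    return min(score, 1.0)

def review_queue(account: dict, threshold: float = 0.6) -> str:
    return "manual_review" if risk_score(account) >= threshold else "auto_approve"

print(review_queue({
    "display_name": "Microsoft Support Billing",
    "verified_business_registration": False,
    "distinct_payment_cards": 5,
    "payment_declines_7d": 4,
}))  # -> manual_review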

Microsoft's enforcement efforts targeted content beyond advertising violations. The platform proactively removed ads from publisher pages that hosted content violating guidelines, such as sexually explicit material or dangerous products, which impacted over 250,000 publisher pages last year. Brand suitability controls allow advertisers to customize ad placement, ensuring advertisements avoid sensitive topics including political or children's content.

The platform expanded feedback options for Native ads, making it easier for customers to report or hide advertisers they find irrelevant, repetitive, or misleading. These personalized, consumer-driven controls help foster a more respectful environment, protecting both brand integrity and trust.

Microsoft's review teams maintain a commitment to fairness by offering clear appeal processes for advertisers or publishers who believe enforcement actions were made in error. Throughout 2024, the platform rejected nearly 245,000 ads and took action against 5,000 accounts in response to 70,000 complaints. It also overturned more than 1.5 million ad rejections and 20,000 account suspensions in response to 72,000 appeals, demonstrating substantial review activity.

These enforcement statistics reflect broader industry challenges with AI-generated fraud schemes becoming increasingly sophisticated. Recent investigations have revealed networks comprising more than 200 fraudulent properties using AI-generated content to deceive advertisers.

Microsoft's comprehensive strategy blends forward-thinking policies, cutting-edge moderation tools, AI-first defenses, and dynamic algorithms. The approach acknowledges that the regulatory landscape and threat environment are unprecedented and constantly evolving. The platform continues to use insights from appeals to refine systems, ensuring they become more effective over time.

When consumers or other entities escalate concerns, Microsoft acts swiftly to remove violating content. The company remains anchored in founding principles, ensuring that trust and safety remain at the heart of platform operations. Microsoft thrives on change while maintaining agility to continuously evolve policies, strengthen enforcement systems, foster cross-industry collaboration, and empower consumers, publishers, and advertisers with greater control over their experiences.

The trust and safety review acknowledges that despite best efforts, some scams may still slip through detection systems. Microsoft encourages users to report suspicious or harmful advertisements using the Microsoft Advertising Ad Quality Escalation Form. The company values partnership and feedback, maintaining commitment to a safe, trusted Microsoft Advertising ecosystem while working to build a secure and resilient platform for the community.

Digital advertising platforms increasingly face sophisticated fraud targeting Connected TV and streaming environments. Bot fraud has become prevalent within CTV and streaming audio, with fraudsters leveraging generative AI to generate seemingly authentic user agents that mimic human interaction.
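Detecting that kind of traffic typically combines device, network, and behavioral signals. As a rough illustration only, the sketch below flags a device whose ad requests arrive on an implausibly regular cadence or rotate through many user-agent strings; the patterns and thresholds are assumptions, not any vendor's actual invalid-traffic logic.

```python
import re
from statistics import pstdev

# Toy invalid-traffic heuristic over a stream of ad requests from one device.
# Real IVT detection uses far richer device, network, and behavioral signals;
# the regex and thresholds here are illustrative only.

GENERIC_CTV_UA = re.compile(r"(Roku|tvOS|AndroidTV|SmartTV)", re.IGNORECASE)

def looks_automated(requests: list[dict]) -> bool:
    """Flag a device whose ad requests are suspiciously regular or inconsistent."""
    gaps = [b["ts"] - a["ts"] for a, b in zip(requests, requests[1:])]
    too_regular = len(gaps) >= 5 and pstdev(gaps) < 0.5   # near-perfect cadence
    uas = {r["user_agent"] for r in requests}
    rotating_uas = len(uas) > 3                            # one device, many UAs
    unknown_ua = any(not GENERIC_CTV_UA.search(ua) for ua in uas)
    return too_regular or (rotating_uas and unknown_ua)

sample = [{"ts": 30.0 * i, "user_agent": f"Roku/DVP-9.{i} (289.9E04111A)"}
          for i in range(8)]
print(looks_automated(sample))   # True: requests arrive exactly every 30 seconds
```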

Microsoft's enforcement metrics demonstrate the scale of content moderation required for major advertising platforms. The billion-advertisement threshold represents substantial intervention in platform content, while the appeals process statistics indicate significant ongoing dialogue between the platform and its users regarding content decisions.

For marketing professionals managing campaigns across multiple platforms, these trust and safety measures have practical implications for campaign management and brand protection. Microsoft's expanded brand safety partnerships with third-party verification services provide additional layers of protection for advertiser investments.

The emphasis on early fraud detection using business credentials and payment activity patterns suggests that Microsoft is implementing preemptive measures rather than solely reactive enforcement. This approach aims to prevent violations before they occur while minimizing friction for legitimate advertisers who trust the platform for business growth.

Microsoft's integration of AI-powered detection systems reflects industry-wide recognition that traditional rule-based approaches prove insufficient against sophisticated modern threats. The transparency improvements in moderation explanations address longstanding advertiser concerns about black-box enforcement decisions that previously lacked clear justification.

The platform's focus on brand suitability controls and consumer feedback mechanisms indicates recognition that trust-building requires empowering multiple stakeholders in the advertising ecosystem. Advertisers gain control over content adjacency while consumers receive tools to manage their advertising experience.

Looking ahead, Microsoft emphasizes that the regulatory landscape and threat environment remain unprecedented and constantly evolving. The company maintains agility to continuously evolve policies, strengthen enforcement systems, and foster cross-industry collaboration. The platform empowers consumers, publishers, and advertisers with greater control over their experiences while maintaining core safety principles.

The trust and safety review represents Microsoft's effort to provide transparency about platform governance while demonstrating substantial investment in content moderation infrastructure. These efforts align with broader industry initiatives to address advertising fraud and maintain ecosystem integrity.

Recent developments in measurement solutions for Microsoft Advertising Network provide advertisers with enhanced verification capabilities and deeper campaign insights. Third-party measurement partnerships offer increased transparency and improved campaign performance through invalid traffic elimination and brand-safe placements.

Microsoft's comprehensive approach to trust and safety extends across policy development, technology implementation, and community engagement. The platform continues working to build a secure and resilient advertising environment while acknowledging that perfect protection remains an ongoing challenge requiring continuous innovation and adaptation.

Timeline