AI-Driven Ad Fraud on the Rise: HUMAN Report Reveals Alarming Trends

HUMAN Security's 2024 report uncovers increasing sophistication in ad fraud tactics, highlighting AI's role in cyberthreats.

HUMAN Security yesterday released its annual cybersecurity report, The Quadrillion Report: 2024 Cyberthreat Benchmarks, shedding light on the alarming rise of AI-driven advertising fraud. The report analyzes more than one quadrillion online interactions observed by HUMAN's Defense Platform throughout 2023, revealing how cybercriminals are leveraging artificial intelligence to perpetrate increasingly sophisticated ad fraud schemes.

According to the report, the advertising industry faces a growing threat from AI-powered bots and fraud tactics. These advanced techniques are designed to mimic human behavior more convincingly than ever before, making it increasingly challenging for traditional fraud detection systems to identify and block malicious activity.

One of the most concerning trends highlighted in the report is the use of AI to generate fake user profiles and engagement. These artificially created "users" can interact with ads in ways that closely resemble genuine human behavior, potentially skewing engagement metrics and defrauding advertisers. HUMAN's researchers noted a significant increase in the sophistication of these fake profiles, with AI models being used to generate realistic browsing patterns, click behaviors, and even simulated purchase histories.

The report also delves into the growing problem of ad injection, where malicious actors use AI algorithms to insert unauthorized advertisements into legitimate websites or apps. This technique not only diverts revenue from legitimate publishers but also potentially exposes users to malware or phishing attempts. HUMAN's Defense Platform detected and blocked millions of such attempts throughout 2023, highlighting the scale of this issue.
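The report does not describe HUMAN's detection methods in detail, but one common publisher-side safeguard against injection is to compare the resources a page actually loads against an allowlist of expected ad and script domains. The sketch below illustrates that idea in Python; the allowlist domains and sample page are hypothetical, and this is not HUMAN's technique.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical allowlist of domains a publisher expects scripts and ads to load from.
ALLOWED_DOMAINS = {"cdn.example-publisher.com", "ads.approved-network.com"}

class ResourceCollector(HTMLParser):
    """Collects src URLs of <script> and <iframe> tags from an HTML document."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "iframe"):
            for name, value in attrs:
                if name == "src" and value:
                    self.sources.append(value)

def find_unexpected_resources(html: str) -> list[str]:
    """Return resource URLs whose domain is not on the allowlist,
    a rough signal of possible script or ad injection."""
    collector = ResourceCollector()
    collector.feed(html)
    unexpected = []
    for src in collector.sources:
        domain = urlparse(src).netloc
        if domain and domain not in ALLOWED_DOMAINS:
            unexpected.append(src)
    return unexpected

if __name__ == "__main__":
    page = """
    <html><body>
      <script src="https://cdn.example-publisher.com/app.js"></script>
      <iframe src="https://unknown-ad-server.net/banner"></iframe>
    </body></html>
    """
    print(find_unexpected_resources(page))
    # ['https://unknown-ad-server.net/banner']
```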

Another area of concern is the rise of AI-driven click fraud. Cybercriminals are now employing machine learning algorithms to analyze and mimic genuine click patterns, making fraudulent clicks increasingly difficult to distinguish from legitimate ones. This sophistication threatens the integrity of pay-per-click advertising models, potentially leading to significant financial losses for advertisers.
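HUMAN does not disclose its detection logic, but the underlying cat-and-mouse dynamic can be illustrated: genuinely human click streams tend to show irregular timing, while scripted clicks often betray themselves through overly regular gaps, which is precisely the signature ML-tuned bots try to erase. The following is a simplified, purely illustrative heuristic, with an arbitrary threshold that is in no way production-calibrated.

```python
import statistics

def interclick_intervals(timestamps: list[float]) -> list[float]:
    """Time gaps (in seconds) between consecutive clicks."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def looks_automated(timestamps: list[float], cv_threshold: float = 0.15) -> bool:
    """Flag a click stream whose timing is suspiciously regular.

    Uses the coefficient of variation (stdev / mean) of inter-click gaps:
    very low variation suggests scripted behavior. The threshold is an
    illustrative value, not a tuned one.
    """
    gaps = interclick_intervals(timestamps)
    if len(gaps) < 5:
        return False  # too little data to judge
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return True
    cv = statistics.stdev(gaps) / mean_gap
    return cv < cv_threshold

if __name__ == "__main__":
    human_clicks = [0.0, 1.3, 4.1, 4.9, 9.2, 12.8, 13.1]
    bot_clicks = [0.0, 2.0, 4.0, 6.1, 8.0, 10.0, 12.1]
    print(looks_automated(human_clicks))  # False
    print(looks_automated(bot_clicks))    # True
```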

The report also touches on the emerging threat of deepfake technology in advertising fraud. While still in its early stages, HUMAN's researchers observed instances where AI-generated deepfakes were used to create fake celebrity endorsements or manipulate video ads. This trend raises concerns about the potential for widespread misinformation and manipulation in the advertising space.

Interestingly, the report reveals that 80% of companies utilizing HUMAN's platform have opted to block known large language model (LLM) user-agents outright. This statistic underscores a growing wariness among businesses towards AI-powered web crawlers and potential AI-assisted cyber threats. The decision to block LLM access represents a significant shift in how organizations are approaching the integration of AI technologies with their online presence.
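The report does not specify how these blocks are implemented, but a typical approach is to reject requests whose User-Agent header matches a list of publicly documented AI crawlers (GPTBot, ClaudeBot, CCBot, and similar). Below is a minimal sketch assuming the check happens in application code rather than at the CDN or web-server layer; the exact blocklist any given site uses will vary.

```python
# User-agent substrings of well-known AI/LLM crawlers. Which of these a
# site chooses to block is a policy decision; this list is illustrative.
BLOCKED_UA_SUBSTRINGS = (
    "GPTBot",
    "ChatGPT-User",
    "ClaudeBot",
    "CCBot",
    "PerplexityBot",
    "Bytespider",
)

def should_block(user_agent: str) -> bool:
    """Return True if the request's User-Agent matches a known LLM crawler."""
    ua = (user_agent or "").lower()
    return any(token.lower() in ua for token in BLOCKED_UA_SUBSTRINGS)

if __name__ == "__main__":
    print(should_block("Mozilla/5.0 AppleWebKit/537.36; GPTBot/1.0"))    # True
    print(should_block("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))     # False
```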

HUMAN's comprehensive approach to cybersecurity involves the use of over 2,500 individual signals and more than 300 algorithms to determine the legitimacy of online interactions. This multi-faceted strategy allows for the protection of websites, mobile apps, and APIs from a wide array of automated attacks, including those targeting the advertising ecosystem.
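HUMAN's actual signals and algorithms are proprietary, but the general pattern of combining many weak indicators into a single legitimacy decision can be sketched as a weighted score with a threshold. Everything below, including the signal names, weights, and cutoff, is invented for illustration and is not HUMAN's real model.

```python
# Illustrative only: a toy weighted-signal scorer. Real platforms combine
# thousands of signals and hundreds of algorithms, not five hand-set weights.
SIGNAL_WEIGHTS = {
    "headless_browser_detected": 0.40,
    "datacenter_ip": 0.25,
    "implausible_click_timing": 0.20,
    "mismatched_timezone_locale": 0.10,
    "no_prior_session_history": 0.05,
}

DECISION_THRESHOLD = 0.5  # arbitrary cutoff for this sketch

def legitimacy_verdict(signals: dict[str, bool]) -> str:
    """Combine boolean risk signals into a block/allow decision."""
    risk = sum(weight for name, weight in SIGNAL_WEIGHTS.items() if signals.get(name))
    return "block" if risk >= DECISION_THRESHOLD else "allow"

if __name__ == "__main__":
    suspicious = {"headless_browser_detected": True, "datacenter_ip": True}
    ordinary = {"no_prior_session_history": True}
    print(legitimacy_verdict(suspicious))  # block (risk 0.65)
    print(legitimacy_verdict(ordinary))    # allow (risk 0.05)
```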

The report also highlights the evolving nature of ad fraud attacks. While the rate of certain types of attacks may have decreased, the total number of attempts has risen significantly. This trend suggests that while some fraud techniques may be becoming less effective, cybercriminals are intensifying their efforts and diversifying their tactics.

One particularly alarming statistic from the report reveals that in extreme cases, as much as 98.55% of traffic to certain websites consisted of attempted fraudulent activity. This staggering figure underscores the relentless nature of cyber threats and the critical importance of robust, AI-powered security measures in today's digital landscape.

The rise of connected TV (CTV) and in-game advertising has introduced new vulnerabilities that cybercriminals are quick to exploit. The report notes that these emerging platforms face unique challenges as security and transparency initiatives struggle to keep pace with the rapid evolution of advertising technologies.

Looking ahead, HUMAN's researchers anticipate that 2024 may see an even greater proliferation of AI-based attacks. The report suggests that cybersecurity professionals should be particularly vigilant about AI-assisted credential cracking operations and the use of scraped content to craft more convincing fraud attempts.

The advertising industry's reliance on complex, interconnected systems of programmatic buying and selling makes it particularly vulnerable to sophisticated AI-driven attacks. As fraudsters continue to refine their techniques, the need for equally advanced defensive measures becomes increasingly apparent.

In response to these emerging threats, HUMAN emphasizes the importance of a multi-layered approach to ad fraud prevention. This strategy combines advanced machine learning algorithms, real-time threat intelligence, and human expertise to stay ahead of evolving fraud tactics.

The report concludes by stressing the critical need for collaboration within the advertising ecosystem. As fraud techniques become more sophisticated, no single entity can effectively combat the threat alone. Instead, a coordinated effort involving advertisers, publishers, ad networks, and cybersecurity firms is essential to maintain the integrity of the digital advertising landscape.

In summary, HUMAN's The Quadrillion Report: 2024 Cyberthreat Benchmarks paints a picture of an advertising industry under siege from increasingly sophisticated AI-driven fraud attempts. The report serves as a wake-up call for all stakeholders in the digital advertising ecosystem, highlighting the urgent need for advanced, AI-powered security solutions and industry-wide cooperation in the fight against ad fraud.

As the digital advertising landscape continues to evolve, the importance of robust, adaptive security measures cannot be overstated. HUMAN's report serves as a crucial resource for understanding the current state of ad fraud and preparing for the challenges that lie ahead in the continual battle against cybercriminals in the advertising space.

Key findings from the report:

AI is being used to generate convincing fake user profiles and engagement

Ad injection techniques are evolving, with millions of attempts blocked in 2023

Click fraud is becoming more sophisticated through the use of machine learning algorithms

Deepfake technology is emerging as a new threat in advertising manipulation

80% of companies using HUMAN's platform block known large language model user-agents

In extreme cases, up to 98.55% of website traffic was identified as potentially fraudulent

Connected TV and in-game advertising face unique fraud challenges

The report anticipates an increase in AI-based attacks in 2024