AdSecure today added a new product to its platform that helps digital publishers and platforms eliminate ads with explicit or potentially offensive images. Content Classification has three distinct modules: Unsafe Content, Ad Labels, and Logo Detection.
“When we look at the challenge of tackling demand-side ad fraud and delivering safe, high-quality user experiences, it’s important for publishers and platforms to go beyond identifying and blocking digital threats like malware or malicious redirects. The visual imagery that site visitors will engage with should be safe for consumption as well,” said Bryan Taylor, AdSecure’s Manager for Sales and Customer Success.
Unsafe Content is powered by Google Cloud Vision and classifies potentially offensive content across five categories:
- Adult - Identifies content deemed as nudity or sexually explicit
- Racy - Identifies content deemed suggestive or containing mature visual elements
- Medical - Identifies content containing medically graphic images, also referred to as gore
- Violent - Identifies content considered violent and potentially disturbing for end users
- Spoof - Identifies content that may be parody, misleading, or “fake news”
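These five categories correspond to the likelihood scores that Google Cloud Vision’s SafeSearch feature returns for an image. As a rough sketch of how a publisher might act on such scores, the snippet below models the Vision likelihood scale and flags categories that meet a blocking threshold; the `flag_unsafe` function and the threshold policy are illustrative assumptions, not AdSecure’s actual implementation.

```python
# Likelihood scale mirroring Google Cloud Vision's SafeSearch enum
# (VERY_UNLIKELY = 1 ... VERY_LIKELY = 5).
LIKELIHOOD = {"UNKNOWN": 0, "VERY_UNLIKELY": 1, "UNLIKELY": 2,
              "POSSIBLE": 3, "LIKELY": 4, "VERY_LIKELY": 5}

def flag_unsafe(scores: dict, threshold: str = "LIKELY") -> list:
    """Return the categories whose likelihood meets or exceeds the threshold.

    `scores` maps category names (adult, racy, medical, violence, spoof)
    to likelihood strings, as a SafeSearch-style response would.
    """
    bar = LIKELIHOOD[threshold]
    return [cat for cat, level in scores.items() if LIKELIHOOD[level] >= bar]

# Hypothetical scores for one ad creative.
ad = {"adult": "VERY_UNLIKELY", "racy": "LIKELY", "medical": "UNLIKELY",
      "violence": "VERY_UNLIKELY", "spoof": "POSSIBLE"}
print(flag_unsafe(ad))  # ['racy']
```

A stricter publisher could pass `threshold="POSSIBLE"` to block borderline creatives as well.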
Ad Labels provides keyword classification for digital ad content, showing how each visual element in an ad is categorized, from landscape elements such as trees or office buildings to specific products that may be unwanted on a website, such as alcohol or tobacco.
Logo Detection identifies ad imagery that contains brand and institution logos. It can help publishers and platforms ensure that the ads appearing on their pages include the intended branding, are not using recognizable brands to deliver misleading or malicious scams, and are not mimicking the branding of other companies, all issues commonly seen within the digital advertising ecosystem.
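One way to use such logo results is to check detected logos against the brand the advertiser declared for the creative. The following is a minimal sketch under that assumption; the function name and brand strings are hypothetical.

```python
def branding_mismatch(detected_logos: list, expected_brand: str) -> list:
    """Return detected logos that don't match the advertiser's declared brand.

    A non-empty result suggests the creative may be impersonating or
    mixing in another company's branding.
    """
    return [logo for logo in detected_logos if logo != expected_brand]

# Hypothetical creative declared as "ExampleBrand" but showing another logo.
print(branding_mismatch(["ExampleBrand", "OtherBrand"], "ExampleBrand"))
# ['OtherBrand']
```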