Google to demote sites with high volumes of explicit deepfake removals
Google announces new search ranking policy to demote websites frequently hosting non-consensual explicit deepfakes, aiming to combat online abuse.
Google has announced updates to its search engine algorithms and content removal processes to combat the proliferation of non-consensual sexually explicit fake content, commonly known as "deepfakes." The announcement, made on July 31, 2024, marks a crucial step in addressing the growing concern surrounding artificially generated explicit imagery circulating online.
Emma Higham, Product Manager at Google, outlined the company's multi-faceted approach to tackling this issue. The initiative aims to protect individuals from the distressing impact of having their likeness used without consent in sexually explicit synthetic media. These measures come in response to the rapid advancement of generative imagery technology, which, while offering numerous benefits, has also led to an alarming increase in the creation and distribution of non-consensual explicit content.
The problem of deepfakes has been escalating in recent years, with advancements in artificial intelligence and machine learning making it increasingly difficult to distinguish between real and fabricated content.
Google's new approach encompasses two main areas of improvement: streamlining the content removal process and enhancing the search ranking system. These changes are the result of extensive collaboration with experts and feedback from victims of such content, highlighting the company's commitment to addressing this complex issue.
The first major update focuses on simplifying the process for individuals to request the removal of non-consensual fake explicit imagery from search results. While Google has had policies in place for years allowing users to request content removal, the new system aims to make this process more efficient and comprehensive. When a person successfully requests the removal of explicit non-consensual fake content featuring them, Google's systems will now automatically filter out similar explicit results for searches related to that individual.
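Google has not described how this filtering works internally, but the behaviour it announces can be sketched in a few lines of Python. In the illustrative sketch below, the `protected_names` set, the `is_explicit` flag, and the name-matching rule are all assumptions invented for clarity, not details of Google's system.

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    title: str
    is_explicit: bool  # hypothetical flag assigned by an upstream classifier

# Names with at least one approved removal under the fake-explicit-imagery
# policy (purely illustrative; the real suppression mechanism is not public).
protected_names = {"jane doe"}

def filter_results(query: str, results: list[SearchResult]) -> list[SearchResult]:
    """Drop explicit results when the query mentions a protected person."""
    q = query.lower()
    if any(name in q for name in protected_names):
        return [r for r in results if not r.is_explicit]
    return results

# Example: explicit hits are suppressed for queries about the protected person.
hits = [
    SearchResult("https://news.example/profile", "Profile of Jane Doe", False),
    SearchResult("https://bad.example/fake", "Explicit deepfake", True),
]
print(filter_results("jane doe photos", hits))  # only the non-explicit result remains
```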
Furthermore, the company has implemented a new feature that scans for and removes duplicates of any image that has been successfully removed from search results under their policies. This proactive approach aims to provide victims with greater peace of mind, addressing concerns about similar content resurfacing in the future.
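The announcement does not say how duplicates are identified, but near-duplicate image detection is commonly done with perceptual hashes. The sketch below uses a simple average hash built with Pillow; the 8x8 grid and the Hamming-distance threshold of 5 are arbitrary illustrative choices, not Google's parameters.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale, then hash each pixel against the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_duplicate(candidate: str, removed_hashes: list[int], threshold: int = 5) -> bool:
    """Flag a candidate image as a near-duplicate of already-removed imagery."""
    h = average_hash(candidate)
    return any(hamming(h, r) <= threshold for r in removed_hashes)
```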
The second major component of Google's strategy involves improvements to its ranking systems. Recognizing that the most effective protection against harmful content is to prioritize high-quality information in search results, Google has introduced updates to its ranking algorithms. These changes are designed to lower the visibility of explicit fake content for many searches, particularly those that might be at higher risk of returning such results.
For queries that specifically include people's names and might be seeking this type of content, Google aims to surface high-quality, non-explicit content, such as relevant news articles, when available. The company reports that these updates have already reduced exposure to explicit image results for these types of queries by over 70%. This shift allows users to access information about the societal impact of deepfakes rather than being exposed to actual non-consensual fake images.
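The ranking signals themselves are proprietary, but the effect described here can be illustrated with a toy re-ranking pass: for queries that contain a person's name, results flagged as suspected explicit fakes receive a score penalty while non-explicit news pages receive a boost. Every field name and weight below is invented for illustration and does not reflect Google's actual ranking.

```python
def rerank(query_has_name: bool, results: list[dict]) -> list[dict]:
    """Toy re-ranking: demote suspected explicit fakes and boost non-explicit
    news coverage when the query names a person. Weights are illustrative only."""
    def adjusted(r: dict) -> float:
        score = r["base_score"]
        if query_has_name:
            if r.get("suspected_explicit_fake"):
                score *= 0.1   # heavy demotion of suspected fake explicit content
            elif r.get("is_news"):
                score *= 1.5   # prefer high-quality, non-explicit coverage
        return score
    return sorted(results, key=adjusted, reverse=True)
```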
Google acknowledges the technical challenges involved in distinguishing between legitimate explicit content (such as an actor's consensual nude scenes) and non-consensual fake content. While this remains a complex issue, the company is continuously working to improve its ability to surface legitimate content and downrank explicit fake content.
Another significant change in Google's approach is the implementation of site-wide ranking adjustments based on removal requests. Sites that have had a high volume of pages removed from search results under Google's policies for fake explicit imagery will be demoted in search rankings. This approach, which has proven effective for other types of harmful content, is expected to significantly reduce the visibility of fake explicit content in search results.
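As a rough illustration of such a site-wide signal, one could track approved removals per host and apply a ranking multiplier once a threshold is crossed. The threshold and penalty values below are hypothetical; Google has not published how its demotion is calculated.

```python
from collections import Counter
from urllib.parse import urlparse

# Approved removal requests per site under the fake-explicit-imagery policy
# (counts and hostnames are illustrative).
removals = Counter({"bad-deepfakes.example": 120, "forum.example": 2})

REMOVAL_THRESHOLD = 50   # hypothetical cut-off for "high volume" of removals
SITE_PENALTY = 0.2       # hypothetical multiplier applied site-wide

def site_multiplier(url: str) -> float:
    """Return a ranking multiplier for the site hosting this URL."""
    host = urlparse(url).netloc
    return SITE_PENALTY if removals[host] >= REMOVAL_THRESHOLD else 1.0

print(site_multiplier("https://bad-deepfakes.example/page"))  # 0.2 -> demoted
print(site_multiplier("https://forum.example/thread"))        # 1.0 -> unaffected
```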
These updates represent a major step forward in Google's efforts to protect individuals from the harmful effects of non-consensual explicit deepfakes. However, the company acknowledges that there is still more work to be done to address this issue comprehensively. Google has committed to continuing the development of new solutions and investing in industry-wide partnerships and expert engagement to tackle this problem at a societal level.
The broader context of this announcement reflects the growing concern about the misuse of AI and deepfake technology. As these technologies become more sophisticated and accessible, there is an increasing need for tech companies, policymakers, and society at large to develop robust strategies to mitigate their potential negative impacts. Google's actions represent one approach to addressing this challenge, but a comprehensive solution will likely require coordinated efforts across multiple sectors.
While these measures are specifically focused on Google's search engine, the issue of non-consensual explicit deepfakes extends far beyond search results. Social media platforms, messaging apps, and other online spaces also face similar challenges in combating this type of content. As such, Google's approach may serve as a model for other tech companies grappling with similar issues.
In conclusion, Google's recent announcement marks a significant step in the ongoing battle against non-consensual explicit deepfakes. By improving both content removal processes and search ranking systems, the company aims to provide better protection for individuals and reduce the visibility of harmful content. However, as technology continues to evolve, so too must the strategies to combat its misuse. The fight against non-consensual explicit deepfakes remains an ongoing challenge that will require continued innovation, collaboration, and vigilance from tech companies, policymakers, and society as a whole.
Key facts
Google announced updates to combat non-consensual explicit deepfakes on July 31, 2024.
The company has improved its content removal process, making it easier for individuals to request removal of fake explicit content.
New systems automatically filter similar explicit results for searches related to individuals who have successfully requested content removal.
Google's ranking system updates have reduced exposure to explicit image results for certain queries by over 70%.
Sites with a high volume of removals for fake explicit imagery will be demoted in search rankings.
These measures were developed based on feedback from experts and victim-survivors.
Google acknowledges that more work is needed and commits to ongoing development of solutions and industry-wide partnerships.