American Jewish Committee (AJC) and CyberWell today formalized a partnership to monitor, analyze, and combat online antisemitism, announcing an alliance that pools AJC's policy advocacy experience with CyberWell's AI-powered platform surveillance technology. The announcement arrives as AJC's State of Antisemitism in America 2025 Report records that 73% of American Jews say they have experienced antisemitism online - the highest figure in the survey's history and the first time the share has exceeded seven in ten.
The two organizations have been collaborating informally for some time. But today's formalization places the relationship on a structured footing. According to the press release, the partnership will expand direct engagement with major technology and social media companies, continuing work already begun with Meta and X, and will produce targeted, platform-specific reports backed by both survey data and real-time content analysis. Additional reports beyond the two already delivered are planned in the coming months.
What each organization brings
AJC, founded more than a century ago and self-described as "the global advocacy organization for the Jewish people," operates through 40 offices and engages leaders in more than 110 countries. According to organizational materials, its work spans government lobbying, educational advocacy, and direct engagement with tech executives on content policy. It conducts annual surveys of both American Jews and the broader U.S. adult population, producing the State of Antisemitism in America Report each year.
CyberWell is a smaller, technology-focused nonprofit. According to the press release, its AI systems monitor social media continuously for content that violates the International Holocaust Remembrance Alliance (IHRA) working definition of antisemitism - covering posts that promote Holocaust denial, glorify violence against Jews, and spread conspiracy theories attributing global events to Jewish identity. CyberWell analysts review flagged content and report it to platform moderators while indexing verified posts in what the organization describes as the first-ever open database of antisemitic social media content, accessible at app.cyberwell.org.
A snapshot of that database from May 2026 shows 13,534 examples of online antisemitic content logged across platforms, with entries dated as recently as May 11, 2026. The database records each post by upload date, platform, language, IHRA definition number, and policy violation status. Multiple entries flagged in early May 2026 remain listed as "needs to be reported" rather than removed, illustrating the gap between detection and platform action that the partnership aims to narrow.
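The record structure described above can be sketched as a small data model. This is an illustrative reconstruction from the fields named in the article (upload date, platform, language, IHRA definition number, policy violation status); the class and field names are invented here and are not CyberWell's actual schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ActionStatus(Enum):
    """Lifecycle of a flagged post, per the statuses the database surfaces."""
    NEEDS_TO_BE_REPORTED = "needs to be reported"
    REPORTED = "reported"
    REMOVED = "removed"

@dataclass
class FlaggedPost:
    upload_date: date
    platform: str
    language: str
    ihra_definition: int      # which IHRA working-definition clause the post matches
    policy_violation: bool    # analyst-verified violation of platform policy
    status: ActionStatus

def detection_to_action_gap(posts: list[FlaggedPost]) -> float:
    """Share of verified posts still awaiting any platform action -
    the gap the partnership aims to narrow."""
    pending = [p for p in posts if p.status is ActionStatus.NEEDS_TO_BE_REPORTED]
    return len(pending) / len(posts)

# Two hypothetical entries modeled on the May 2026 snapshot described above.
posts = [
    FlaggedPost(date(2026, 5, 11), "X", "en", 10, True, ActionStatus.NEEDS_TO_BE_REPORTED),
    FlaggedPost(date(2026, 5, 3), "Facebook", "en", 7, True, ActionStatus.REMOVED),
]
print(f"{detection_to_action_gap(posts):.0%} of flagged posts await action")
```

A metric like `detection_to_action_gap` is exactly the kind of platform-specific figure the targeted reports can put in front of trust and safety teams.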
The scale of the problem
The numbers in AJC's 2025 report are striking in several respects. According to the report, 73% of American Jews say they have experienced antisemitism online, either by personally being targeted or by witnessing it. Among that group, one in five (21%) said they felt physically threatened by online incidents. The broader American adult population encounters antisemitism in similar places: among U.S. adults who reported seeing or hearing antisemitism in the past 12 months, nearly three-quarters said they encountered it online or on social media. The next most common source was friends or family, cited by just 20% of respondents.
The report also measures concern about AI specifically. According to the findings, 65% of American Jews say they are concerned that generative AI chatbots - named examples include Grok, ChatGPT, and Claude - will spread antisemitism, and 69% say they are concerned that information and misinformation shared by generative AI chatbots will lead to antisemitic incidents. These figures are particularly relevant given the speed at which AI-generated content now circulates on social platforms and the difficulty detection systems have in identifying content that is synthetic rather than authored by a human.
The report was based on surveys conducted in fall 2025, a period that followed a year the report characterizes as one of the most violent for American Jews in recent history. According to the report, 2025 saw an arson attack during Passover on the Pennsylvania Governor's residence, a firebombing during a march in support of hostages in Boulder, Colorado, and the murders of Sarah Milgrim and Yaron Lischinsky outside the Capital Jewish Museum. The report finds that 91% of American Jews say they feel less safe as a Jewish person in the United States as a result of violent attacks in the past year, and 55% say they changed their behavior in the past year out of fear of antisemitism.
Technical dimensions of the partnership
The operational core of the partnership is the combination of survey-based data and platform-level content analysis. AJC brings structured data on user experience: how often American Jews encounter antisemitism, where, what forms it takes, and how it affects behavior. CyberWell brings real-time detection: AI systems scanning posts, analysts verifying flagged content, and structured records of what reaches platform moderators and what gets removed.
According to the press release, over the last six months the two organizations crafted and presented targeted reports to Meta and X, reflecting American Jews' experiences on each specific platform and recommending urgent steps tailored to each company's structure and policy architecture. The individualized approach is deliberate: Meta and X face different technical constraints, different regulatory environments, and different moderation architectures.
The March 2026 public report, titled "The State of Antisemitism in America: Findings and Recommendations for Major Digital Platforms," outlined cross-platform priorities. According to the press release, these include strengthening enforcement against the glorification of antisemitic attacks and pro-terror content, improving detection of coded and AI-generated antisemitism, preventing coordinated manipulation and bot activity, and increasing transparency around content moderation and algorithmic amplification.
That last point - algorithmic amplification - is technically significant. Content moderation research has consistently shown that platforms' recommendation and ranking systems can elevate harmful content not because moderators approved it, but because engagement signals such as shares, reactions, and comments cause algorithms to distribute it more widely. Antisemitic content that remains technically within platform rules - because it uses coded language, relies on imagery rather than text, or frames conspiracy claims as factual reporting - can accumulate large engagement numbers before detection systems flag it. CyberWell's database entries include descriptions of this dynamic, including one May 2026 post described as combining an explicit conspiracy claim about Israel causing a disease outbreak with audio framing that links the accusation to Jewish identity collectively rather than to the state specifically.
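The amplification dynamic described above can be illustrated with a toy ranking function. Everything here is invented for illustration - the weights, the field names, and the scoring formula are not Meta's or X's actual algorithm - but the structural point holds: ranking consults engagement signals, while policy flags enter the pipeline separately and often later.

```python
import math

def engagement_score(shares: int, comments: int, reactions: int) -> float:
    # A generic engagement heuristic: shares weighted most heavily,
    # with log dampening for diminishing returns. Weights are arbitrary.
    raw = 3.0 * shares + 2.0 * comments + 1.0 * reactions
    return math.log1p(raw)

def rank_feed(posts: list[dict]) -> list[dict]:
    # Ranking looks only at engagement. A coded post that evades keyword
    # detection carries no flag yet, so it competes on the same terms as
    # benign content - and can win distribution before moderation catches up.
    return sorted(
        posts,
        key=lambda p: engagement_score(p["shares"], p["comments"], p["reactions"]),
        reverse=True,
    )

feed = [
    {"id": "benign",            "shares": 40,  "comments": 120, "reactions": 900},
    {"id": "coded-conspiracy",  "shares": 350, "comments": 80,  "reactions": 400},
]
top = rank_feed(feed)[0]["id"]
print(f"Top of feed: {top}")
```

In this sketch the coded post outranks the benign one purely on engagement arithmetic, which is the gap that transparency around algorithmic amplification is meant to expose.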
Joint trust and safety training
Beyond the public reports, according to the press release, AJC and CyberWell will provide joint training for tech and social media platforms' trust and safety teams and senior leadership. This is a relatively unusual form of NGO-platform engagement: rather than only lobbying companies from the outside, the organizations intend to work inside platform teams to improve how antisemitism is understood and recognized by the people building moderation systems.
The relevance to trust and safety practitioners is considerable. Meta's transparency reporting under the EU Digital Services Act (DSA) showed the company processed over 61.4 million content actions in 2024, with hate speech removals reaching 1.15 million instances on Facebook and 1.45 million on Instagram. Meta has simultaneously overhauled its content moderation approach, ending third-party fact-checking and moving trust and safety teams from California to Texas, changes that raised concerns in parts of the advocacy community about the trajectory of platform enforcement.
The question of what counts as antisemitism - and whether platform moderation systems can reliably detect coded, AI-generated, or context-dependent forms - lies at the heart of both organizations' work. Meta has invested in reinforcement learning systems for content moderation, with researchers publishing findings showing 10x to 100x data efficiency improvements over supervised training methods. But even with those advances, policy definitions involve nuanced distinctions that automated systems handle poorly when content is deliberately evasive.
CyberWell's model is designed to address exactly that gap. Its analysts work alongside AI detection tools rather than relying on automation alone, reviewing flagged content against IHRA definitions before submitting reports to platforms. According to organizational materials, CyberWell partners with social media platforms and digital service companies directly to help them enforce their policies more effectively, including through real-time alerts.
AJC's broader tech engagement
The formalization with CyberWell is one component of a wider AJC strategy on technology platforms. According to AJC materials, the organization calls on tech, social media, and AI companies to train AI models to properly identify antisemitism; to name antisemitism within terms of service; to make it easier for users to report antisemitic content; to avoid policy changes that would increase the visibility and distribution of antisemitic content; to improve moderation systems; and to publish and improve transparency reports.
AJC has also established a "Translate Hate" glossary - an online tool covering dozens of antisemitic words and phrases, including terms that re-emerged or changed meaning after the October 7, 2023 Hamas attack on Israel. The resource is intended for use by the public, educators, and platform teams trying to understand how contemporary antisemitism is articulated in coded or shifting language.
Half of Jewish users who experienced antisemitism online did not report these incidents to social media companies, according to the State of Antisemitism in America 2024-2025: Findings and Recommendations for Major Digital Platforms report released in partnership with CyberWell. The reason most commonly given: they do not think anything will be done. That skepticism is consistent with the structural picture: platform policies, transparency reports, and enforcement counts represent substantial infrastructure, but the gap between stated policy and experienced enforcement remains a central problem.
Germany's courts have been testing algorithmic transparency obligations through enforcement cases that address how recommendation systems elevate harmful content - a dynamic directly relevant to the spread of antisemitic material online. For advertisers and marketing professionals operating on these platforms, the policy environment around hate speech and content moderation has significant implications for brand safety, platform suitability assessments, and the trust signals that underpin digital advertising investment.
The AJC-CyberWell partnership, by generating public, platform-specific data on antisemitism enforcement gaps and recommendations for each major platform, may also serve as a reference point for advertisers evaluating the gap between platform policy statements and actual moderation performance - a question that has grown more pressing as major platforms have reduced or restructured their moderation investments.
Timeline
- October 7, 2023 - Hamas attacks southern Israel, triggering a surge in online antisemitism that AJC documents in subsequent annual surveys
- December 1, 2024 - Meta DSA transparency report shows 61.4 million content actions and 1.15 million hate speech removals on Facebook in 2024
- January 7, 2025 - Meta announces end of third-party fact-checking program and restructuring of trust and safety teams
- 2025 - AJC and CyberWell begin jointly presenting targeted platform-specific reports to Meta and X, covering antisemitism data and recommendations for each company
- July 25, 2025 - U.S. House Judiciary Committee releases DSA report examining how EU rules affect content moderation globally
- Fall 2025 - AJC conducts surveys of American Jews and the U.S. general public that form the basis of the 2025 State of Antisemitism in America Report
- November 30, 2025 - German courts begin testing algorithmic transparency obligations relevant to the spread of harmful content
- January 2, 2026 - Meta publishes reinforcement learning content moderation research showing 10x-100x efficiency gains over supervised training
- March 2026 - AJC and CyberWell release "The State of Antisemitism in America: Findings and Recommendations for Major Digital Platforms," a public cross-platform report
- May 13, 2026 - AJC and CyberWell formally announce their partnership to combat online antisemitism, expanding joint work with major tech and social media companies
Summary
Who: American Jewish Committee (AJC), a global Jewish advocacy organization with 40 offices operating in more than 110 countries, and CyberWell, an independent tech-based nonprofit that uses AI to monitor social media for antisemitic content.
What: The two organizations formalized a partnership to understand, respond to, and prevent online antisemitism, combining AJC's policy expertise and survey data with CyberWell's real-time content detection capabilities and open antisemitism database. The partnership expands targeted reporting to major tech and social media companies and adds joint trust and safety training for platform teams.
When: The partnership was formalized on May 13, 2026, building on approximately six months of prior joint work including reports delivered to Meta and X and a public cross-platform report released in March 2026.
Where: The organizations operate globally, with CyberWell monitoring content across major social media platforms and AJC maintaining offices on six continents. Platform engagement includes Meta's Facebook and Instagram and X, among others.
Why: AJC's 2025 survey data shows 73% of American Jews have experienced antisemitism online, the highest level recorded in the survey's history. Half of those who experienced online antisemitism did not report it because they expected no action. The partnership aims to generate better data, platform-specific recommendations, and direct training for trust and safety teams, with the goal of closing the gap between platform policy and actual moderation outcomes.