Understanding Online Antisemitism: WJC Technology and Human Rights Institute Releases Preliminary Findings on Human vs. AI Content Moderation

NEW YORK -- The WJC's Technology and Human Rights Institute (TecHRI) unveiled preliminary findings from its Human vs. AI: Comparison of Online Antisemitism Experiences project. The study, presented during a virtual event, explores the effects of online hate on Jewish individuals and assesses how well generative AI (GAI) systems recognize and interpret antisemitism compared to human understanding.

This groundbreaking project, which ran from September to December 2024, focused on antisemitic content directed at two Jewish individuals on social media: Rebecca Cantor, a linguistics graduate student and TikTok content creator in the U.S., and Joshua Bonfante, a former leader of the Italian Union of Jewish Students and co-creator of the TikTok account @AskAJew. Both participants, members of WJC's NextGen programs, meticulously documented the hate-filled comments they received, providing a rich dataset for the study. The study compared their classifications of the comments as antisemitic hate speech with those of two AI platforms, ChatGPT and Claude.

The study aimed to compare human and AI perceptions of antisemitism. Using comments categorized according to the International Holocaust Remembrance Alliance's (IHRA) working definition of antisemitism, the researchers reached several important findings:

1. AI Recognition: The chatbots ChatGPT and Claude were tasked with analyzing the same data. When provided with the IHRA working definition of antisemitism, Claude identified 100% of the comments categorized by human reviewers as antisemitic, while ChatGPT flagged only 88.7%, struggling with contextual nuances (a sketch of this classification setup follows the list).

2. Contextual Review: Connecting each comment to the context in which it was left proved crucial for both human and AI reviewers in identifying antisemitism.

3. Inherent AI Bias: Both chatbots also predicted that comments left by extreme far-right users would be more stressful for the two subjects, with no mention of the rise in hateful speech from the extreme far left since October 7.

4. Understanding of the Holocaust: Both chatbots successfully understood the nuances and coded language used in Holocaust denial and distortion-themed comments. This demonstrates that chatbots have the capacity to understand such content, and that the systems used by online platforms could be trained accordingly.

5. Dynamic Nature of Hate: The research demonstrated the "dynamic nature of language": online hate is reshaping language itself. For example, "Auschwitz" has been used as a verb to describe how a commenter perceives the actions of the State of Israel in Gaza: "Maybe stop auschwitzing kids (...)". Language is profoundly impactful in shaping the broader discourse and narrative: it starts with words.

6. Patterns in Hate Speech: Both Rebecca and Joshua highlighted the use of coded language, emojis, and intentional misspellings designed to bypass content moderation. For example, Rebecca noted the frequent use of snake emojis paired with hateful phrases, making the intent clear to its target yet subtle enough to evade detection by both human reviewers and AI systems.

7. Psychological Impact: The human participants reflected on the emotional toll of antisemitic rhetoric, emphasizing how certain comments struck deeply personal chords, particularly when they referenced ancestry or invoked coded threats. One doubled down on creating Jewish-oriented content, while the other left the platform for a considerable time.
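
To make the classification setup in finding 1 concrete, here is a minimal sketch of how a comment might be checked against the IHRA working definition through an LLM API. The model name, prompt wording, and YES/NO label format are illustrative assumptions; the report does not publish its exact prompts.

```python
# Illustrative sketch only: give an LLM the IHRA working definition and ask it
# to label a single comment. The model name, prompt text, and YES/NO output
# format are assumptions, not the study's published protocol.
from openai import OpenAI

# The full IHRA working definition text would go here (elided for brevity).
IHRA_DEFINITION = "Antisemitism is a certain perception of Jews, ..."

def classify_comment(client: OpenAI, comment: str) -> bool:
    """Return True if the model labels the comment antisemitic under the definition."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the study does not name a version
        messages=[
            {
                "role": "system",
                "content": (
                    "You are reviewing social media comments.\n"
                    f"Working definition of antisemitism:\n{IHRA_DEFINITION}\n"
                    "Answer YES or NO: is the user's comment antisemitic under this definition?"
                ),
            },
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")
```

Running the same comments through Anthropic's client with an equivalent prompt would produce the parallel set of Claude labels that the study compared against the human reviewers' categorizations.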

The project revealed both strengths and weaknesses in current AI systems. Interestingly, the chatbots were skeptical that other users would intervene to defend victims of hate: Claude predicted limited intervention, while ChatGPT anticipated no defense at all.

What makes this project particularly innovative is its dual approach, combining the deeply personal experiences of individuals subjected to online antisemitism with the analytical capabilities of AI systems. By analyzing the same dataset from these two perspectives, the study identifies gaps in both human and machine understanding and offers practical recommendations for refining AI moderation tools.
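
As a rough illustration of that two-perspective analysis, the headline recognition figures (100% for Claude, 88.7% for ChatGPT) correspond to the share of human-flagged comments that each model also flagged, which can be computed from parallel label lists. The function and data below are hypothetical.

```python
# Sketch of the human-vs-AI comparison: over the same set of comments, compute
# the share of comments the human reviewers flagged that the model flagged too.
# This is the kind of figure behind the reported 100% / 88.7% results.
def flag_agreement(model_flags: list[bool], human_flags: list[bool]) -> float:
    """Fraction of human-flagged comments that the model also flagged."""
    human_flagged = [(m, h) for m, h in zip(model_flags, human_flags) if h]
    return sum(m for m, _ in human_flagged) / len(human_flagged)

# Hypothetical example: humans flagged 4 comments and the model caught 3 -> 0.75.
print(flag_agreement([True, False, True, True], [True, True, True, True]))
```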

The findings highlight the urgent need for improved AI systems that can better detect and respond to online hate while emphasizing the importance of empowering individuals who experience antisemitism to share their stories. This research underscores the potential for collaboration between technology and lived experience in addressing antisemitism and fostering a safer online environment for all.

The full results of Human vs. AI: Comparison of Online Antisemitism Experiences will be available for public evaluation at the start of January 2025.
