
Building a Safer Europe: Combatting Gender-Based Hate Speech in the Digital Sphere through the C.H.A.S.E. Project

by Omar El Sayed - World Editor

European Coalition Launches AI Tool to Combat Online Gender-Based Hate Speech

Brussels, Belgium – September 27, 2025 – A groundbreaking collaborative effort involving media organizations, legal experts, and technology specialists from five European nations is poised to reshape the landscape of online safety. The project, known as C.H.A.S.E., is introducing an Artificial Intelligence-driven tool designed to detect and moderate gender-based hate speech in real time.

Addressing a Growing Crisis

The initiative originates from a shared concern over the rising tide of online harassment and discrimination targeting individuals based on their gender or gender identity. Extensive research spearheaded by the Media Diversity Institute Global (MDIG), alongside partners including WAN-IFRA and the European Centre for Human Rights, revealed significant gaps in legal protections and moderation practices across Europe. This research highlights the need for proactive measures to safeguard online spaces.

Legal Frameworks Under Scrutiny

Recent legal analysis, published earlier in September 2025, uncovered inconsistencies in how different European countries address online hate speech. In Greece, existing legislation does not explicitly recognize gender as a protected characteristic, while Italy’s hate speech laws fail to adequately cover gender identity. Similarly, Cyprus faces challenges due to fragmented laws and vague definitions. France, despite legal protections for free expression, struggles with enforcing penalties against online perpetrators. This legal landscape fuels the need for more robust legal safeguards and better training for law enforcement.

The Role of Online Platforms

The C.H.A.S.E. project recognizes that online media platforms are both conduits and potential solutions to the problem of hate speech. Many platforms currently lack the resources, policies, or expertise to effectively moderate harmful content. A recent study by Oxford University, released in February 2025, indicated that a majority of social media users favor restrictions on harmful content like threats and defamation, preferring safety over absolute freedom of expression.

Key Research Findings

The project’s research identified several alarming trends:

  • Hate speech frequently surges in response to real-world events, such as pride parades or debates on gender-related legislation.
  • Victims, particularly women, LGBTQI+ individuals, and migrants, often underreport incidents due to fear, distrust, or lack of awareness.
  • Inconsistent legal frameworks across the European Union leave vulnerable groups inadequately protected.

Introducing the AI-Powered Moderation Tool

At the heart of C.H.A.S.E. is a cutting-edge AI-powered tool designed to automatically detect, flag, and report hateful content. Launched after a co-creation workshop in July 2025, the tool uses advanced machine learning algorithms to analyze text, identify harmful patterns, and classify content, all while respecting freedom of expression and privacy regulations. Starting October 1, 2025, partner media outlets across Europe will begin real-world testing of the tool.
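The detection step described above follows the pattern of a conventional supervised text classifier. The sketch below is purely illustrative, not the actual C.H.A.S.E. tool (whose model and training data are not public): it trains a TF-IDF + logistic regression pipeline on a tiny toy corpus and flags comments whose predicted hate-speech probability crosses a threshold.

```python
# Illustrative sketch of an automated detect-and-flag pipeline; the real
# C.H.A.S.E. system and its training corpus are assumptions, not shown here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; a production system would use a large annotated corpus.
texts = [
    "You are a wonderful colleague",
    "Great work on the article today",
    "Women like you should not be allowed online",
    "Go back to the kitchen, you don't belong here",
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = gender-based hate speech

# TF-IDF turns text into weighted word/bigram counts; logistic regression
# learns which features signal hateful content.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def moderate(comment: str, threshold: float = 0.5) -> str:
    """Flag a comment for review if its hate-speech probability crosses the threshold."""
    prob = model.predict_proba([comment])[0][1]
    return "flag" if prob >= threshold else "allow"
```

In practice such models are only one stage of a pipeline: flagged items typically go to human moderators, which is how systems like this reconcile automation with the freedom-of-expression and privacy constraints the article mentions.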

| Country | Legal Protection for Gender-Based Hate Speech | Key Findings |
| --- | --- | --- |
| Greece | Limited; gender not explicitly protected | Laws primarily focus on racist/xenophobic speech |
| Italy | Incomplete; gender identity not covered | Existing laws may not adequately address hate targeting LGBTQI+ individuals |
| Cyprus | Fragmented and vague | Challenges in addressing sexism, misogyny, and transphobia |
| France | Protected, but enforcement is challenging | Difficulty in prosecuting online hate speech perpetrators |

Next Steps and Future Impact

A training-of-trainers program is scheduled for October 9-10 in Cyprus, focusing on the practical application of the new moderation tool and the implementation of a code of conduct for online media. By December 2025, partners will offer broader training opportunities, enabling more media organizations to independently utilize the C.H.A.S.E. tool. The project will conclude in January 2026 with a summit in Brussels, where recommendations will be presented for integration into European legislation.

The Evolving Threat of Online Hate

Online hate speech is a continuously evolving challenge, adapting to new platforms and emerging trends. The C.H.A.S.E. project represents a crucial step towards creating a safer and more inclusive digital environment, but ongoing vigilance and collaboration remain essential. According to a 2024 report by the Anti-Defamation League, online antisemitism increased by 30% year-over-year, demonstrating the persistent need for proactive measures against hate speech in all its forms.

Frequently Asked Questions About Online Hate Speech



What specific protocols does the C.H.A.S.E. project outline for law enforcement regarding evidence gathering in cases of online gender-based hate speech?


Understanding the Scope of Online Gender-Based Violence

Gender-based hate speech online is a pervasive issue across Europe, impacting women and LGBTQ+ individuals disproportionately. This digital violence manifests in numerous forms, including:

* Cyberstalking: Repeated harassment and intimidation using digital technologies.

* Online Harassment: Abusive or threatening messages, often targeting someone’s gender or sexual orientation.

* Doxing: Publishing private or identifying information online without consent.

* Image-Based Sexual Abuse (Revenge Porn): Sharing intimate images or videos without consent.

* Hate Speech: Attacks based on gender identity, expression, or sexual orientation. This often intersects with other forms of discrimination like racism and xenophobia.

These acts aren’t simply “online” problems; they have real-world consequences, contributing to self-censorship, mental health issues, and even physical violence. The rise of social media platforms and instant messaging apps has exacerbated the problem, providing new avenues for perpetrators and making such abuse harder to track and address. Terms like online abuse, digital harassment, and cyberviolence are frequently used when discussing this issue.

Introducing the C.H.A.S.E. Project: A Collaborative Response

The C.H.A.S.E. (Combating Hate Speech Against Women in the Digital Sphere) project represents a critically important, coordinated effort to tackle this growing threat. Funded by the European Union’s Rights, Equality and Citizenship Programme, C.H.A.S.E. brings together a consortium of organizations from across Europe – including NGOs, research institutions, and technology companies – to develop and implement effective strategies.

The project’s core objectives include:

  1. Research & Data Collection: Conducting comprehensive research to understand the prevalence, nature, and impact of online gender-based hate speech in different European contexts. This includes analyzing reporting mechanisms and identifying gaps in current legislation.
  2. Capacity Building: Training law enforcement, judicial professionals, and civil society organizations on how to identify, investigate, and prosecute cases of online gender-based violence. This also involves equipping them with the tools to support victims.
  3. Awareness Raising: Launching public awareness campaigns to educate citizens about the issue, promote responsible online behavior, and encourage reporting of incidents.
  4. Developing Best Practices: Creating a toolkit of best practices for preventing and responding to online gender-based hate speech, tailored to different stakeholders.
  5. Policy Recommendations: Formulating evidence-based policy recommendations for governments and social media platforms to improve the legal and regulatory framework.

Key Components of the C.H.A.S.E. Toolkit

The C.H.A.S.E. toolkit is a central deliverable of the project, offering practical guidance for various actors. It is structured around several key areas:

* For Law Enforcement: Detailed protocols for investigating online hate speech, including evidence gathering, digital forensics, and international cooperation. Emphasis is placed on understanding the nuances of online communication and the challenges of identifying perpetrators.

* For Judicial Professionals: Training materials on relevant legislation, case law, and sentencing guidelines. This section also addresses the psychological impact of online violence on victims and the importance of trauma-informed approaches.

* For Civil Society Organizations: Resources for providing support to victims, advocating for policy changes, and conducting awareness-raising campaigns. This includes guidance on setting up safe reporting mechanisms and offering psychological counseling.

* For Social Media Platforms: Recommendations for improving content moderation policies, enhancing reporting tools, and increasing transparency. The toolkit encourages platforms to adopt a proactive approach to identifying and removing harmful content. This includes utilizing AI and machine learning responsibly.

* For Educators: Curriculum materials for schools and universities to promote digital literacy, critical thinking, and respectful online behavior. This aims to prevent online gender-based violence by addressing its root causes.

The Role of Technology in Combating Online Hate

Technology plays a dual role in this issue. While social media platforms can be vectors for hate speech, they also offer potential solutions. The C.H.A.S.E. project explores the use of:

* Artificial Intelligence (AI): AI-powered tools can be used to automatically detect and flag potentially harmful content, but these tools must be carefully designed to avoid bias and ensure accuracy.

* Machine Learning (ML): ML algorithms can learn to identify patterns of abusive behavior and predict future incidents.

* Natural Language Processing (NLP): NLP techniques help moderation systems interpret context, slang, and coded language that simple keyword filters miss.
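The caveat about bias and accuracy is often operationalized with a human-in-the-loop policy: the model acts automatically only at high confidence, and uncertain cases are escalated to a human moderator. A minimal sketch of such a routing policy follows; the thresholds and action names are illustrative assumptions, not taken from the C.H.A.S.E. project.

```python
# Hypothetical confidence-banded moderation policy (not the C.H.A.S.E. design):
# only high-confidence scores trigger automatic action; the middle band is
# escalated to a human reviewer to guard against AI bias and false positives.

def route(score: float) -> str:
    """Map a model's hate-speech probability to a moderation action."""
    if score >= 0.9:
        return "remove"        # high confidence: act automatically
    if score >= 0.5:
        return "human_review"  # uncertain: escalate to a moderator
    return "allow"             # low risk: publish

# Example: routing a batch of scored comments
actions = [route(s) for s in (0.95, 0.62, 0.10)]
```

Widening or narrowing the human-review band is the main tuning knob here: a wider band costs more moderator time but reduces both wrongful removals and missed abuse.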
