Riot Games & Ubisoft partner on initiative to decrease toxicity

Riot and Ubisoft launch AI initiative Zero Harm in Comms to combat gaming toxicity through industry collaboration

The Zero Harm in Comms Initiative

Two gaming industry giants, Riot Games and Ubisoft, have formed a strategic partnership to address one of gaming’s most persistent challenges: toxic behavior in online communications. Their joint research endeavor, officially titled “Zero Harm in Comms,” represents a significant advancement in using artificial intelligence to create safer gaming environments.

The collaboration centers on AI technology designed to identify and curb harmful player interactions across both companies’ titles.

Both development studios are pooling their technological resources and expertise to build sophisticated AI systems capable of monitoring and moderating in-game chat functions. This represents a shift from reactive moderation to proactive prevention of disruptive behavior.

According to official documentation, the initiative focuses on establishing what they term a “cross-industry shared database and labeling ecosystem” specifically for in-game interaction data. This collective database will serve as the training foundation for AI moderation systems to accurately “detect and mitigate disruptive behavior” before it escalates.
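Neither company has published the schema for this shared database, but a labeling ecosystem of this kind generally means attaching human (or model-assisted) judgments to anonymized chat records. The sketch below is purely illustrative; every field name is an assumption, not part of the announced project.

```python
from dataclasses import dataclass, asdict

@dataclass
class ChatLabel:
    """One labeled in-game chat message for a hypothetical shared
    moderation dataset (field names are illustrative assumptions)."""
    message_id: str
    text: str            # raw chat line, anonymized before sharing
    game_context: str    # e.g. "post-round", "champion-select"
    label: str           # e.g. "neutral", "banter", "harassment"
    annotator_id: str    # which labeling team produced the judgment

record = ChatLabel(
    message_id="msg-0001",
    text="gg ez",
    game_context="post-match",
    label="banter",
    annotator_id="team-a",
)

print(asdict(record)["label"])  # -> banter
```

Sharing records in a common structure like this is what would let models trained on one publisher's data generalize to another's games.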

How AI Moderation Works

Yves Jacquier, Executive Director at Ubisoft La Forge, emphasized the complexity of addressing player behavior issues in official statements: “Disruptive player behaviors represent a challenge we approach with serious consideration, yet acknowledge the inherent difficulties in creating effective solutions. Our team at Ubisoft has implemented numerous concrete measures to foster safe and enjoyable gaming experiences, but we strongly believe that industry-wide cooperation will dramatically improve our ability to address these concerns effectively.”

The collaboration extends beyond basic technological development to establish foundational frameworks for future industry cooperation through “Zero Harm in Comms.” This includes creating comprehensive ethical guidelines and privacy protection protocols as integral components of the project architecture.

By integrating data from Riot’s intensely competitive titles with Ubisoft’s diverse portfolio of gaming experiences, the resulting database will encompass an extensive range of player interactions and behavioral patterns. This comprehensive data collection is crucial for training sophisticated AI systems that can accurately recognize and address toxic behavior across different gaming contexts and communities.

The planned moderation system would operate through multiple detection layers, analyzing text patterns, communication frequency, and contextual cues to identify potentially harmful interactions. Natural language processing models are expected to distinguish competitive banter from genuinely toxic behavior, reducing false positives while maintaining effective protection.
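Neither company has disclosed how these layers are implemented; production systems would use trained NLP models rather than keyword lists. As a rough illustration of the layered idea, the toy monitor below combines a pattern check, a context allow-list for banter, and a message-frequency heuristic. All names and thresholds here are assumptions for the sketch.

```python
import re
from collections import deque

# Illustrative only: real systems use trained models, not keyword lists.
TOXIC_PATTERNS = [r"\buninstall\b", r"\btrash\b"]
BANTER_ALLOWLIST = {"gg ez", "get rekt"}  # competitive banter, not flagged

class ChatMonitor:
    """Toy two-layer check: text patterns plus message-frequency bursts."""

    def __init__(self, burst_limit=5):
        # timestamps of the most recent messages from one player
        self.recent = deque(maxlen=burst_limit)

    def is_flagged(self, text, timestamp):
        self.recent.append(timestamp)
        if text.lower().strip() in BANTER_ALLOWLIST:
            return False  # contextual allow-list reduces false positives
        pattern_hit = any(re.search(p, text.lower()) for p in TOXIC_PATTERNS)
        # burst heuristic: many messages in a short window suggests spam
        bursty = (len(self.recent) == self.recent.maxlen
                  and self.recent[-1] - self.recent[0] < 3.0)
        return pattern_hit or bursty

monitor = ChatMonitor()
print(monitor.is_flagged("nice shot!", 0.0))  # -> False
print(monitor.is_flagged("trash team", 1.0))  # -> True
```

Combining independent signals this way is one common reason layered systems produce fewer false positives than any single detector: a message must look suspicious on at least one axis, and contextual allow-lists can override weak matches.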

Industry Impact and Future Plans

Although the “Zero Harm in Comms” project remains in its developmental phases, both Riot and Ubisoft have made a firm commitment to transparency. They plan to share their preliminary findings and research outcomes with the broader gaming industry next year, maintaining this commitment “regardless of the project’s ultimate results.”


Riot Games initially announced their intention to monitor in-game voice communications on North American Valorant servers back in April 2021, providing early indications of this AI-driven initiative’s development trajectory.

In a February 2022 progress update, Riot revealed it had issued chat restrictions to more than 400,000 accounts during January 2022 alone, underscoring the scale of the moderation challenge.

The industry-wide implications of this collaboration could establish new standards for online behavior management. Other developers may adopt similar AI-driven approaches, creating a more consistent experience for players moving between different gaming platforms and communities.

Practical Tips for Players

As AI moderation systems become more sophisticated, players can take proactive steps to contribute to healthier gaming environments. Understanding what constitutes toxic behavior and how to avoid accidental violations is crucial for maintaining positive gaming experiences.

Avoid Common Toxicity Pitfalls: Many players unintentionally cross into toxic territory during competitive moments. Avoid personal attacks, excessive criticism, and discriminatory language—even when frustrated. The AI systems are trained to detect patterns of behavior, not just individual comments.

Utilize Reporting Systems Effectively: When you encounter toxic behavior, use in-game reporting features promptly. These reports help train AI systems to recognize emerging patterns of harmful behavior and improve detection accuracy over time.

Practice Constructive Communication: Focus on game-related strategy and coordination rather than personal comments. The AI systems are designed to distinguish between competitive discussion and personal attacks, so keeping communications game-focused reduces false positive detections.

Understand Cultural Differences: Gaming communities span global boundaries, and communication norms vary. Be mindful of cultural context in your interactions, as AI systems are being trained to account for these differences in their toxicity assessments.
