Video games nowadays are rarely just games. Increasingly, they are social spaces where millions of people talk, compete, argue, and collaborate. Even games built primarily around single-player experiences raise significant data security and privacy concerns, and the games themselves are becoming more difficult, expensive, and complex to maintain. That reality has forced developers to confront a difficult question: how do you keep enormous, fast-moving digital communities and solo players safe without crushing what makes them fun in the first place?
Increasingly, the answer is artificial intelligence.
For years, moderation relied on blunt tools: keyword filters that caught obvious slurs, and player reporting, which depends on users actively flagging abuse. That approach simply doesn’t scale when a single platform can generate billions of messages a day, many of them in real time, many of them spoken rather than typed.
Modern machine learning systems can scan text and voice chat at a speed no human team could match. Roblox, a platform with an average of 97.8 million daily active users, is one of the most visible examples. Its AI-driven moderation systems monitor interactions across a service used heavily by children and teens.
“Working at this scale and speed would require hundreds of thousands of human moderators working 24/7, not including weekends or vacation—and that’s just to moderate chat messages,” said Naren Koneru, Roblox’s Vice President of Engineering and Safety. “We’d need thousands more to moderate all the other content types on Roblox.”
Machine learning, on the other hand, can “make these decisions in milliseconds, repeatedly, consistently and 24 hours a day,” he said.
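To make that concrete, here is a minimal sketch, in Python, of the shape such a real-time scoring pipeline takes. Everything here is hypothetical: the blocklist scorer is a toy stand-in for a trained toxicity model, and the names and thresholds are illustrative, not Roblox’s actual system.

```python
import time

# Toy stand-in for a trained toxicity classifier. Production systems
# run ML models over text and transcribed voice; this version just
# scores against a tiny blocklist to show the shape of the pipeline.
BLOCKLIST = {"slurword", "threatword"}  # placeholder tokens

def toxicity_score(message: str) -> float:
    words = message.lower().split()
    hits = sum(1 for word in words if word in BLOCKLIST)
    return min(1.0, 5 * hits / max(1, len(words)))

def moderate(message: str, block_threshold: float = 0.8) -> str:
    # The verdict is computed inline, before the message is delivered,
    # which is why millisecond-scale inference matters at this volume.
    start = time.perf_counter()
    verdict = "blocked" if toxicity_score(message) >= block_threshold else "allowed"
    elapsed_ms = (time.perf_counter() - start) * 1000
    return f"{verdict} ({elapsed_ms:.3f} ms)"

print(moderate("good game everyone"))  # allowed
print(moderate("slurword"))            # blocked
```

In production, the scoring step is a model inference call rather than a blocklist lookup, and borderline messages are typically routed to human reviewers rather than decided outright.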
Similar tools are employed in Riot Games’ League of Legends and VALORANT, in Activision’s Call of Duty, and in many of the industry’s most popular live-service games.
Safety doesn’t stop at speech. Cheating, botting, and exploit abuse can also undermine online communities. AI-powered anti-cheat tools look for abnormal behavior rather than relying solely on signatures of known exploits. If a player’s actions don’t resemble how real humans play, the system takes notice. This is particularly useful for maintaining fair play in competitive esports, a market projected to reach $5.1 billion in 2026.
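A simplified illustration of that behavioral approach: rather than matching signatures of known cheat software, the system measures how far a player’s statistics fall outside the human population. The feature, baseline data, and threshold below are invented for illustration.

```python
from statistics import mean, stdev

# Baseline gathered from a large population of legitimate players.
# These numbers are invented for illustration.
human_headshot_rates = [0.12, 0.18, 0.15, 0.22, 0.10, 0.17, 0.14, 0.20]

def z_score(value: float, sample: list[float]) -> float:
    return (value - mean(sample)) / stdev(sample)

def looks_inhuman(headshot_rate: float, threshold: float = 4.0) -> bool:
    # A z-score this far outside the human distribution is not proof
    # of cheating, but it is a strong signal worth escalating.
    return z_score(headshot_rate, human_headshot_rates) > threshold

print(looks_inhuman(0.19))  # ordinary player -> False
print(looks_inhuman(0.95))  # near-perfect aim -> True
```

Real systems track many such signals at once, from aim smoothness to input timing, and typically send flagged accounts to human review rather than banning automatically.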
“AI has proven to be instrumental in the fight against cheaters in the esports and gaming industry,” said Jumpstart’s Siddhesh Bawker. “As there are rising innovations and developments in AI and machine learning, we can only expect [this] software to get better.”
AI tools can also help developers “identify and fix bugs, and protect against vulnerabilities in video games and other software programs, which may be exploited by bad actors,” according to a release by the Entertainment Software Association.
The association points to Ubisoft’s Clever-Commit as a prime example:
“Ubisoft partnered with Mozilla, creators of the Firefox web browser, in the development of a coding assistant called Clever-Commit. This AI evaluates whether or not a code change will introduce a new bug, and proactively fixes it by learning from past bugs and fixes. By applying Clever-Commit to both games and web browsing, Ubisoft and Mozilla increase the knowledge of the AI’s toolbox.”
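Clever-Commit’s internals are not public, but the quoted description suggests a classifier trained on a project’s history of commits and the bugs they introduced. Here is a hypothetical sketch of that idea; the features and weights are invented, not taken from the actual tool.

```python
# Conceptual sketch of commit risk scoring in the spirit of Clever-Commit.
# A real system learns its features and weights from the project's own
# history of commits and subsequent bug fixes.

WEIGHTS = {
    "lines_changed": 0.002,          # bigger diffs carry more risk
    "files_touched": 0.05,           # wide changes are harder to review
    "touches_past_buggy_file": 0.4,  # history of bugs predicts future ones
}

def bug_risk(commit: dict) -> float:
    score = sum(WEIGHTS[feature] * float(commit[feature]) for feature in WEIGHTS)
    return min(1.0, score)  # clamp to a 0-1 risk score

risky = {"lines_changed": 400, "files_touched": 6, "touches_past_buggy_file": True}
small = {"lines_changed": 12, "files_touched": 1, "touches_past_buggy_file": False}

print(f"risky commit: {bug_risk(risky):.2f}")  # high -> flag before merge
print(f"small commit: {bug_risk(small):.2f}")  # low  -> proceed
```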
What’s notable is how collaborative much of this work has become. Some companies are open-sourcing parts of their safety infrastructure, sharing detection models and risk-assessment frameworks. Others are partnering with third-party firms that specialize in age assurance, compliance, or contextual moderation.
That shift toward shared infrastructure especially benefits smaller developers, who may not have the resources to build sophisticated moderation systems from scratch; shared tools lower that barrier.
“As we look ahead, we’re focused on how AI can address the barriers and frictions to playing and developing games,” wrote Fatima Kardar, Corporate Vice President of Gaming AI for Xbox. “This means that we’ll share our AI product innovation earlier on, providing opportunities for players and creators to experiment with and co-build new AI features and capabilities with us.”
She also wrote that Microsoft hopes to “develop tools that are used in ways that benefit everyone,” including its new generative AI model, Muse.
“Our innovation will continue to be built on our commitment to Responsible AI and dedication to developing AI solutions guided by six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.”
All of this matters because the stakes are enormous. The global video game industry now generates more than $200 billion annually, making it one of the largest entertainment markets in the world, larger by many estimates than film and music combined. The industry’s future depends on trust. And, at scale, that trust is impossible to maintain without AI.
But getting AI right isn’t just a technical challenge. Poorly designed AI legislation—especially in the West—risks undercutting the systems that keep online games functional and safe. Rules that restrict adaptive learning, real-time moderation, or access to behavioral signals could cripple safety tools while doing little to stop bad actors. If policymakers fail to distinguish between exploitative uses of AI and protective ones, they may end up driving leadership, talent, innovation, and entire studios elsewhere.
And so, the future of online safety in games will—for better or for worse—rely on AI. When it works well, players won’t think about it at all. They’ll just notice fewer obscenities in chat, fewer ruined matches, and fewer moments that make them log off angry.
That may be the best and simplest measure of success: a space that feels worth staying in.

