
League of Legends developer Riot Games is taking new steps to handle problem players more quickly and automatically, introducing a system that identifies and bans players guilty of “verbal harassment” within 15 minutes of the end of a match.
Riot explains how the new system works in a post on its Player Behavior blog. After teammates or opponents have reported a player for “homophobia, racism, sexism, death threats, and other forms of excessive abuse,” Riot’s automated system will validate those reports, determine whether they’re punishment-worthy, and send a “reform card” that links chat log evidence of the behavior to an explanation of the punishment. “This harmful communication will be punished with a two-week or permanent ban within 15 minutes of the end of the game,” Riot promises.
In a thread on the League of Legends forums, Riot Lead Designer of Social Systems Jeffrey Lin takes a closer look at the machine learning behind the automated system. Rather than just checking against an assigned list of “bad words,” the system tries to figure out which phrases often lead to player reports, Lin writes. “Each report and honor in the game teaches the system about behavior and what looks good or not good, so the system continuously learns over time,” he writes. “If a player displays excessive hate speech (homophobia, sexism, racism, death threats, etc.), the player can be permanently banned based on just one game. But this is quite rare!”
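Neither Lin nor Riot has published the model itself, but the approach he describes, learning which phrases predict validated reports instead of matching a fixed word list, can be illustrated with a toy text classifier. The sketch below is purely illustrative: the training data, the use of scikit-learn, and every name in it are assumptions for demonstration, not Riot’s implementation.

```python
# Toy illustration (NOT Riot's system): learn which phrases predict
# validated player reports instead of matching a fixed word list.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: chat lines paired with a label indicating
# whether reports against the speaker were validated (1) or not (0).
chat_lines = [
    "gg wp everyone",                # no report / report rejected
    "nice ult, well played",         # no report / report rejected
    "uninstall the game you idiot",  # report validated
    "kys noob",                      # report validated
]
labels = [0, 0, 1, 1]

# Word and bigram counts feed a logistic-regression classifier, so any
# phrase that co-occurs with validated reports raises the score -- the
# "bad phrase" signal is learned from reports, never written by hand.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(chat_lines, labels)

# Score new chat lines; a real system would also weigh report volume,
# honor signals, and severity thresholds before issuing a punishment.
print(model.predict_proba(["kys idiot"])[0][1])  # high probability
print(model.predict_proba(["gg wp"])[0][1])      # low probability
```

Because the features come from player reports and honor rather than a hand-maintained list, the classifier’s notion of punishable speech would shift as the community’s reporting behavior shifts, which matches Lin’s claim that “the system continuously learns over time.”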
Lin started testing the algorithms behind this kind of “instant feedback” system last July. Previously, however, those automated reports were simply escalated for manual review by the Player Support team, which could take considerable time and effort in a game with 67 million players per month. The new system appears to remove that human-judgment step from the process, allowing for near-instantaneous punishment.
Riot said it would have its moderation team review the first 1,000 cases covered by the instant feedback system when it rolled out to North American and EU servers last week. In any case, Lin writes on the forums that Player Support reps previously saw “false positive rates in the range of 1 in 6000. So we know the system isn’t perfect, but we think the accuracy is good enough to launch.”
The rollout has predictably drawn over a thousand responses on the League of Legends forums, with many people opposing the idea of automated punishment without human review (or of player chat moderation in general). Lin tweeted on Friday that Riot has already “tuned the NA/EU reform systems a little more conservatively as we observe this weekend.” Lin also tweeted, somewhat cryptically, that “one case where the system is too aggressive is no reason to shut down the system. Let’s be reasonable everyone!”
In the future, Riot hopes that a similar system could automatically penalize other forms of in-game behavior (such as “intentional feeding”) or even provide rewards for positive play.