
"I'm so glad I didn't hear the curse word you just yelled at me during our friendly game <em>uno</em> on Xbox Live!"†<figcaption class=

enlarge “I’m so glad I didn’t hear the curse word you just yelled at me during our friendly game” uno on Xbox Live!”

Microsoft announced today that it is rolling out filters that allow Xbox Live players to automatically limit the text-based messages they receive to four maturity levels: “Friendly, Medium, Mature, and Unfiltered.” That’s a long-awaited feature for a major communications platform that’s now well over a decade old, but it’s not really anything new in the field of online content moderation.
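Microsoft hasn't said how the new filters are implemented, but conceptually each level is just a per-recipient threshold applied to incoming messages. Here's a minimal Python sketch of that idea; the level ordering matches the announcement, while the word list, severities, and function names are invented for illustration:

```python
from enum import IntEnum

class FilterLevel(IntEnum):
    # The four levels Microsoft announced, from strictest to most permissive.
    FRIENDLY = 0
    MEDIUM = 1
    MATURE = 2
    UNFILTERED = 3

# Hypothetical severity table: each flagged term carries the minimum level
# at which it is allowed through unmasked. Real word lists are obviously far larger.
TERM_SEVERITY = {
    "dang": FilterLevel.MEDIUM,
    "heck": FilterLevel.MATURE,
}

def filter_message(text: str, receiver_level: FilterLevel) -> str:
    """Mask any word whose severity exceeds the receiving player's chosen level."""
    if receiver_level is FilterLevel.UNFILTERED:
        return text
    out = []
    for word in text.split():
        required = TERM_SEVERITY.get(word.lower(), FilterLevel.FRIENDLY)
        out.append("*" * len(word) if required > receiver_level else word)
    return " ".join(out)

print(filter_message("nice shot dang it", FilterLevel.FRIENDLY))   # nice shot **** it
print(filter_message("nice shot dang it", FilterLevel.MATURE))     # nice shot dang it
```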

What’s more interesting is a “forward-looking” promise Microsoft made at the end of the announcement:

Ultimately, our vision is to complement our existing efforts and leverage our business efforts in AI and machine learning technology to provide filtration for all types of content on Xbox Live, giving each individual player control. Your feedback is more important than ever as we continue to evolve this experience and make Xbox a safe, welcoming, and inclusive place to play.

That’s all a bit vague, but The Verge reports on the real gist of that passage: an attempt by the company to “address the challenge of voice chat toxicity on Xbox Live.” That means leveraging Microsoft’s existing work on speech-to-text machine learning to automatically filter out swear words that might come up in an Xbox Live party chat.

The ultimate goal, Xbox Live program manager Rob Smith told The Verge, is a system “similar to what you would expect on broadcast TV where people are having a conversation, and in real time we can detect a bad sentence and beep it out” for users who don’t want to hear it. While broadcast networks still use live engineers to censor their tape-delayed content (with sometimes disastrous errors), Microsoft’s plans include using machine learning to filter voice communications on a much larger scale without centralized human intervention.
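Microsoft hasn’t described its pipeline, but the approach Smith outlines — transcribe the voice stream, spot a flagged phrase, and mute that span before it reaches listeners — can be sketched roughly as follows. The transcriber is a stand-in (not a real Microsoft API), and the block list and audio format are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str       # transcribed word
    start_ms: int   # where the word begins within the audio chunk
    end_ms: int     # where it ends

BLOCK_LIST = {"dang", "heck"}   # placeholder block list

def transcribe(chunk: bytes) -> list[Word]:
    """Stand-in for a streaming speech-to-text call with word-level timestamps."""
    raise NotImplementedError

def bleep(chunk: bytearray, sample_rate: int = 16_000) -> bytearray:
    """Silence the 16-bit mono PCM samples covering any blocked word."""
    bytes_per_ms = sample_rate * 2 // 1000          # 2 bytes per sample
    for word in transcribe(bytes(chunk)):
        if word.text.lower() in BLOCK_LIST:
            start = word.start_ms * bytes_per_ms
            end = word.end_ms * bytes_per_ms
            chunk[start:end] = bytes(end - start)   # zeros = silence; a tone works too
    return chunk
```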

That should be welcome news for users concerned about Amazon-style human review of Alexa audio commands. Smith emphasized that “Ultimately, we have to respect privacy requirements for our users, so we will step into it thoughtfully, and transparency will be our guiding principle to keep us doing the right thing for our gamers.”

While speech-to-text algorithms are getting pretty fast and accurate these days, we’re probably still a long way from a centralized server that can reliably detect and block spoken swearing in real time. And even a small delay in transmission can have an outsized impact on players who need instant communication to work together in a fast-paced game situation.
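To put rough numbers on that concern (all of them illustrative assumptions, not figures from Microsoft), the delay a bleeping pass would add stacks up quickly on top of ordinary voice-chat latency:

```python
# Illustrative latency budget for server-side bleeping; every figure is an assumption.
capture_chunk_ms = 300    # audio buffered before analysis can even start
stt_inference_ms = 150    # speech-to-text processing time for that chunk
filter_check_ms = 5       # block-list / toxicity check on the transcript
extra_hop_ms = 40         # detour through a central moderation server

added_delay_ms = capture_chunk_ms + stt_inference_ms + filter_check_ms + extra_hop_ms
print(f"~{added_delay_ms} ms of extra delay")   # ~495 ms, i.e. about half a second
```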

In the meantime, Smith said an algorithm could measure a general “level of toxicity” in group chat and dole out automatic muting to problem users. Hopefully, those kinds of measurements could also feed into a player’s rating under the Xbox Live reputation system, which can already lead to bans for excessive profanity in human-reviewed video clips.
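Smith didn’t say how that “level of toxicity” would be computed, but the auto-mute idea maps naturally onto a rolling per-player score checked against a threshold. A hypothetical sketch, with the classifier, decay factor, and threshold all invented for illustration:

```python
from collections import defaultdict

MUTE_THRESHOLD = 5.0    # hypothetical score at which a player is auto-muted
DECAY = 0.9             # older behavior gradually counts for less

def toxicity(utterance: str) -> float:
    """Stand-in for a real toxicity classifier returning a score in [0, 1]."""
    flagged = {"dang": 0.4, "heck": 0.7}            # placeholder severities
    return max((flagged.get(w.lower(), 0.0) for w in utterance.split()), default=0.0)

scores: defaultdict[str, float] = defaultdict(float)

def handle_utterance(player: str, utterance: str) -> bool:
    """Fold a new utterance into the player's rolling score; True means mute them."""
    scores[player] = scores[player] * DECAY + toxicity(utterance)
    return scores[player] >= MUTE_THRESHOLD
```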
