Gaming giant uses AI to eavesdrop on players for ‘toxicity’

Activision last week began listening in on gamer chatter using an AI program that scans conversations for “toxicity.”

The gaming giant has partnered with tech firm Modulate to develop ToxMod, a program that searches both in-game text and voice chats for “hate speech, discriminatory language, sexism, harassment, and more.” As of Wednesday, ToxMod eavesdrops on North American players of Call of Duty: Warzone™ and Call of Duty: Modern Warfare II. In November, the program will be rolled out to all Call of Duty: Modern Warfare III players worldwide, excluding Asia.

Over one million gaming accounts have so far had their chats restricted by Call of Duty’s “anti-toxicity team,” the company boasted in a blog post Wednesday. Offenders first receive a warning and then face penalties if they re-offend.

ToxMod not only listens for language that may be offensive but also uses AI to determine whether offense has been taken. According to PC Gamer, the program can "listen to conversational cues to determine how others in the conversation are reacting to the use of [certain] terms." Certain words, for example, are treated as humorous when said by members of some races but as racial slurs when said by others.

"While the n-word is typically considered a vile slur, many players who identify as black or brown have reclaimed it and use it positively within their communities,” says ToxMod developer Modulate. “If someone says the n-word and clearly offends others in the chat, that will be rated much more severely than what appears to be reclaimed usage that is incorporated naturally into a conversation."

With assistance from the Anti-Defamation League (ADL), Modulate has programmed ToxMod to recognize “white supremacists” and “alt-right extremists,” behavior Modulate categorizes as “violent radicalization.”

“Using research from groups like ADL, studies like the one conducted by NYU, current thought leadership, and conversations with folks in the gaming industry, we’ve developed the category to identify signals that have a high correlation with extremist movements, even if the language itself isn’t violent,” explains Modulate on its website. “(For example, ‘let’s take this to Discord’ could be innocent, or it could be a recruiting tactic.)”

Tech corporations are making increasing use of AI technology to censor content, a practice they call “content moderation.”

Last month OpenAI, the company behind the popular chatbot ChatGPT, published guidance on how to use artificial intelligence to streamline censorship. Its proposed method involves feeding the AI program a written policy outlining which content should be suppressed. The program is then tested with examples, and the prompts are adjusted as necessary.

Although OpenAI’s method would no longer require human “moderators,” the censorship guidelines would still be created by human censors, or “policy experts.” Examples of content to be flagged include a user asking the program where to buy ammunition or how to make a machete.
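
The general workflow OpenAI describes, writing a policy, asking a model to apply it, and checking the labels against test examples, can be sketched roughly as follows in Python. The policy text, labels, and model name here are illustrative placeholders, not OpenAI’s published prompts.

```python
# Minimal sketch of policy-driven moderation with an LLM (illustrative only).
# The policy wording, labels, and model name are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """\
Classify the user message under this example policy:
- ALLOW: ordinary conversation, even if heated.
- FLAG: requests for help acquiring or making weapons.
Answer with exactly one label: ALLOW or FLAG."""

def classify(message: str) -> str:
    """Ask the model to apply the written policy to a single message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": message},
        ],
        temperature=0,  # deterministic output keeps policy tests repeatable
    )
    return response.choices[0].message.content.strip()

# Testing with examples, then adjusting the policy wording when labels disagree
# with what the policy experts intended, is the iterative loop described above.
print(classify("Where can I buy ammunition?"))   # expected: FLAG
print(classify("That last match was brutal."))   # expected: ALLOW
```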

Microsoft’s Azure Content Safety is another suppression tool that applies AI algorithms to scour images and text for “harmful” content. Offending text or images are placed into one of four categories: sexual, violent, self-harm, or hate, and assigned a severity score between one and six.

While part of Microsoft’s Azure product line, Content Safety is designed as standalone software that third parties can use to police their own spaces, such as gaming sites, social media platforms, or chat forums. It understands 20 languages, along with the nuance and context used in each.
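
For developers embedding it, a text-analysis call looks roughly like the sketch below, which uses Microsoft’s azure-ai-contentsafety Python SDK. The endpoint and key are placeholders, and field names may differ between SDK versions, so treat this as illustrative rather than definitive.

```python
# Rough sketch of Azure AI Content Safety text analysis via the Python SDK
# (azure-ai-contentsafety). Endpoint and key are placeholders; attribute
# names may vary between SDK versions.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

# Ask the service to score a piece of text across its built-in categories
# (hate, sexual, violence, self-harm), each with a severity level.
result = client.analyze_text(AnalyzeTextOptions(text="Example chat message to screen"))

for item in result.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```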

Microsoft assures users that the product was programmed by “fairness experts” who defined what constitutes “harmful content.”