Microsoft launches AI-powered censorship tool
Microsoft on Tuesday announced the launch of a new product that uses AI technology to censor online content.
Azure Content Safety applies image-recognition algorithms and large language models, such as those that power ChatGPT and other chatbots, to scour images and text for “harmful” content. Offending text or images are placed into one of four categories (sexual, violent, self-harm, or hate) and assigned a severity score between one and six.
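To make that scoring concrete, here is a minimal sketch of how a client application might call a moderation endpoint of this kind and read back a category and severity for each result. The endpoint path, request fields, and response shape below are illustrative assumptions, not Microsoft’s documented API contract.

```python
# A minimal sketch of calling a content-moderation REST endpoint such as
# Azure Content Safety. The URL route, request body, and response fields
# here are assumptions for illustration, not a documented contract.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # hypothetical resource name
API_KEY = "<subscription-key>"                                     # placeholder credential

def analyze_text(text: str) -> list[dict]:
    """Send text for analysis and return a list of {category, severity} results."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",   # assumed route
        params={"api-version": "2023-10-01"},        # assumed version string
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    # Assumed response shape:
    # {"categoriesAnalysis": [{"category": "Hate", "severity": 2}, ...]}
    return resp.json().get("categoriesAnalysis", [])

if __name__ == "__main__":
    for result in analyze_text("example user comment"):
        print(result["category"], result["severity"])
```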
While part of Microsoft’s Azure product line, Content Safety is designed as standalone software that third parties can use to police their own spaces, such as gaming sites, social media platforms, or chat forums. It understands 20 languages, along with the nuance and context used in each.
Microsoft assures users that the product was programmed by “fairness experts” who defined what constitutes “harmful content”.
“We have a team of linguistic and fairness experts that worked to define the guidelines taking into account cultural, language and context,” a Microsoft spokesperson told TechCrunch. “We then trained the AI models to reflect these guidelines. . . . AI will always make some mistakes, so for applications that require errors to be nearly non-existent we recommend using a human-in-the-loop to verify results.”
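The “human-in-the-loop” recommendation in the quote above amounts to a severity-threshold gate: automated decisions for clear-cut cases, human review for borderline ones. The sketch below shows one plausible way to wire that up; the threshold values and the review-queue hook are hypothetical choices, not anything Microsoft prescribes.

```python
# A sketch of a human-in-the-loop gate layered on moderation scores.
# The thresholds and the review-queue callback are hypothetical, not
# values or interfaces recommended by Microsoft.
AUTO_BLOCK_SEVERITY = 5   # assumed: at or above this, block without review
AUTO_ALLOW_SEVERITY = 1   # assumed: at or below this, allow without review

def route_content(results: list[dict], send_to_review_queue) -> str:
    """Decide allow / block / review from per-category severity scores."""
    worst = max((r["severity"] for r in results), default=0)
    if worst >= AUTO_BLOCK_SEVERITY:
        return "block"
    if worst <= AUTO_ALLOW_SEVERITY:
        return "allow"
    send_to_review_queue(results)   # a human verifies the borderline case
    return "pending_review"
```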
While concerns may be raised about a team of unknown individuals forcing their biases onto millions of users, the objections are not likely to come from within Microsoft. Over two months ago, the company fired its Ethics and Society team, which was tasked with overseeing the ethical development of Microsoft’s AI products.
But while the technology is artificial intelligence, the biases are human. At a globalist corporation like Microsoft, this means the product is likely to run into hurdles caused by its woke programming.
An analysis from The New York Times on Monday found that AI-powered image recognition products from Amazon, Google and Apple are deliberately programmed to fail at recognizing gorillas for fear of mistaking them for images of Black people.
“Google, whose Android software underpins most of the world’s smartphones, has made the decision to turn off the ability to visually search for primates for fear of making an offensive mistake and labeling a person as an animal. And Apple, with technology that performed similarly to Google’s in our test, appeared to disable the ability to look for monkeys and apes as well,” reported the Times.
"Artificial Intelligence will simply reflect and magnify the mindset and ideology of its creators — and impress those values upon the rest of us," said Hoover Institution Senior Fellow Victor Davis Hanson.