Quartz: “…Facebook’s challenge is finding all these permutations of hate speech, bullying, threats, and terrorism in order to train its AI to look out for similar examples. The problem gets messier because not everyone agrees on what makes a post harmful or abusive. Facebook faced this problem in 2016, when the platform was deluged by reports of fake news during the US presidential election. While Zuckerberg cited AI as a potential solution at the time, the company is now hiring 20,000 humans to oversee content moderation. Mevan Babakar, head of automated fact checking at UK non-profit Full Fact, says that whether AI or humans are used to moderate content, both methods raise the same question: who determines whether certain speech is acceptable.
“AI comes with big questions on definitions. Hate speech is sometimes obvious but in some cases people can’t agree. The same is true for matching factchecks to content. Who will be making these choices? Small choices have big consequences here,” Babakar said.” [h/t Pete Weiss]