
Social Media Platforms Increase Transparency About Content Removal Requests

EFF Report – “While social media platforms are increasingly giving users the opportunity to appeal decisions to censor their posts, very few platforms comprehensively commit to notifying users that their content has been removed in the first place, raising questions about their accountability and transparency, the Electronic Frontier Foundation (EFF) said today in a new report. How users are supposed to challenge content removals that they’ve never been told about is among the key issues illuminated by EFF in the second installment of its Who Has Your Back: Censorship Edition report. The paper comes amid a wave of new government regulations and actions around the world meant to rid platforms of extremist content. But in response to calls to remove objectionable content, social media companies and platforms have all too often censored valuable speech.

EFF examined the content moderation policies of 16 platforms and app stores, including Facebook, Twitter, the Apple App Store, and Instagram. Only four companies—Facebook, Reddit, Apple, and GitHub—commit to notifying users when any content is censored and specifying the legal request or community guideline violation that led to the removal. While Twitter notifies users when tweets are removed, it carves out an exception for tweets related to “terrorism,” a class of content that is difficult to accurately identify and can include counter-speech or documentation of war crimes. Notably, Facebook and GitHub were found to have more comprehensive notice policies than their peers.”
