
Platforms’ policies on AI-manipulated and generated misinformation

EU DisinfoLab: “The development of artificial intelligence (AI) technologies has long been a challenge for the disinformation field, allowing content to be easily manipulated and accelerating its distribution. Recent technical developments and the growing use of generative AI systems by end users have exponentially increased these challenges, making it easier not just to modify but also to create fake texts, images, and audio pieces that can look real. While such content can serve legitimate purposes (e.g., art or satire), AI-generated content is also widely disseminated across the internet, causing harm and deception, whether intentionally or not. In view of these rapid changes, it is crucial to understand how platforms face the challenge of moderating AI-manipulated and AI-generated content that may end up circulating as mis- or disinformation. Are they able to distinguish legitimate uses of such content from malign ones? Do they see the risks embedded in AI merely as an accessory to disinformation strategies or copyright infringement, or do they consider it a matter in its own right that deserves specific policies? Do they even mention AI in their moderation policies, and have they updated those policies since the emergence of generative AI? Answers to these questions are crucial, as the Digital Services Act (DSA) will provide users with new mechanisms to complain about the lack of enforcement of terms and conditions. The DSA will also require platforms to assess their mitigation measures (and their results) against systemic risks. The present factsheet delves into how some of the main platforms – Facebook, Instagram, TikTok, X (formerly Twitter), and YouTube – approach AI-manipulated or AI-generated content in their terms of use, exploring how they address its potential to become mis- and disinformation.

The analysis concluded that definitions diverge across platforms, leaving users and regulators with a patchwork of mitigation and resolution measures. First, only Facebook and TikTok mention “artificial intelligence” (including deepfakes, in Facebook’s case) directly in the policies aimed at tackling disinformation. TikTok and X include “synthetic media” in their policies on manipulated and misleading media. It is therefore not always possible to distinguish between a platform’s general misinformation policy and its AI-specific considerations. Moreover, the platforms’ policies largely overlook AI-generated text, referring mainly to images and videos…”
