Nieman Lab: “An overwhelming majority of readers would like news publishers to tell them when AI has shaped the news coverage they’re seeing. But, new research finds, news outlets pay a price when they disclose using generative AI. That’s the conundrum at the heart of new research from the University of Minnesota’s Benjamin Toff and the Oxford Internet Institute’s Felix M. Simon. Their working paper “‘Or they could just not use it?’: The paradox of AI disclosure for audience trust in news” is one of the first experiments to examine audience perceptions of AI-generated news. More than three-quarters of U.S. adults think news articles written by AI would be “a bad thing.” But, from Sports Illustrated to Gannett, it’s clear that particular ship has sailed. Asking Google for information and getting AI-generated content back isn’t the future; it’s our present-day reality. Much of the existing research on perceptions of AI in newsmaking has focused on algorithmic news recommendation, i.e., questions like how readers feel about robots choosing their headlines. Some have suggested news consumers may perceive AI-generated news as more fair and neutral owing to the “machine heuristic,” in which people credit technology with operating without pesky things like human emotions or ulterior motives. For this experiment, conducted in September 2023, participants read news articles of varying political content — ranging from a piece on the release of the “Barbie” film to coverage of an investigation into Hunter Biden. For some stories, the work was clearly labeled as AI-generated. Some of the AI-labeled articles were accompanied by a list of news reports used as sources…”