
Fact-checking information from large language models can decrease headline discernment

psypost.org – “A recent study published in the Proceedings of the National Academy of Sciences investigates how large language models, such as ChatGPT, influence people’s perceptions of political news headlines. The findings reveal that while these artificial intelligence systems can accurately flag false information, their fact-checking results do not consistently help users discern between true and false news. In some cases, the use of AI fact-checks even led to decreased trust in true headlines and increased belief in dubious ones.

Large language models (LLMs), such as ChatGPT, are advanced artificial intelligence systems designed to process and generate human-like text. These models are trained on vast datasets that include books, articles, websites, and other forms of written communication. Through this training, they develop the ability to respond to a wide range of topics, mimic different writing styles, and perform tasks such as summarization, translation, and fact-checking.

The motivation behind this study stems from the growing challenge of online misinformation, which undermines trust in institutions, fosters political polarization, and distorts public understanding of critical issues like climate change and public health. Social media platforms have become hotspots for the rapid spread of false or misleading information, often outpacing the ability of traditional fact-checking organizations to address it. LLMs, with their ability to analyze and respond to content quickly and at scale, have been proposed as a solution to this problem. However, while these models can provide factual corrections, little was known about how people interpret and react to their fact-checking efforts…”
