KFF: “…Most U.S. adults are not confident that they can tell what is true versus what is false when it comes to information from AI chatbots, such as ChatGPT and Microsoft Copilot. Fewer than half say they are either “very confident” (9%) or “somewhat confident” (33%) that they can tell the difference between true and false information from an AI chatbot, while a majority say they are either “not too confident” (35%) or “not at all confident” (22%). While adults who say they have used or interacted with AI are more likely than non-users to say they are at least somewhat confident in their ability to tell fact from fiction in information from AI chatbots (49% vs. 32%), even among users of this technology, half say they are not confident they can tell what is true from what is false.
When it comes to health information, the public is not yet convinced that AI chatbots can provide accurate information. Just one in three adults say they are “very confident” (5%) or “somewhat confident” (31%) that the health information and advice they may come across on AI chatbot platforms is accurate. About six in ten adults – including a majority (56%) of AI users – say they are “not too confident” or “not at all confident” in the accuracy of health information provided by AI chatbots. Adults under age 50 and Black and Hispanic adults are somewhat more likely than those over age 50 and White adults, respectively, to say they have confidence in the accuracy of health information from AI chatbots, though about half or more across age and racial and ethnic groups say they are not confident.”