Accurate, Focused Research on Law, Technology and Knowledge Discovery Since 2002

AI assistants risk misleading audiences by distorting BBC Journalism

Representation of BBC News content in AI Assistants. Research by Oli Elliott, Principal Data Scientist, BBC Responsible AI Team. February 2025.

“AI assistants risk misleading audiences by distorting BBC Journalism.

The media landscape is being changed by AI. It offers new capabilities and opportunities for media companies like the BBC, and new formats and ways to consume content for audiences. The BBC is excited about the future of AI and the value it can bring audiences and our staff. We’re already using it to add subtitles to programmes on BBC Sounds and translate content into different languages on BBC News. We’re developing tools that use AI to assist our staff in everyday tasks, and exploring how it can provide our audience with new experiences, like a personal tutor on Bitesize. AI will bring real value when it’s used responsibly. But AI also brings significant challenges for audiences, and for the UK’s information ecosystem.

A key feature of the emerging landscape is AI assistants like those from OpenAI, Google, and Microsoft. AI assistants are adept at many tasks, including drafting emails and documents, analysing data, and summarising information. They can also provide answers to questions about news and current affairs. They do this, in part, by repurposing content from publishers’ websites, often without publishers’ permission.

To better understand the news-related output of AI assistants, we undertook research into four prominent, publicly available AI assistants: OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity. We wanted to know whether they provided accurate responses to questions about the news, and whether their answers faithfully represented BBC news stories used as sources. We gave the AI assistants access to our website for the duration of the research and asked them questions about the news, prompting them to use BBC News articles as sources where possible. AI answers were reviewed by BBC journalists, all experts in the question topics, on criteria including accuracy, impartiality, and how they represented BBC content.

The BBC is the UK’s most widely used and trusted news provider and the world’s most trusted international news provider. We take time and care to ensure the accuracy and impartiality of our news. Our journalists approached this task with the same level of care.
The answers produced by the AI assistants contained significant inaccuracies and distorted content from the BBC. In particular:
• 51% of all AI answers to questions about the news were judged to have significant issues of some form.
• 19% of AI answers which cited BBC content introduced factual errors – incorrect factual statements, numbers and dates.
• 13% of the quotes sourced from BBC articles were either altered from the original source or not present in the article cited…”
See also Reuters, Survey Highlights an Emerging Divide Over Artificial Intelligence in the U.S. That survey also found that Americans trust news produced by mainstream journalists more than AI-generated content: while 62% of respondents said they trust journalistic content “some” or “a lot,” only 48% said the same about AI-generated information.
