The New York Times [unpaywalled]: “…BNN went dormant in April, while The New York Times was reporting this article. The company and its founder did not respond to multiple requests for comment. Microsoft had no comment on MSN’s featuring the misleading story with Mr. Fanning’s photo or his defamation case, but the company said it had terminated its licensing agreement with BNN.

During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of “seasoned” journalists and 10 million monthly visitors, surpassing The Chicago Tribune’s self-reported audience. Prominent news organizations like The Washington Post, Politico and The Guardian linked to BNN’s stories. Google News often surfaced them, too.

A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the A.I. chatbot ChatGPT. BNN’s “About Us” page featured an image of four children looking at a computer, some bearing the gnarled fingers that are a telltale sign of an A.I.-generated image.

How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: A.I.-generated content is upending, and often poisoning, the online information supply.

Many traditional news organizations are already fighting for traffic and advertising dollars. For years, they competed for clicks against pink slime journalism — so-called because of its similarity to liquefied beef, an unappetizing, low-cost food additive. Low-paid freelancers and algorithms have churned out much of the faux-news content, prizing speed and volume over accuracy.

Now, experts say, A.I. could turbocharge the threat, easily ripping off the work of journalists and enabling error-ridden counterfeits to circulate even more widely — as has already happened with travel guidebooks, celebrity biographies and obituaries. The result is a machine-powered ouroboros that could squeeze out sustainable, trustworthy journalism.

Even though A.I.-generated stories are often poorly constructed, they can still outrank their source material on search engines and social platforms, which often use A.I. to help position content. The artificially elevated stories can then divert advertising spending, which is increasingly assigned by automated auctions without human oversight…”