The Atlantic – “Large language models make things up, but the worse problem may be in how they present those falsehoods…

If you spend any time on the internet, you’re likely now familiar with the gray-and-teal screenshots of AI-generated text. At first they were meant to illustrate ChatGPT’s surprising competence at generating human-sounding prose, and then to demonstrate the occasionally unsettling answers that emerged once the general public could bombard it with prompts. OpenAI, the organization that is developing the tool, describes one of its biggest problems this way: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” In layman’s terms, the chatbot makes stuff up. As similar services, such as Google’s Bard, have rushed their tools into public testing, their screenshots have demonstrated the same capacity for fabricating people, historical events, research citations, and more, and for rendering those falsehoods in the same confident, tidy prose.

This apparently systemic penchant for inaccuracy is especially worrisome, given tech companies’ intent to integrate these tools into search engines as soon as possible. But a bigger problem might lie in a different aspect of AI’s outputs—more specifically, in the polite, businesslike, serenely insipid way that the chatbots formulate their responses. This is the prose style of office work and email jobs, of by-the-book corporate publicists and LinkedIn influencers with private-school MBAs. The style sounds the same—pleasant, measured, authoritative—no matter whether the source (be it human or computer) is trying to be helpful or lying through their teeth or not saying anything coherent at all. In the United States, this is the writing style of institutional authority, and AI chatbots are so far exquisitely capable of replicating its voice, while delivering information that is patently unreliable.

On a practical level, this will pose challenges for people who must navigate a world with this kind of technology suddenly thrust into it. Our mental shortcuts used for evaluating communicative credibility on the fly have always been less than perfect, and the very nature of the internet already makes such judgment calls more difficult and necessary. AI could make them nearly impossible.”