Lifehacker: “This post is part of Lifehacker’s ‘Exposing AI’ series. We’re exploring six different types of AI-generated media, and highlighting the common quirks, byproducts, and hallmarks that help you tell the difference between artificial and human-created content.

From the moment ChatGPT introduced the world to generative AI in late 2022, it was apparent that, going forward, you could no longer trust that something you were reading was written by a human. You can ask an AI program like ChatGPT to write something—anything at all—and it will, in mere seconds. So how can you trust that what you’re reading came from the mind of a person, and not the product of an algorithm?

If the ongoing deflation of the AI bubble has shown us anything, it’s that most people kind of hate AI in general, which means they probably aren’t keen on the idea that what they’re reading was thoughtlessly spit out by a machine. Still, some have fully embraced AI’s ability to generate realistic text—for better or, often, worse. Last year, CNET quietly began publishing AI content alongside human-written articles, only to face scorn and backlash from its own employees. Former Lifehacker parent company G/O Media also published AI content on its sites, albeit openly, and experienced the same blowback—both for implementing the tech with zero employee input, and because the content itself was just terrible.

But not all AI-generated text announces itself quite so plainly. When used correctly, AI programs can generate text that is convincing—even if you can still spot clues that reveal its inhuman source…”