Wired: “When pundits and researchers tried to guess what sort of manipulation campaigns might threaten the 2018 and 2020 elections, misleading AI-generated videos often topped the list. Though the tech was still emerging, its potential for abuse was so alarming that tech companies and academic labs prioritized working on, and funding, methods of detection. Social platforms developed special policies for posts containing “synthetic and manipulated media,” in hopes of striking the right balance between preserving free expression and deterring viral lies. But now, with about three months to go until Nov. 3, that wave of deepfaked moving images seems never to have broken. Instead, another form of AI-generated media is making headlines, one that is harder to detect and yet much more likely to become a pervasive force on the internet: Deepfake text. Last month brought the introduction of GPT-3, the next frontier of generative writing: an AI that can produce shockingly human-sounding (if at times surreal) sentences. As its output becomes ever more difficult to distinguish from text produced by humans, one can imagine a future in which the vast majority of the written content we see on the internet is produced by machines. If this were to happen, how would it change the way that we react to the content that surrounds us?…”