Washington Post [unpaywalled]: “AI-generated images are everywhere. They’re being used to make nonconsensual pornography, muddy the truth during elections and promote products on social media using celebrity impersonations.

When Princess Catherine released a video last month disclosing that she had cancer, social media went abuzz with the latest baseless claim that artificial intelligence was used to manipulate the video. Both BBC Studios, which shot the video, and Kensington Palace denied AI was involved. But it didn’t stop the speculation.

Experts say the problem is only going to get worse. Today, the quality of some fake images is so good that they’re nearly impossible to distinguish from real ones. In one prominent case, a finance manager at a Hong Kong bank wired about $25.6 million to fraudsters who used AI to pose as the worker’s bosses on a video call. And the tools to make these fakes are free and widely available.

A growing group of researchers, academics and start-up founders are working on ways to track and label AI content. Using a variety of methods and forming alliances with news organizations, Big Tech companies and even camera manufacturers, they hope to keep AI images from further eroding the public’s ability to understand what’s true and what isn’t.

‘A year ago, we were still seeing AI images and they were goofy,’ said Rijul Gupta, founder and CEO of DeepMedia AI, a deepfake detection start-up. ‘Now they’re perfect.’

Here’s a rundown of the major methods being developed to hold back the AI image apocalypse…”