MIT Technology Review [unpaywalled]: “…Last week Google said it is taking steps to keep explicit deepfakes from appearing in search results. The tech giant is making it easier for victims to request that nonconsensual fake explicit imagery be removed. It will also filter all explicit results on similar searches and remove duplicate images. This will prevent the images from popping back up in the future. Google is also downranking search results that lead to explicit fake content. When someone searches for deepfakes and includes someone’s name in the search, Google will aim to surface high-quality, non-explicit content, such as relevant news articles.

This is a positive move, says Ajder. Google’s changes remove a huge amount of visibility for nonconsensual, pornographic deepfake content. “That means that people are going to have to work a lot harder to find it if they want to access it,” he says.

In January, I wrote about three ways we can fight nonconsensual explicit deepfakes. These included regulation; watermarks, which would help us detect whether something is AI-generated; and protective shields, which make it harder for attackers to use our images…”
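On the watermarking point: the idea is that a generator embeds an imperceptible signal into an image at creation time, and a platform or search engine can later check for that signal to flag the image as AI-generated. The sketch below is a toy least-significant-bit version of that idea, just to make the mechanism concrete. It is not Google’s actual scheme (production watermarks such as SynthID are designed to survive compression, cropping, and editing, which this toy is not), and the names `SIGNATURE`, `embed_watermark`, and `detect_watermark` are made up for the illustration:

```python
# Toy illustration of generative-image watermarking: hide a known bit
# pattern in the least significant bits of the first pixels, then check
# for it later. Fragile by design -- any re-encoding destroys it.
import numpy as np

# Hypothetical signature: the bytes of a tag reduced to a 0/1 bit pattern.
SIGNATURE = np.frombuffer(b"AI-GENERATED", dtype=np.uint8) % 2

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Write the signature into the low bits of a copy of the image."""
    marked = image.copy()
    flat = marked.reshape(-1)  # view into the copy
    flat[: SIGNATURE.size] = (flat[: SIGNATURE.size] & ~np.uint8(1)) | SIGNATURE
    return marked

def detect_watermark(image: np.ndarray) -> bool:
    """Return True if the signature bits are present in the low bits."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[: SIGNATURE.size] & 1, SIGNATURE))

if __name__ == "__main__":
    # Random stand-in for a generated image.
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    print(detect_watermark(img))                   # almost certainly False
    print(detect_watermark(embed_watermark(img)))  # True
```

A detector like this is what a “filter explicit results” pipeline could consult at index time; the hard part in practice, and the reason real schemes are far more elaborate, is making the signal survive the resizing and re-compression that images undergo as they circulate.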