Nature: “…But lately the attacks have become more sophisticated, and harder to debunk. “With AI, they can make an image look and speak exactly like me, with my mannerisms,” he says. In one video, he is depicted as saying that people could live to 100 years if they take a certain herbal product. Such videos pose a reputational risk as well as a professional one: Mohan says he could face legal action from bodies such as the Indian Medical Association. In 2022, the association sued the Indian herbal-products company Patanjali Ayurved, based in Haridwar, for alleged false advertising over claims that its products could cure a range of ailments.

Discussions about the dangers of deepfakes have so far focused on politicians and celebrities. A video of the rapper Snoop Dogg reading tarot cards might seem harmless, but the same technology has been used to generate pornographic images of singer-songwriter Taylor Swift. Deepfaked voice recordings have been used to sow disinformation in elections from Slovakia to Nigeria, and most people in the United States expect AI abuses to affect this year’s presidential election.

But AI researchers say that scientists — particularly those in the public eye — are also at risk. “When you think of ways to spread misinformation, you want to manipulate what people think are the trusted sources of information,” says Christopher Doss, a quantitative researcher who works in Washington DC for the RAND Corporation, a non-profit policy-research think tank. So, deepfakes involving scientists “are probably going to be something that we see more of”, he says.

Last year, Doss published a study with colleagues at RAND, Carnegie Mellon University in Pittsburgh, Pennsylvania, and the Challenger Center in Washington DC to test the ability of US schoolchildren, university students and adults to distinguish fake science-information videos from real ones. Between 27% and 50% of respondents could not identify the fakes. The videos featured well-known climate commentators, including the activist Greta Thunberg and the retired atmospheric physicist and climate doubter Richard Lindzen, who is based in Cambridge, Massachusetts; all of the clips were generated from publicly available material.

“Deepfakes definitely aren’t perfect, but we’re at the point now where they’re probably good enough to fool at least a substantial percentage of people,” says Doss. And generating one no longer requires the technical expertise that it used to, he adds.

Despite ongoing efforts, few technological means are available to stop legitimate videos from being used to generate deepfakes, says Siwei Lyu, a specialist in machine learning and digital media at the University at Buffalo in New York…”