Center for Data Innovation: “Deepfakes—realistic-looking images and videos altered by AI to portray someone doing or saying something that never actually happened—have been around since the end of 2017, yet in recent months have become a major focus of policymakers. Though image and video manipulation have posed challenges for decades, the threat of deepfakes is different. The early examples were created mostly by people editing the faces of celebrities into pornography, but in April 2018, comedian and filmmaker Jordan Peele worked with BuzzFeed to create a deepfake of President Obama, kicking off a wave of fears about the potential for deepfakes to turbocharge fake news. Congress has introduced a handful of bills designed to help address this threat, but preventing deepfakes from hurting people and society will require additional solutions. The risks posed by deepfakes, a portmanteau of “deep learning” and “fake,” fall into two camps: that this technology will intrude on individual rights, such as using a person’s likeness for profit or to create pornographic videos without their consent; and that this technology could be weaponized as a disinformation tool. To address these risks, Senator Ben Sasse (R-NE) introduced the Malicious Deep Fake Prohibition Act of 2018 late last year, which would make it illegal to create, with the intent to distribute, or knowingly distribute, deepfakes that would facilitate criminal or “tortious conduct” (i.e., conduct that causes harm but is not necessarily unlawful, such as creating a deepfake that might damage someone’s reputation). And at a June House Intelligence Committee hearing, Representative Yvette Clarke (D-NY) introduced the DEEPFAKES Accountability Act, which would require anyone creating a deepfake to include an irremovable digital watermark identifying it as such…”