Congressional efforts to clamp down on deepfake pornography are not entirely new. In 2019 and 2021, Representative Yvette Clarke introduced the DEEPFAKES Accountability Act, which would require creators of deepfakes to watermark their content. And in December 2022, Representative Morelle, who is now working closely with Francesca, introduced the Preventing Deepfakes of Intimate Images Act, which would criminalize the creation and distribution of pornographic deepfakes without the consent of the person depicted. Neither effort attracted bipartisan support, and both stalled.
But recently, the issue has reached a “tipping point,” says Hany Farid, a professor at the University of California, Berkeley, because AI has grown far more sophisticated, making the potential for harm far more serious. “The threat vector has changed dramatically,” says Farid. Five years ago, creating a convincing deepfake required hundreds of images, he says, which meant those at greatest risk of being targeted were celebrities and other public figures with large numbers of publicly accessible photos. Now, a deepfake can be created from a single image.
Farid says, “We’ve just given high school boys the mother of all nuclear weapons for them, which is to be able to create porn with [a single image] of whoever they want. And of course, they’re doing it.”
Clarke and Morelle, both Democrats from New York, have reintroduced their bills this year. Morelle’s now has 18 cosponsors from both parties, four of whom joined after the incident involving Francesca came to light, which suggests there may be real legislative momentum to get the bill passed. Then, just this week, Representative Kean, one of the cosponsors of Morelle’s bill, released a related proposal intended to push forward AI-labeling efforts, in part in response to Francesca’s appeals.