“All we need is a human form to be a victim.” This statement from attorney Carrie Goldberg, who specializes in online abuse and sex crimes, captures the increased risks posed by deepfake pornography in the age of artificial intelligence.
The alarming rise of AI-generated deepfake pornography poses a threat to everyone, regardless of whether they have ever shared explicit images online. From high-profile individuals to ordinary people, including minors, the psychological toll on victims is enormous.
The technology behind deepfakes
Unlike revenge porn, where real images are shared non-consensually, deepfake technology allows perpetrators to create completely fabricated content by superimposing a person's face onto explicit images or manipulating existing photos to appear compromising. Even those who have never taken private photos can fall prey to this technology.
According to CNN, high-profile cases in the past have involved celebrities like Taylor Swift and Rep. Alexandria Ocasio-Cortez. But young people are also being targeted.
Protect yourself: preserve evidence
For those who discover that their image has been weaponized in this way, the immediate instinct is often to try to get it removed. But Goldberg emphasizes the importance of preserving evidence first by taking screenshots. “The knee-jerk reaction is to get this off the internet as quickly as possible. But if you want the ability to file criminal charges, you need the evidence,” Goldberg told CNN.
After documenting the content, victims can use tools from tech companies such as Google, Meta and Snapchat to request the removal of explicit images. Organizations like StopNCII.org and Take It Down also help facilitate the removal of harmful content across multiple platforms.
Legal progress
The fight against deepfake pornography has attracted rare bipartisan attention. In August 2024, US senators called on major tech companies like X (formerly Twitter) and Discord to participate in programs aimed at curbing non-consensual explicit content. A hearing on Capitol Hill featured testimony from teens and parents affected by AI-generated pornography. Following this, a bill was introduced in the US to criminalize the publication of deepfake pornography. The proposed legislation would also require social media platforms to remove such content upon notification by victims.
Goldberg emphasizes that while victims can take steps to respond, it is also up to society to act responsibly. “My proactive advice is actually to potential offenders, which is don't be scum of the earth and try to steal someone's image and use it for humiliation. There is not much victims can do to prevent this. We can never be completely safe in a digital society, but it takes some effort to not be total assholes,” Goldberg told CNN.