
The Growing Threat of Deepfake Porn: How to Protect Yourself

“All we have to have is just a human form to be a victim.” This statement from attorney Carrie Goldberg, who specializes in online abuse and sexual crimes, captures the increased risks deepfake pornography poses in the age of artificial intelligence.

The alarming rise of AI-generated deepfake pornography poses a massive threat to anyone, whether or not they have shared explicit images online. From high-profile individuals to ordinary people, including minors, the psychological toll on victims is immense.

The technology behind deepfakes

Unlike revenge porn, which involves the non-consensual sharing of real images, deepfake technology allows perpetrators to create completely fabricated content by superimposing someone’s face on explicit photos or manipulating existing images to appear compromising. Even those who have never taken private photos can fall prey to this technology.

High-profile cases in the past have included celebrities such as Taylor Swift and Rep. Alexandria Ocasio-Cortez. But young people are also targeted.

Protect yourself: preserve the evidence

For those who discover that their image has been manipulated in this way, the immediate instinct is often to try to remove it. But Goldberg stresses the importance of first preserving evidence by taking screenshots. “The knee-jerk reaction is to get this off the internet as soon as possible. But if you want to be able to have the option of criminal reporting, you need evidence,” Goldberg said, quoted by CNN.

After documenting the content, victims can use tools provided by technology companies such as Google, Meta and Snapchat to request removal of explicit images. Organizations like StopNCII.org and Take It Down also help facilitate the removal of harmful content across multiple platforms.

Legal progress

The fight against deepfake pornography has drawn bipartisan attention. In August 2024, US senators called on big tech companies like X (formerly Twitter) and Discord to participate in programs aimed at limiting non-consensual explicit content. A hearing on Capitol Hill featured testimony from teenagers and parents affected by AI-generated pornography. Following this, a bill was introduced in the US to criminalize the publication of deepfake pornography. The proposed legislation would also require social media platforms to remove such content when notified by victims.

Goldberg points out that while victims can take steps to respond, the onus is on society to act responsibly as well. “My proactive advice is really for would-be offenders, which is just, like, don’t be a total asshole and try to steal someone’s image and use it for humiliation. Victims can’t do much to prevent this. We can never be completely safe in a digital society, but we kind of depend on each other not to be total a-holes,” Goldberg told CNN.