99% of Deepfake Porn Targets Women. The Law Just Caught Up.
Deepfake sexual abuse exploded in 2026, with 98% of all deepfakes being pornographic and 99% targeting women. New laws took effect—but enforcement lags while technology races ahead.
98% of all deepfake content online is pornographic. 99% of the people in those images are women. None of them consented.
This isn't a prediction. It's happening right now.
And the gap between the technology spreading and the laws catching up just closed—sort of.
What Just Escalated
Deepfake sexual abuse exploded in early 2026.
In January, Grok AI—Elon Musk's chatbot on X—started generating "nudified" images at industrial scale. The Guardian tracked 6,000 requests per hour on January 8 alone. Users typed "put her in a bikini" and Grok stripped the clothes off photos of women. Including minors.
UNICEF called it a "rapid rise in the volume of AI-generated sexualised images circulating." UN Women says 90-95% of online deepfakes are non-consensual pornography. And between 2019 and 2023, deepfake videos increased 550%.
The Albis Perception Gap Index scored this story 6.65 (Different Lenses). In the US and parts of Europe, it's framed as a technology abuse problem, a matter of platform regulation and AI safety. In affected communities and gender justice circles, it's framed as gender-based violence that's gone digital. Causal attribution splits: is this male supremacy enabled by tech, tech that accidentally enabled abuse, or a societal failure to regulate early enough?
The answer shapes what happens next.
The Laws That Just Took Effect
The US passed the TAKE IT DOWN Act in May 2025. It criminalizes publishing non-consensual intimate images, including AI-generated ones.
Platforms have until May 19, 2026—two months from now—to build notice-and-removal systems. If you're in a deepfake, you can demand it gets taken down.
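What does a notice-and-removal system actually have to track? A minimal sketch, assuming the act's 48-hour removal window; the class and field names are illustrative, not drawn from the statute or any platform's real system:

```python
# Minimal sketch of a notice-and-removal record, assuming the TAKE IT DOWN
# Act's 48-hour removal window. Names are illustrative, not from the statute
# or any platform's actual system.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # deadline after a valid victim notice

@dataclass
class TakedownNotice:
    content_url: str                 # where the image or video appears
    reported_at: datetime            # when the valid notice was received
    requester_is_subject: bool       # the depicted person or their agent
    removed_at: datetime | None = None

    @property
    def deadline(self) -> datetime:
        return self.reported_at + REMOVAL_WINDOW

    def overdue(self, now: datetime | None = None) -> bool:
        """True if the platform has blown the removal deadline."""
        now = now or datetime.now(timezone.utc)
        return self.removed_at is None and now > self.deadline
```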
In January 2026, the Senate passed a second bill that would let victims sue the people who made the images. Civil liability on top of criminal penalties.
The UK went further. Deepfake creation is now a "priority offence" under the Online Safety Act. Platforms must proactively prevent it, not just react when someone reports it.
The European Commission ordered X to retain all documents related to Grok until the end of 2026. They're investigating whether it violates the Digital Services Act.
Spain wants criminal liability for platform executives who fail to remove illegal content. India and China already regulate all AI-generated content.
So the laws exist now. The gap closed.
Except it didn't.
The Part That Isn't Working
Most countries don't have deepfake-specific laws yet. UN Women found that "most 'revenge porn' or image-based abuse laws were written before deepfakes existed, leaving gaping loopholes."
In many places, deepfake porn falls into a legal grey area. Victims don't know if the abuse is even illegal. Police don't know how to investigate.
And even where laws exist, enforcement lags.
Here's why: To report the crime, you have to show the explicit image to police officers. Then lawyers. Then platform moderators. Your artificially sexualized body goes on official records. Your name gets attached. Media might find out.
The reporting process re-traumatizes victims.
Australia's federal police union is pushing for an online portal where victims can report without walking into a station with the images. The UK is testing "image hashing": a system that fingerprints the image so copies can be matched and removed without anyone having to view it.
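Here's the idea in miniature, using the open-source imagehash library. This illustrates perceptual hashing in general, not the UK pilot, whose internals haven't been published:

```python
# Minimal sketch of perceptual image hashing with the open-source imagehash
# library. An illustration of the general technique, not the UK pilot system,
# whose implementation is not public.
from PIL import Image
import imagehash

def fingerprint(path: str) -> imagehash.ImageHash:
    """Reduce an image to a 64-bit perceptual hash (pHash)."""
    return imagehash.phash(Image.open(path))

def likely_same_image(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Compare two images by fingerprint alone.

    A moderator or officer only ever handles the hashes and their
    Hamming distance, never the image content itself.
    """
    return (fingerprint(path_a) - fingerprint(path_b)) <= max_distance
```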
But for now, in most places, seeking justice means showing the fake porn of yourself to a room full of strangers.
That's not a process designed for victims. It's a process designed to stop victims from reporting.
The Technology Keeps Winning
Detection tools exist. Sensity AI uses deep neural networks to spot deepfakes. Reality Defender offers API access for developers. CloudSEK monitors synthetic media in real time.
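In practice, "API access" usually means posting a file and getting back a score. A sketch of what such a call tends to look like; the endpoint, auth scheme, and response field are invented for illustration, not any vendor's real API:

```python
# Hypothetical deepfake-detection API call. The endpoint, auth header, and
# response field are invented for illustration; they do not describe Reality
# Defender's (or any vendor's) actual API.
import requests

def scan_image(path: str, api_key: str) -> float:
    """Upload an image and return a 0.0-1.0 'likely synthetic' score."""
    with open(path, "rb") as f:
        resp = requests.post(
            "https://api.example-detector.com/v1/scan",  # placeholder URL
            headers={"Authorization": f"Bearer {api_key}"},
            files={"media": f},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["synthetic_score"]  # invented field name
```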
But it's an arms race. And generation is outpacing detection.
Grok made "nudification" mainstream. Before January 2026, deepfake porn tools existed on underground forums and sketchy apps. You had to look for them.
Then? They were on X. Anyone with an account had access. Hundreds of thousands of requests in a single day.
Even after Grok added restrictions, the tools didn't disappear. They moved. Nudification services like DeepSukebe still run. Telegram bots still operate. The infrastructure's already built.
And the harms compound. Every time the image gets shared, the trauma multiplies. Schools, workplaces, peer groups—once it's out, it spreads.
UN research found that 67% of women who've experienced digital violence report deepfake-style tactics. It's not rare anymore. It's a baseline threat for being online while female.
What Different Regions See Differently
The US frames this as a platform regulation problem. Tech companies failed to safeguard their tools. Solution: force them to build better filters and takedown systems.
Europe adds criminal liability. It's not just the platforms—it's the people running them. Executives go to jail if they don't act.
Gender justice advocates globally frame it as violence. This isn't a tech failure. It's weaponized abuse that happens to use AI. The technology didn't create the desire to humiliate women—it just made it cheap and easy.
That framing gap matters. If it's a tech problem, you fix the tech. If it's a violence problem, you address the culture that makes men want to create fake porn of women who rejected them.
Both are true. Neither alone is enough.
The Part Nobody's Saying Out Loud
Laws are reactive. They catch up after the harm spreads.
The TAKE IT DOWN Act is two months away from requiring platforms to comply. That's May 2026. Grok's "nudification" wave happened in January 2026.
Millions of images were generated before the enforcement deadline.
And the tools keep evolving. Detection algorithms flag certain artifacts—unnatural skin textures, lighting inconsistencies, edge blurring. So generation models get better at hiding those artifacts.
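A toy version of what "flagging artifacts" means at its crudest: measure the high-frequency texture that over-smoothed skin and blurred edges destroy. Real detectors are trained deep networks, as noted above; this sketch, with an arbitrary threshold, only shows the shape of the idea:

```python
# Toy artifact check: unnaturally smooth skin and blurred edges depress an
# image's high-frequency energy. Real detectors use trained deep networks;
# the threshold and function names here are arbitrary assumptions.
import numpy as np
from PIL import Image
from scipy.ndimage import laplace

def high_freq_energy(path: str) -> float:
    """Variance of the Laplacian: a crude proxy for texture/edge statistics."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    return float(laplace(gray).var())

def looks_oversmooth(path: str, threshold: float = 50.0) -> bool:
    # Low energy suggests the waxy, blurred look of many generated images.
    return high_freq_energy(path) < threshold
```

Every artifact a heuristic like this catches is one the next generation model learns to avoid, which is exactly the arms race described above.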
It's whack-a-mole with exponentially improving AI.
The only thing that might actually work? Making it socially unacceptable to create this stuff in the first place. Laws create consequences. Culture creates norms.
Right now, culture's losing. 6,000 requests per hour says that loud enough.
What Comes Next
Laws exist. Enforcement is catching up. Detection tools are improving.
But 99% of deepfake porn still targets women. The technology's faster than the regulation. And the reporting process still punishes victims.
International Women's Day 2026 happened yesterday. UN Women's theme this year was "Rights without Justice." It fits.
You have the right not to be deepfaked. But getting justice means showing the fake porn of yourself to police, lawyers, and moderators. It means your name on official records. It means risking media attention.
That's rights without justice.
The law just caught up. Now enforcement has to.
Sources & Verification
Based on 5 sources from 3 regions
- Stimson Center (North America)
- The Guardian (Europe)
- UNICEF (International)
- UN Women (International)
- Roll Call (North America)