AI has made it far easier for criminals to generate and share exploitative material. Today, they can produce highly realistic child sexual abuse material (CSAM) with affordable hardware, often without any physical contact between offender and victim and without any authentic source material.
The images are so realistic that most can be prosecuted under existing laws as though they depict real children. Even so, current legislation, platform accountability, and forensic capabilities are struggling to keep pace.
The Internet Watch Foundation (IWF) reports that AI-generated abuse videos have now ‘crossed the threshold’ of being nearly indistinguishable from real imagery – and have also become far more prevalent online this year.
In the first half of 2025, the watchdog verified 1,286 AI-made videos as featuring CSAM that broke the law, compared with two in the same period during 2024. The IWF says that over 1,000 of the videos portray category A abuse, the most severe kind of material.
Our new study examines the alarming rise of AI-generated CSAM and its implications for law enforcement, legislators, and criminal defense professionals.
The study draws on verified data not only from the IWF but also from the Department of Homeland Security, Thorn, the National Center for Missing and Exploited Children (NCMEC), and other watchdogs.
The findings reveal that generative AI is exponentially accelerating the creation, realism, and accessibility of exploitative images and videos.
AI-Generated CSAM: Not a Victimless Crime
For some, synthetic CSAM is seen as ‘victimless’ since it depicts fictional minors. Yet the reality is far more complex. AI tools can now be used to modify and manipulate real images and create new exploitative material from photographs of known children, famous minors, or even private individuals.
The scale of this issue is significant and growing. For example, the Department of Homeland Security reported a 400% increase in AI-generated CSAM webpages in the first half of 2025.
In a single month during 2025, the IWF found 20,254 AI-generated CSAM images on one forum, 90% of which were classified as realistic enough to be prosecuted as if they depicted real children. And, increasingly, offender communities openly share techniques for quickly refining image realism.
Even in cases where no physical abuse occurs, victims can still be harmed through re-victimization, reputational damage, and the psychological toll of seeing their likeness defiled.
AI Tools Are Being Used To Harm Children, With No Consequences
It has also never been easier to create abusive images with modern generative AI image tools, including so-called ‘nudify’ and ‘unclothe’ apps, which are being exploited to create CSAM at scale. Such platforms often operate without safeguards, age restrictions, or reporting obligations. For example, only five AI platforms are registered to report CSAM to NCMEC, and most have never filed a single report.
Additionally, ‘nudify’ and ‘unclothe’ app administrators are not yet legally obliged to report exploitative images and do not collaborate with watchdogs. Offenders can also legally obtain AI tools and run them offline, producing unlimited material without detection.
This regulatory gap has created an environment where illegal content can proliferate largely unchecked, leaving law enforcement to play catch-up.
Huge Spike in AI CSAM Overwhelms Law Enforcement
The surge in AI CSAM has put unprecedented pressure on investigators. Not only are such images increasing in number, but their realism also makes forensic review more time-consuming and resource-intensive. Law enforcement and other stakeholders must now navigate challenges on a scale they have never faced before.
For example, the NCMEC received over 7,000 reports of AI-generated CSAM in 2024, a small fraction of the more than 36 million CyberTipline reports filed that year. That number is expected to rise rapidly in the coming years, and even with advanced detection systems, the sheer scale means many offenders and images will remain uninvestigated.
The Platforms Know — But Do Little To Nothing About It
Unhelpfully, watchdogs have documented a striking lack of engagement from tech platforms whose tools are being used to generate CSAM.
Of the AI platforms registered with NCMEC, most have not even submitted reports, and platforms that enable explicit image modification have yet to engage in discussions on prevention measures.
Without mandatory reporting requirements, much of this content will remain in indefinite circulation. And that means prolonged harm and trauma for victims.
1 in 8 Teens Know a Deepfake Victim
The danger of AI exploitation goes beyond dark-web forums, with mainstream platforms and schools already seeing the damage.
Thorn’s 2025 research revealed that:
- 31% of teens are familiar with deepfake nudes
- 1 in 8 teens know a victim of deepfake nudes
- 1 in 5 children aged 9 to 17 have seen nonconsensually reshared sexual images
- 1 in 10 know peers who have created deepfake sexual images
For minors, even false images can create lasting emotional trauma, bullying, and reputational harm.
Legal Precedents and Prosecutions
Real-world cases already show that the justice system treats AI-generated abuse as seriously as traditional CSAM. In Charlotte, North Carolina, David Tatum was sentenced on child pornography charges after using AI ‘unclothe’ apps to alter images of minors.
One altered image depicted a 15-year-old waiting for a bus; the original photo was taken 25 years before it was digitally manipulated. The victim was in her 40s when identified, illustrating that AI CSAM can claim victims decades after the original images were created.
The Road Ahead: Technology, Law, and Accountability
AI-generated CSAM can be created in many different ways: adult content can be manipulated to appear underage, and entirely synthetic yet strikingly realistic images of minors are easily generated. Whatever method is used, it presents a unique challenge for legislative bodies, forensic teams, and ethical watchdogs. Addressing this crisis will require the following:
- Stronger platform reporting mandates
- Enhanced AI detection tools to help law enforcement efforts
- Clearer legal definitions regarding synthetic exploitation
- Ongoing public education to reduce demand and misuse
At Suzuki Law Offices, we believe addressing AI-driven exploitation requires the same level of integrity and diligence we bring to every case.
Our team is armed with decades of combined experience, including our founder’s background as a federal prosecutor. We are committed to understanding the technology, the law, and the human impact at the heart of these cases.