British Tech Companies and Child Protection Officials to Test AI's Capability to Create Abuse Images
Tech firms and child protection agencies will be given the authority to test whether artificial intelligence systems can produce child exploitation images, under new UK legislation.
Significant Increase in AI-Generated Illegal Material
The announcement came alongside revelations from a safety monitoring body that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the amendments, the government will permit designated AI companies and child safety organizations to examine AI models – the technology underpinning chatbots and image generation tools – and verify that they have sufficient safeguards in place to stop them producing images of child exploitation.
"Ultimately about preventing exploitation before it happens," declared Kanishka Narayan, adding: "Specialists, under strict protocols, can now detect the danger in AI systems early."
Addressing Legal Challenges
The changes address a legal obstacle: because it is illegal to produce and possess CSAM, AI developers and other parties cannot create such content even as part of a testing process. Until now, officials have had to wait until AI-generated CSAM was uploaded online before dealing with it.
This legislation is designed to avert that problem by helping to halt the creation of such material at its source.
Legislative Structure
The authorities are introducing the changes as amendments to the crime and policing bill, which also establishes a ban on possessing, producing or distributing AI models designed to generate exploitative content.
Practical Impact
Recently, the minister visited the London headquarters of a children's helpline and listened in on a simulated call to counsellors involving a report of AI-based abuse. The role-play portrayed an adolescent seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.
"When I learn about young people facing extortion online, it is a source of extreme frustration in me and justified anger amongst families," he stated.
Alarming Statistics
A leading internet monitoring organization reported that cases of AI-generated exploitation content – each case can refer to a web page containing numerous images or videos – had risen significantly so far this year.
Cases involving the most severe category of material increased from 2,621 images or videos to 3,086.
- Girls were overwhelmingly targeted, appearing in 94% of illegal AI images in 2025
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "constitute a crucial step to guarantee AI products are safe before they are launched," stated the chief executive of the online safety organization.
"AI tools have made it so survivors can be targeted repeatedly with just a few clicks, providing offenders the capability to make potentially limitless quantities of advanced, photorealistic exploitative content," she added. "Material which additionally commodifies survivors' suffering, and makes children, particularly girls, more vulnerable both online and offline."
Support Interaction Data
Childline also released details of counselling sessions in which AI was mentioned. AI-related risks raised in the sessions include:
- Using AI to rate weight, physique and appearance
- Chatbots discouraging children from speaking to trusted adults about abuse
- Being bullied online with AI-generated material
- Being blackmailed online with AI-faked images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related terms were discussed, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including children using AI chatbots for support and turning to AI therapy apps.