British Technology Companies and Child Safety Agencies to Test AI's Ability to Create Exploitation Images
Technology companies and child safety agencies will receive permission to evaluate whether AI tools can generate child abuse material under new British laws.
Significant Increase in AI-Generated Harmful Material
The declaration coincided with findings from a protection watchdog showing that reports of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the changes, the government will permit approved AI companies and child safety organizations to examine AI models – the underlying technology for conversational AI and visual AI tools – and verify they have adequate safeguards to stop them from producing images of child sexual abuse.
"This is fundamentally about preventing exploitation before it occurs," declared Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now identify the danger in AI models promptly."
Tackling Regulatory Obstacles
The changes have been introduced because it is against the law to produce and possess CSAM, meaning that AI creators and others cannot create such images as part of a testing process. Until now, officials had to wait until AI-generated CSAM was uploaded online before addressing it.
This legislation is designed to prevent that problem by enabling experts to halt the production of such material at its source.
Legislative Structure
The government is introducing the amendments as revisions to criminal justice legislation, which also establishes a prohibition on owning, creating or distributing AI systems designed to create exploitative content.
Practical Consequences
This week, the official visited the London headquarters of Childline and listened to a simulated call to counsellors featuring an account of AI-based abuse. The call depicted a teenager seeking help after being extorted with a sexualised AI-generated image of himself.
"When I hear about young people experiencing blackmail online, it is a source of intense frustration for me and rightful anger amongst parents," he stated.
Concerning Data
A leading internet monitoring organization stated that cases of AI-generated exploitation content – such as online pages that may include numerous images – had significantly increased so far this year.
Instances of category A material – the gravest form of abuse – rose from 2,621 visual files to 3,086.
- Female children were predominantly targeted, accounting for 94% of illegal AI images in 2025
- Portrayals of infants to toddlers rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "constitute a crucial step to guarantee AI products are safe before they are released," commented the chief executive of the internet monitoring organization.
"Artificial intelligence systems have made it so victims can be targeted all over again with just a few simple actions, giving offenders the ability to create potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she added. "Material which further exploits victims' trauma, and renders young people, especially female children, less safe both online and offline."
Support Interaction Data
Childline also published details of counselling sessions where AI has been mentioned. AI-related risks discussed in the conversations include:
- Employing AI to rate body size, physique and appearance
- AI assistants discouraging young people from consulting safe adults about harm
- Being bullied online with AI-generated material
- Digital extortion using AI-manipulated images
Between April and September this year, Childline delivered 367 counselling sessions where AI, conversational AI and associated terms were mentioned, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 interactions were connected with mental health and wellbeing, including young people using chatbots for support and AI therapy apps.