British Technology Firms and Child Safety Agencies to Examine AI's Ability to Generate Abuse Content
Tech firms and child protection agencies will be granted permission to assess whether AI systems can generate child exploitation images under recently introduced UK laws.
Substantial Rise in AI-Generated Illegal Material
The declaration coincided with findings from a protection watchdog showing that cases of AI-generated child sexual abuse material have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the amendments, the authorities will allow designated AI developers and child safety groups to examine AI systems – the foundational technology behind conversational AI and image generators – and verify that they have adequate safeguards to prevent them from producing depictions of child exploitation.
"This is fundamentally about preventing abuse before it occurs," declared the minister for AI and online safety, adding: "Under rigorous conditions, specialists can now detect dangers in AI models early."
Tackling Legal Obstacles
The amendments address a legal obstacle: because creating and possessing CSAM is against the law, AI developers and other parties could not generate such images even as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM appeared online before taking action against it.
This law aims to avert that problem by helping to stop the production of such material at its source.
Legislative Framework
The government is introducing the changes as amendments to criminal justice legislation, which also implements a prohibition on owning, creating or distributing AI systems designed to generate child sexual abuse material.
Real-World Consequences
This week, the minister visited the London headquarters of a children's helpline and listened to a mock-up call to counsellors featuring an account of AI-based abuse. The call portrayed a teenager seeking help after being subjected to extortion using an explicit deepfake of themselves, created with AI.
"When I learn about children being blackmailed online, it fills me with intense anger, and it causes justified anger amongst parents," he stated.
Concerning Data
A leading online safety foundation reported that instances of AI-generated abuse content – counted as web pages, each of which may contain multiple images – have risen sharply so far this year.
Instances of the most severe category of material – depicting the most serious forms of exploitation – rose from 2,621 images or videos to 3,086.
- Girls were predominantly victimized, accounting for 94% of illegal AI depictions in 2025
- Depictions of infants aged up to two rose from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "represent a vital step to guarantee AI tools are secure before they are launched," stated the head of the internet monitoring foundation.
"AI tools have made it possible for victims to be victimised repeatedly with just a few simple actions, giving criminals the ability to create potentially endless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Material which further commodifies the trauma of survivors, and makes young people, particularly girls, less safe both online and offline."
Support Interaction Data
Childline also released details of support interactions where AI has been mentioned. AI-related harms discussed in the sessions comprise:
- Using AI to evaluate weight, body and appearance
- AI assistants dissuading children from consulting trusted adults about harm
- Being bullied online with AI-generated material
- Online extortion using AI-faked images
Between April and September this year, Childline conducted 367 support sessions in which AI, conversational AI and associated topics were discussed – four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.