UK Technology Firms and Child Protection Officials to Examine AI's Capability to Generate Exploitation Content
Technology companies and child protection organizations will be given the authority to test whether artificial intelligence tools can produce child sexual abuse material (CSAM) under recently introduced UK legislation.
Substantial Increase in AI-Generated Harmful Content
The announcement came as figures from a safety watchdog showed that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the changes, the government will authorise designated AI companies and child safety organizations to examine AI models – the systems underlying chatbots and image generators – and to ensure they have sufficient safeguards to prevent them from producing depictions of child sexual abuse.
"Ultimately about preventing abuse before it occurs," stated Kanishka Narayan, adding: "Experts, under rigorous conditions, can now identify the danger in AI models promptly."
Tackling Legal Challenges
The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others cannot generate such content even as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was published online before they could act.
The law is designed to avert that problem by enabling the production of such images to be stopped at source.
Legislative Changes
The government is introducing the amendments as revisions to criminal justice legislation, which also establishes a ban on possessing, creating or distributing AI systems designed to generate exploitative content.
Real-World Impact
Recently, the official visited the London headquarters of a children's helpline and listened to a mock-up call to counsellors involving a report of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I hear about children facing blackmail online, it is a source of extreme anger in me and rightful concern amongst families," he said.
Alarming Statistics
A leading online safety organization said that reports of AI-generated abuse material – recorded as web pages, each of which may contain multiple files – had risen significantly so far this year:

- Instances of category A material – the most serious form of exploitation – increased from 2,621 visual files to 3,086
- Female children were overwhelmingly targeted, accounting for 94% of prohibited AI images in 2025
- Depictions of children ranging from infants to toddlers increased from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "constitute a vital step to ensure AI products are safe before they are released," stated the head of the online safety foundation.
"Artificial intelligence systems have made it so survivors can be targeted repeatedly with just a few clicks, giving offenders the ability to make possibly endless amounts of advanced, photorealistic child sexual abuse material," she continued. "Content which additionally exploits survivors' trauma, and renders children, particularly female children, more vulnerable on and off line."
Counselling Session Data
Childline also published details of counselling sessions in which AI was mentioned. AI-related risks discussed in the conversations include:
- Using AI to rate weight, body and appearance
- Chatbots dissuading young people from consulting trusted adults about abuse
- Being harassed online with AI-generated content
- Digital blackmail using AI-faked images
Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and related topics were discussed, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.