Tech firms and child safety agencies will receive permission to evaluate whether AI systems can generate child abuse images under new UK laws.
The announcement coincided with findings from a protection monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Under the amendments, the authorities will allow designated AI developers and child protection groups to inspect AI models – the underlying systems for chatbots and image generators – and ensure they have sufficient protective measures to prevent them from producing images of child exploitation.
"It is fundamentally about preventing exploitation before it happens," stated Kanishka Narayan, adding: "Experts, under strict protocols, can now identify the risk in AI models promptly."
The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others cannot generate such images as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM was published online before dealing with it.
The law is designed to prevent that problem by allowing authorities to halt the production of such images at source.
The amendments are being introduced by the authorities as modifications to the crime and policing bill, which is also implementing a prohibition on possessing, creating or sharing AI systems developed to create child sexual abuse material.
This week, the official toured the London base of a children's helpline and listened to a simulated conversation with advisors involving an account of AI-based abuse. The interaction portrayed an adolescent seeking help after facing extortion using an explicit deepfake of themselves, created with AI.
"When I hear about children experiencing extortion online, it is a source of extreme frustration to me and of justified anger amongst parents," he said.
A leading online safety foundation stated that cases of AI-generated exploitation material – such as webpages that may include multiple files – had more than doubled so far this year.
Instances of category A content – the gravest form of exploitation – increased from 2,621 visual files to 3,086.
The law change could "represent a crucial step to guarantee AI products are secure before they are released," stated the chief executive of the internet monitoring foundation.
"AI tools have made it possible for survivors to be targeted all over again with just a few simple actions, giving offenders the capability to make potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she added. "Material which further commodifies survivors' trauma, and makes children, particularly female children, less safe both on and off line."
The children's helpline also released information about support sessions in which AI was mentioned.
Between April and September this year, the helpline conducted 367 support sessions in which AI, conversational AI and related terms were discussed, four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.