Technology companies and child safety organizations will be granted permission to evaluate whether artificial intelligence systems can generate child abuse material under new British laws.
The announcement came as a safety watchdog revealed that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Under the changes, the government will permit designated AI developers and child safety groups to examine AI systems – the foundational technology for conversational AI and visual AI tools – and verify they have sufficient protective measures to stop them from creating depictions of child sexual abuse.
"This is fundamentally about preventing exploitation before it happens," said Kanishka Narayan, adding: "Specialists, under strict conditions, can now detect the risk in AI models early."
The changes have been introduced because producing and possessing CSAM is against the law, meaning AI developers and others cannot generate such images as part of a testing process. Previously, authorities had to wait until AI-generated CSAM was published online before acting on it.
This legislation is designed to prevent that problem by helping to halt the creation of such images at source.
The amendments are being introduced by the government as revisions to the crime and policing bill, which is also establishing a ban on owning, producing or sharing AI systems designed to generate child sexual abuse material.
This week, the official toured the London headquarters of a children's helpline and listened to a simulated call to counsellors involving an account of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with an explicit deepfake of himself, created using AI.
"When I hear about children facing extortion online, it is a source of intense anger for me and rightful concern amongst parents," he said.
A leading online safety organization stated that cases of AI-generated exploitation content – such as webpages that may contain multiple files – had significantly increased so far this year.
Instances of category A content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.
The law change could "represent a crucial step to ensure AI products are secure before they are launched," stated the head of the internet monitoring foundation.
"AI tools have made it so victims can be victimised repeatedly with just a few simple actions, giving criminals the ability to make potentially endless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Content which additionally exploits survivors' trauma, and makes young people, especially female children, less safe on and offline."
Childline also released details of counselling sessions in which AI was mentioned, including the AI-related harms discussed.
Between April and September this year, the helpline conducted 367 counselling sessions in which AI, chatbots and related terms were mentioned, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.