AI Firms and Child Safety Groups to Test Models for Abuse Imagery Under New UK Law
- By Katherine Foster
- 03 Mar 2026
Technology companies and child safety organizations will be granted permission to evaluate whether AI tools can produce child exploitation material under recently introduced British legislation.
The announcement coincided with findings from a protection watchdog showing that reports of AI-generated CSAM have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Under the amendments, the authorities will allow designated AI companies and child protection groups to examine AI systems – the underlying technology for chatbots and image generators – and verify they have adequate protective measures to stop them from producing images of child sexual abuse.
"Ultimately about preventing exploitation before it occurs," stated the minister for AI and online safety, noting: "Experts, under strict conditions, can now detect the risk in AI models early."
The amendments were needed because producing and possessing CSAM is illegal, meaning AI developers and other parties could not create such content even as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM had been uploaded online before addressing it.
The new law aims to avert that problem by making it possible to halt the creation of such material at its source.
The government is introducing the amendments as modifications to the criminal justice legislation, which also establishes a ban on possessing, producing or distributing AI models designed to generate exploitative content.
Recently, the minister visited the London headquarters of Childline and listened to a mock-up of a call to counsellors involving an account of AI-enabled exploitation. The call portrayed a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I learn about young people facing blackmail online, it fills me with extreme frustration and families with justified anger," he stated.
A prominent online safety organization reported that instances of AI-generated abuse material – each of which can be a webpage containing numerous files – had more than doubled so far this year.
Instances of the most serious category of material increased from 2,621 images and videos to 3,086.
The law change could "constitute a vital step to guarantee AI products are secure before they are released," stated the chief executive of the online safety foundation.
"AI tools have enabled so survivors can be victimised all over again with just a simple actions, giving offenders the ability to create potentially limitless amounts of advanced, lifelike exploitative content," she added. "Content which further commodifies survivors' suffering, and makes children, especially girls, more vulnerable on and off line."
Childline also released details of counselling sessions in which AI was mentioned.
Between April and September this year, the helpline delivered 367 support interactions in which AI, chatbots and related terms were mentioned, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.