British Tech Firms and Child Safety Officials to Examine AI's Capability to Generate Exploitation Images

Technology companies and child safety organizations will be granted permission to assess whether artificial intelligence systems can produce child exploitation images under new UK legislation.

Substantial Increase in AI-Generated Harmful Content

The announcement came as a protection watchdog revealed that reports of AI-generated CSAM have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, the government will allow approved AI companies and child protection organizations to inspect AI models – the underlying technology behind conversational AI and image generators – to ensure they have adequate protective measures to prevent them from producing images of child sexual abuse.

The measure is "ultimately about preventing exploitation before it occurs," stated Kanishka Narayan, adding: "Experts, under rigorous conditions, can now identify the risk in AI systems early."

Addressing Regulatory Challenges

The changes have been introduced because it is against the law to produce and own CSAM, meaning that AI creators and others cannot generate such images as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM was uploaded online before dealing with it.

This law is designed to prevent that problem by helping to stop the creation of those images at their origin.

Legal Framework

The amendments are being added by the government as revisions to the crime and policing bill, which is also establishing a ban on owning, creating or distributing AI models developed to generate child sexual abuse material.

Practical Impact

Recently, the official visited the London headquarters of Childline and listened to a simulated call to advisors involving a report of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed using an explicit deepfake of themselves, constructed using AI.

"When I learn about young people experiencing blackmail online, it causes intense frustration in me and rightful concern amongst parents," he stated.

Alarming Data

A leading internet monitoring organization stated that cases of AI-generated abuse content – such as online pages that may include numerous files – had more than doubled so far this year.

Instances of category A material – the most serious form of exploitation – rose from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly victimized, making up 94% of prohibited AI depictions in 2025
  • Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "constitute a crucial step to ensure AI products are safe before they are launched," commented the chief executive of the online safety foundation.

"AI tools have made it so victims can be victimised repeatedly with just a few simple actions, giving criminals the capability to create possibly limitless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Material which further exploits victims' suffering, and makes young people, particularly girls, less safe both online and offline."

Counseling Session Information

Childline also published details of support sessions where AI has been referenced. AI-related risks discussed in the sessions include:

  • Employing AI to evaluate body size and appearance
  • AI assistants discouraging children from talking to safe adults about abuse
  • Being bullied online with AI-generated material
  • Online extortion using AI-faked images

Between April and September this year, the helpline conducted 367 support interactions where AI, chatbots and associated topics were discussed, four times as many as in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including utilizing chatbots for assistance and AI therapeutic applications.

Brandon Allen

An art historian and cultural enthusiast with a passion for Italian heritage and museum curation.