UK to require pre‑release AI testing to block child abuse image creation
Britain is moving to put a checkpoint in front of every new AI model that could be misused. The proposal, announced this week, would require developers to run a safety assessment before any such model reaches the public. Officials say the focus is on tools that can generate images, a capability that has already been weaponised to produce illegal child‑abuse material.
The Internet Watch Foundation, a charity that flags harmful online content, warned that without pre‑release testing, “apparently benign” generators can be repurposed for abuse. Child‑rights advocates have rallied behind the plan, arguing that voluntary checks are not enough. They are pressing for mandatory safety tests, a demand that has found a receptive audience in government circles.
The upcoming regulations aim to close a gap that critics say has allowed offenders to stay one step ahead of law enforcement.
Safeguarding Minister Jess Phillips said the regulations are designed to stop seemingly harmless AI tools from being turned into instruments for creating abusive content.
Child protection groups push for mandatory AI safety tests
The Internet Watch Foundation (IWF), one of the few organisations authorised to proactively search for child sexual abuse material (CSAM), supports the initiative. According to the IWF, reports of AI-generated abuse imagery have surged: between January and October 2025, the group removed 426 AI-related CSAM items, up from 199 during the same period in 2024. IWF CEO Kerry Smith warned that AI enables abusers to revictimise survivors.
The proposal marks a concrete step toward curbing the misuse of generative AI, but its practical impact is still uncertain. By extending the Crime and Policing Bill, the government would give authorised testers, including tech firms and child protection groups, a legal foothold to probe models before they reach the market.
Child safety advocates have pressed for mandatory testing, arguing that voluntary checks may leave gaps. The Internet Watch Foundation, one of the few bodies equipped to assess such risks, is expected to play a role, yet the proposal does not yet detail how its findings would be enforced. It therefore remains unclear whether the testing regime will be applied uniformly or how compliance will be monitored.
If the framework succeeds, it could prevent AI systems from becoming vectors for CSAM; if not, the risk of loopholes remains. The legislation’s final shape and its effectiveness in practice will only become clear after implementation.
Further Reading
- The UK's AI Strategy: Balancing Economic Potential with Security - Infosecurity Europe
- EU & UK AI Round-up – July 2025 - King & Spalding
- AI Watch: Global regulatory tracker - United Kingdom - White & Case
- 2025 AI Safety Index - Future of Life Institute
- AI Now Statement on the UK AI Safety Institute transition to the UK AI Security Institute - AI Now Institute
Common Questions Answered
What new requirement will the UK impose on AI models that can generate images?
The UK will require developers to conduct a pre‑release safety assessment for any AI model capable of generating images before it can be released to the public. This mandatory testing aims to prevent the tools from being repurposed to create illegal child‑abuse imagery.
How does the Internet Watch Foundation (IWF) view the proposed AI safety tests?
The IWF supports the initiative, noting a sharp rise in AI‑generated child‑abuse imagery reports between January and October 2025. As one of the few bodies authorised to proactively search for CSAM, the IWF believes mandatory testing will help curb this emerging threat.
Which legislation will be amended to give authorised testers legal authority over AI models?
The government plans to extend the Crime and Policing Bill, granting authorised testers—such as tech firms and child‑protection groups—a legal foothold to probe AI models before market release. This amendment is intended to formalise the pre‑release testing regime.
What role does Safeguarding Minister Jess Phillips attribute to the new AI regulations?
Jess Phillips explains that the regulations are designed to stop seemingly harmless AI tools from being turned into instruments for creating abusive content. She frames the move as a safeguard against the repurposing of generative AI for illegal imagery.