British Technology Firms and Child Protection Agencies to Examine AI's Ability to Generate Abuse Images

Technology companies and child safety organizations will be given authority to assess whether AI tools can generate child abuse images under recently introduced British legislation.

Significant Rise in AI-Generated Harmful Material

The announcement coincided with revelations from a protection monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the changes, the government will allow approved AI developers and child safety organizations to examine AI systems – the underlying technology behind conversational AI and image generators – and ensure they have sufficient protective measures to stop them from creating images of child sexual abuse.

"Ultimately about stopping exploitation before it occurs," stated the minister for AI and online safety, noting: "Experts, under rigorous protocols, can now detect the danger in AI systems promptly."

Tackling Regulatory Obstacles

The amendments have been introduced because it is against the law to create or possess CSAM, meaning that AI developers and other parties cannot generate such content even as part of a testing regime. Previously, officials had to wait until AI-generated CSAM was published online before dealing with it. The legislation aims to avert that problem by helping to stop the creation of this material at source.

Legislative Framework

The government is adding the amendments as modifications to the crime and policing bill, which also establishes a prohibition on possessing, creating or distributing AI models developed to create exploitative content.

Practical Consequences

This week, the official toured the London base of Childline and heard a mock-up call to counsellors involving an account of AI-based exploitation.
The call depicted a teenager requesting help after being blackmailed with an explicit AI-generated image of themselves.

"When I hear about young people facing extortion online, it is a source of intense frustration in me and rightful anger amongst parents," he said.

Concerning Statistics

A leading online safety foundation stated that cases of AI-generated abuse material – such as online pages that may contain multiple files – had more than doubled so far this year. Cases of category A content – the gravest form of abuse – rose from 2,621 images or videos to 3,086.

- Female children were overwhelmingly targeted, accounting for 94% of illegal AI images in 2025
- Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025

Industry Response

The law change could "constitute a vital step to ensure AI products are safe before they are launched," stated the chief executive of the internet monitoring foundation.

"Artificial intelligence systems have made it so victims can be targeted all over again with just a few clicks, giving criminals the ability to create potentially limitless amounts of advanced, lifelike exploitative content," she continued. "Material which further commodifies survivors' trauma, and renders children, particularly girls, less safe both online and offline."

Counseling Session Data

Childline also released details of support interactions in which AI was mentioned. AI-related risks raised in the conversations include:

- Using AI to evaluate weight, body and appearance
- AI assistants discouraging children from talking to safe adults about harm
- Being bullied online with AI-generated material
- Digital extortion using AI-faked images

Between April and September this year, Childline conducted 367 counselling interactions in which AI, conversational AI and related topics were discussed, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions were connected with psychological and emotional wellbeing, encompassing the use of chatbots for support and AI therapeutic applications.