Cracking the whip on AI-enabled child abuse
The UK’s plan to bring a law to shield children from AI-mediated abuse and punish perpetrators should interest other jurisdictions
The dark side of Artificial Intelligence (AI) portends harm for children, a demographic whose still-developing psychosocial and physiological makeup leaves them vulnerable to begin with. AI tools have changed the way those preying on children operate, vastly reducing the time and effort involved in such vile activities and generating child sexual abuse material (CSAM) that can be traded. To illustrate, images of real-life child sex abuse victims are being fed into AI models to generate newer depictions, as reported by the United Kingdom (UK)-based Internet Watch Foundation (IWF) in 2023.

AI tools are also being used to “de-age” celebrities with pictures collected from the internet. Predators have been found to use AI tools to create graphic images of children who may not have suffered sexual abuse, which are then used to blackmail them. The IWF reports a multifold surge in CSAM since AI’s reach expanded. Beyond generating such material, AI is also being used by predators to disguise themselves, identify potential targets, and “groom” them.
Against such a backdrop, the UK’s plan to enact a law to shield children — annually, some 500,000 children in the country fall prey to child sexual abuse — from AI-mediated abuse and punish perpetrators should interest other jurisdictions. The UK Home Office has proposed five-year jail terms for the possession, creation, or distribution of AI-generated CSAM, and 10-year terms for predators who run websites serving other predators with CSAM, advice, and help with grooming. The new law would also ban AI models currently being used for child abuse. Enacting such laws is imperative for other jurisdictions too, given the borderless challenge that AI and other digital technologies pose. The UK law should pave the way.
