
How AI fuels child sexual exploitation and brainwashing by extremist groups

It is necessary to monitor the use of AI tools by children to prevent them from being misused for harmful purposes

Published on: Dec 01, 2025 2:31 PM IST

Globally, 650 million (or 1 in 5) girls, and between 410-530 million (or around 1 in 7) boys have experienced sexual violence, according to a 2025 report by the United Nations Children’s Fund (UNICEF).

AI-generated images have become so ‘lifelike’ that, in some cases, it is difficult to determine whether real children have been subjected to real harms for their production. (Representational image)

The Internet Watch Foundation (IWF) recorded an 830% increase in online child sexual abuse imagery between 2014 and last year. The IWF was first authorised to proactively hunt for child sexual abuse material in 2014 by Keir Starmer, now the UK Prime Minister, who was then head of the Crown Prosecution Service.

Unfortunately, AI has become a known catalyst for such crimes. AI image generation tools used to create Child Sexual Abuse Material (CSAM) promote and facilitate child sexual exploitation and abuse. CSAM, often referred to as "child pornography," covers any indecent image or video of a child, including pseudo-photographs (digitally created photorealistic images). Shortly after the publication of the IWF's report earlier this year, the UK announced new legislation criminalising AI child abuse tools, making it the first country to do so.

According to a Generative AI (GenAI) bulletin from the U.S. Department of Homeland Security, an offender can use AI to take an image of a child and make it appear as though the child is nude or engaged in sexual acts; create an image of a child being sexually abused via text prompts; manufacture abuse images of fabricated children who nonetheless look like real people; teach other offenders how to engage with children online (i.e., grooming); and revictimise CSAM victims by using AI to edit previously created and shared content into new CSAM.

Moreover, AI-generated images have become so “lifelike” in recent years that, in some cases, it is difficult to determine whether real children have been subjected to real harms for their production, according to several news reports.

“We know without doubt that the AI-produced child sexual abuse material is a rapidly growing problem … There is a misconception that AI-generated images are ‘victimless’ and this could not be further from the truth. We found that many of the offenders are sourcing images of children in order to manipulate them, and that the desire for ‘hardcore’ imagery, escalating from ‘softcore’ is regularly discussed,” said researcher Dr. Deanna Davy in an article published by Anglia Ruskin University, Cambridge, UK, last year.

Safeguarding children by acting promptly is key. “Taking down sensitive, personal content that has been acquired and distributed unlawfully provides major relief to the victim. I, along with my team, ensure that the reported content is taken down as early as 36 hours after the complaint has been filed. We get such complaints regularly, some even naming multiple places/websites where the content has been distributed or uploaded illegally. We take down all instances,” Vinit Kumar, deputy commissioner of police (DCP), Intelligence Fusion & Strategic Operations (IFSO) Special Cyber Crime Cell, Delhi, told HT.

"Especially for women and children, there is a specialised provision on the National Cyber-Crime Reporting Portal (NCRP), developed and maintained by the Indian Cyber Crime Coordination Centre (I4C), Ministry of Home Affairs, where the victim can even file a complaint anonymously," said DCP Kumar. The NCRP can be reached at www.cybercrime.gov.in, where the very first interactive tab, "Women/Children Related Crime," allows anonymous complaint registration: your identity and personal information, such as phone number(s), will not be asked for. If the matter concerns a take-down of unlawful, sensitive content, it can be resolved "as early as 36 hours," as indicated by DCP Kumar. If you wish to share personal details, you can register a complaint through "Register & Track."

AI-enabled brainwashing by extremist groups

"Vulnerable populations are frequently targeted by terrorist groups, including children, who can be easily accessed through online channels. Children spend a significant amount of time online, playing video games or watching videos, and can become targets for terrorist content," said a 2024 report by the International Centre for Counter-Terrorism (ICCT), which is headquartered in the Netherlands.

“There have been reported situations of children acknowledging abuse and seeking support through AI chatbots, giving opportunities for these terrorists to intervene,” the report said. “Once they influence children’s behaviour, terrorist groups could start their exploitation through diverse forms such as child sexual exploitation, promoting self-harm, or recruitment to their ideology.”

Another report published last year by the Combating Terrorism Center (CTC) of the United States Military Academy at West Point, New York, says, "With the arrival and rapid adoption of sophisticated deep-learning models such as ChatGPT, there is growing concern that terrorists and violent extremists could use these AI tools to enhance their operations online and in the real world. Therefore, it is necessary to monitor the use of ChatGPT and other AI tools to prevent them from being misused for harmful purposes."

The report also cites data released by the Global Internet Forum to Counter Terrorism (GIFCT), which delineated the potential uses of AI by extremist groups: generating and distributing propaganda; interactive recruitment through AI-powered chatbots; automated attacks using drones or other autonomous vehicles; social media exploitation to brainwash and recruit followers; and cyber attacks.

ABOUT THE AUTHOR

Aaryamitra Pateriya

Aaryamitra covers vice, business, law, and US news surrounding artificial intelligence. He is based at HT Central Editorial, New Delhi.
