AI-generated child sex abuse images are based on real victims, report finds

23 July 2024, 01:08


The Internet Watch Foundation warned that the AI tools used to create the images remain legal in the UK.

AI is being used to generate deepfake child sexual abuse images of real-life victims, a new report has found.

The Internet Watch Foundation (IWF) said the AI tools used to create the images remain legal in the UK, even though AI child sexual abuse images are illegal.

Images of one victim of child rape and torture were uploaded by her abuser when she was between three and eight years old, the IWF said.

The non-profit said Olivia, not her real name, was rescued by police in 2013, but years later dark web users are using AI tools to computer-generate images of her in new abusive situations.

Offenders are compiling collections of images of named victims, such as Olivia, and using them to fine-tune AI models to create new material, the IWF said.

The organisation said it has discovered one model for generating new images of Olivia, who is now in her 20s, that is available to download for free.

A dark web forum user reportedly shared an anonymous webpage containing links to AI models for 128 different named victims of child sexual abuse.

Other fine-tuned models can generate AI child sexual material of celebrity children.

A spokesperson for the group said: “Although now free of her abuser, Olivia, like many other survivors, is repeatedly victimised every time imagery of her abuse continues to be shared, sold and viewed online.

“This torment has now reached a new level because of the advent of generative text-to-image AI, which is being exploited by offenders.

“Fine-tuned models like Olivia’s have been trained on the imagery that IWF analysts were seeing daily but despite best efforts were unable to eradicate.

“This means that the suffering of survivors is potentially without end, since perpetrators can generate as many images of the children as they want.

“The IWF knows, from talking to adults who have suffered repeated victimisation, that it’s a mental torture to know that their imagery continues to be circulated online.

“For many survivors, the knowledge that they could be identified, or even recognised from images of their abuse is terrifying.”

IWF analysts found 90% of AI images were realistic enough to be assessed under the same law as real child sexual abuse material (CSAM), and that they are becoming increasingly extreme.

It warned “hundreds of images can be spewed out at the click of a button” and some have a “near flawless, photo-realistic quality”.

IWF chief executive Susie Hargreaves said: “We will be watching closely to see how industry, regulators and Government respond to the threat, to ensure that the suffering of Olivia, and children like her, is not exacerbated, reimagined and recreated using AI tools.”

Richard Collard of the NSPCC said: “The speed with which AI generated child abuse is developing is incredibly concerning but is also preventable. Too many AI products are being developed and rolled out without even the most basic considerations for child safety, retraumatising child victims of abuse.

“It is crucial that child protection is a key pillar of any Government legislation around AI safety. We must also demand tough action from tech companies now to stop AI abuse snowballing and ensure that children whose likenesses are being used are identified and supported.”

A Government spokesperson said: “We welcome the Internet Watch Foundation report and will carefully consider their recommendations.

“We are committed to further measures to keep children safe online and go after those that would cause harm, including where AI is used to do so.”

By Press Association
