Charity issues AI warning over online child sexual abuse
15 February 2024, 00:04
New research from the Lucy Faithfull Foundation shows many adults worry about AI but are unaware it is already being used in online abuse.
Most UK adults fear advances in artificial intelligence (AI), particularly its potential to harm children, but many are unaware of how it is already being used in sexual abuse, according to new research.
The study, from child protection charity the Lucy Faithfull Foundation, found that 66% of adults said they had concerns about the impact of the technology on children.
Despite this, the research found that the majority (70%) were unaware that AI was already being used to generate sexual abuse images of children, and that 40% did not know such content was illegal.
The Lucy Faithfull Foundation said it was publishing its findings in an effort to boost awareness around the use of AI to exploit children.
The foundation, which runs the anonymous and confidential helpline Stop It Now, said it wanted to remind people of the law and what constitutes online child sexual abuse in the UK.
Donald Findlater, director of the helpline, said: “With AI and its capabilities rapidly evolving, it’s vital that people understand the dangers and how this technology is being exploited by online child sex offenders every day.
“Our research shows there are serious knowledge gaps amongst the public regarding AI – specifically its ability to cause harm to children.
“Unfortunately, the reality is that people are using this new, and unregulated, technology to create illegal sexual images of children, as well as so-called ‘nudified images’ of real children, including children who have been abused.
“People must know that AI is not an emerging threat – it’s here, now. Stop It Now helpline advisers deal with hundreds of people every week seeking help to stop viewing of sexual images of under-18s.
“These callers include people viewing AI-generated child sexual abuse material. We want the public to be absolutely clear that viewing sexual images of under-18s, whether AI-generated or not, is illegal and causes very serious harm to real children across the world.
“To anyone that needs support to change their online behaviours, contact the Stop It Now helpline. We can offer free, confidential support before it’s too late.”
The campaign to boost awareness has been backed by the Internet Watch Foundation (IWF), which proactively tracks down and removes child sexual abuse imagery online.
Dan Sexton, IWF chief technical officer, said: “There is a very real risk we could face a landslide of AI-generated child sexual abuse which is completely indistinguishable from real images of children being raped and sexually tortured.
“These new technologies are allowing offenders to mass produce imagery of children who have already suffered abuse in real life, and create imagery of those same children in new scenarios.
“The potential impact for those trying to rid the internet of this material is profound – and the impact on children, who are made victims all over again every time this imagery is shared, is hard to comprehend.
“This is a bleak new front in the war on child sexual abuse imagery, and one which could well get out of control if serious action is not taken now to get a grip on it.”