AI action plans should be slowed until safeguards for children in place – NSPCC

28 January 2025, 00:04

A child using a laptop. Picture: PA

The children’s charity has called for statutory safeguards around generative AI to help protect youngsters.

The Government should slow down its artificial intelligence action plans until a statutory duty of care for children is in place around the technology, a leading charity has said.

The NSPCC said generative AI is already being used to create sexual abuse images of children, and urged the Government to consider adopting specific safeguards into legislation to regulate AI.

The charity also found that more than three-quarters of the public (78%) would prefer more robust safety checks on new generative AI tools, even if that meant delaying the launch of such products.

A new study commissioned by the NSPCC also found that 89% of those asked had some level of concern about the safety of AI for children.

We can’t continue with the status quo where tech platforms 'move fast and break things' instead of prioritising children’s safety. For too long, unregulated social media platforms have exposed children to appalling harms that could have been prevented

Chris Sherwood

The charity said it had been receiving reports from children about AI through Childline since 2019.

Earlier this month, the Prime Minister announced plans to boost the AI industry in the UK, and to increase its use in daily life, starting with the civil service, as part of wider Government plans to help grow the economy.

The US has also announced major investment plans in the technology. The current AI boom was sparked by the introduction of the generative AI chatbot ChatGPT in late 2022 and has since grown into the key innovation area of the tech sector.

Chris Sherwood, the NSPCC’s chief executive, said: “Generative AI is a double-edged sword.

“On the one hand it provides opportunities for innovation, creativity and productivity that young people can benefit from; on the other it is having a devastating and corrosive impact on their lives.

“We can’t continue with the status quo where tech platforms ‘move fast and break things’ instead of prioritising children’s safety.

“For too long, unregulated social media platforms have exposed children to appalling harms that could have been prevented.

“Now the Government must learn from these mistakes, move quickly to put safeguards in place and regulate generative AI, before it spirals out of control and damages more young lives.

“The NSPCC and the majority of the public want tech companies to do the right thing for children and make sure the development of AI doesn’t race ahead of child safety.

The potential for harm is unimaginable. AI companies must prioritise the protection of children and the prevention of AI abuse imagery above any thought of profit

Derek Ray-Hill

“We have the blueprints needed to ensure this technology has children’s wellbeing at its heart, now both Government and tech companies must take the urgent action needed to make generative AI safe for children and young people.”

The AI Action Summit, an international conference, is due to take place in Paris next month.

Derek Ray-Hill, interim chief executive at the Internet Watch Foundation, which seeks out and helps remove child sexual abuse imagery from the internet, said existing laws, as well as future AI legislation, must be made robust enough to ensure children are protected from being exploited by the technology.

“Artificial intelligence is one of the biggest threats facing children online in a generation, and the public is rightly concerned about its impact,” he said.

“While the technology has huge capacity for good, at the moment it is just too easy for criminals to use AI to generate sexually explicit content of children – potentially in limitless numbers, even incorporating imagery of real children. The potential for harm is unimaginable.

“AI companies must prioritise the protection of children and the prevention of AI abuse imagery above any thought of profit. It is vital that models are assessed before they go to market, and rigorous risk mitigation strategies must be in place, with protections built into closed-source models from the outset.

“The upcoming AI Bill is a key opportunity to introduce safeguards for models to prevent the generation of AI-generated child sexual abuse material, and child sexual abuse laws must be updated in line with emerging harms, to prevent AI technology being exploited to create child sexual abuse material.”

By Press Association
