Social media companies must ‘get a grip’ on removing child abuse images

13 September 2021, 11:04

A child using a laptop computer. Record levels of internet grooming spark calls for a stronger Online Safety Bill. Picture: Dominic Lipinski/PA

The All-Party Parliamentary Group (APPG) on Social Media said there are ‘very real concerns’ about the impact of encryption on child protection.

Social media companies must not encrypt messages unless they can guarantee they can keep platforms free of illegal content, an inquiry has warned.

The All-Party Parliamentary Group (APPG) on Social Media is calling for companies to step up and do more to protect children from online grooming and sexual abuse.

It launched its inquiry into the “disturbing” rise of so-called “self-generated” child sexual abuse material last November.

The cross-party MPs say the Home Office must review legislation to ensure it is as easy as possible for children to have their images removed from the internet.

Self-generated content can include material filmed using webcams, very often in the child’s own room, and then shared online.

In some cases, children are groomed, deceived or extorted into producing and sharing a sexual image or video of themselves.

The report, Selfie Generation – What’s Behind The Rise Of Self-Generated Indecent Images Of Children?, says the trend “seems to have been exacerbated by the Covid-19 crisis”.

The amount of “self-generated” child sex abuse material recorded by the Internet Watch Foundation (IWF) between January and April this year was more than double that detected during the same period in 2020.

Experts believe the increase in offenders exchanging child sex abuse material during lockdowns may continue to stimulate demand beyond the pandemic.

The MPs say many witnesses “raised very real concerns” about the impact of encryption on child protection, saying it could “cripple” the ability of programmes to detect illegal imagery.

They write: “The APPG believes it is completely unacceptable for a company to encrypt a service that has many child users.

“Doing this would do so much damage to child protection.

“We recommend that technology companies do not encrypt their services until a workable solution can be found that ensures equivalency with the current arrangements for the detection of this imagery.”

Labour MP Chris Elmore, chairman of the APPG, said social media companies must be more proactive in rooting out abusive images, and be clear to young users how they can complain about them.

He said: “It’s high time that we take meaningful action to fix this unacceptable mess.

“Children are daily at real risk of unimaginable cruelty, abuse and, in some instances, death.

“Social media companies are fundamentally failing to discharge their duties, and simply ignoring what should be an obvious moral obligation to keep young users safe.

“They need to get a grip, with institutional re-design, including the introduction of a duty-of-care on the part of companies toward their young users.”

The term “self-generated” should “not be taken to imply that such children have any share in the moral responsibility for their abuse”, he added.

Among 10 recommendations, the report says it should be replaced by “first person produced imagery” to avoid inadvertent victim blaming.

Susie Hargreaves, director of the UK Safer Internet Centre, said: “We see the fallout of abuse and, when children are targeted and made to abuse themselves on camera by criminal adult predators, it has a heart-breaking effect on children and their families.

“There is hope, and there are ways for children and young people to fight back.

“The Report Remove tool we launched this year with Childline empowers young people to have illegal images and videos of themselves removed.”

She added: “New legislation will also help make a difference, and the forthcoming Online Safety Bill is a unique opportunity to make the UK a safer place to be online, particularly for children.”

A Home Office spokeswoman said: “Keeping children safe is one of our highest priorities and the strongest measures contained in the Online Safety Bill are designed to protect children.

“If social media companies do not properly assess or take action against the risks their sites pose to children, they will face heavy fines or have their sites blocked. The Bill will further make tech companies accountable to an independent regulator.

“We are clear that companies must continue to take responsibility for stopping the intolerable level of harmful material on their platforms and embed public safety in their system designs, which is why the Bill will also compel them to consider the risks associated with all elements of their services and take robust action to keep their users safe.”

By Press Association
