Social media firms ‘failing to detect and remove’ suicide and self-harm content

15 August 2024, 16:54

The Molly Rose Foundation is calling for a new Online Safety Bill to strengthen content regulation.

Some of the biggest social media platforms are failing to detect and remove dangerous suicide and self-harm content, according to a new study.

Research from the Molly Rose Foundation found that of more than 12 million content moderation decisions made by six of the biggest platforms, over 95% were made by just two sites – Pinterest and TikTok.

The charity said Meta’s Instagram and Facebook were each responsible for just 1% of all suicide and self-harm content detected by the major sites studied, while X, formerly known as Twitter, was responsible for just one in 700 content decisions.

The study analysed publicly available records of more than 12 million content moderation decisions taken by six sites – Facebook, Instagram, Pinterest, Snapchat, TikTok and X – and found the response of most platforms to suicide and self-harm content was “inconsistent, uneven and unfit for purpose”.

The Foundation is now warning that the Online Safety Act does not go far enough to address what it says are clear systematic failures in the content moderation approach of social media firms.

The charity’s chairman, Ian Russell, has urged the Government to commit to a new Online Safety Bill that can further strengthen regulation.

Mr Russell and his family set up the Molly Rose Foundation in memory of his daughter, Molly, who ended her life aged 14 in November 2017, after viewing harmful content on social media.

“Almost seven years after Molly died, it’s shocking to see most major tech companies continue to sit on their hands and choose inaction over saving young lives,” Mr Russell said.

“As the last few weeks have shown, it’s abundantly clear that much more ambitious regulation is required.

“That’s why it’s time for the new Government to finish the job and commit to a strengthened Online Safety Act.

“Parents across the country will be rightly appalled that the likes of Instagram and Facebook promise warm words but continue to expose children to inherently preventable harm.

“No ifs, no buts, it’s clear that assertive action is required.”

In its report, the Foundation said it had found that social media sites were routinely failing to detect harmful content on the highest-risk parts of their services.

For example, it said only one in 50 suicide and self-harm posts detected by Instagram were videos, despite the short-form video feature Reels now accounting for half of all time spent on the app.

The study also accused sites of failing to enforce their own rules, noting that while TikTok detected almost three million items of suicide and self-harm content, it suspended only two accounts.

The research was based on content moderation decisions made in the EU, which platforms are required to make publicly accessible under the EU’s Digital Services Act.

In response to the study, a Meta spokesperson said: “Content that encourages suicide and self-injury breaks our rules.

“We don’t believe the statistics in this report reflect our efforts. In the last year alone, we removed 50.6m pieces of this kind of content on Facebook and Instagram globally, and 99% was actioned before it was reported to us.

“However, in the EU we aren’t currently able to deploy all of our measures that run in the UK and the rest of the world.”

A spokesperson for Snapchat said: “The safety and wellbeing of our community is a top priority. Snapchat was designed to be different to other platforms, with no open newsfeed of unvetted content, and content moderation prior to public distribution.

“We strictly prohibit content that promotes or encourages self-harm or suicide, and if we identify this, or it is reported to us, we remove it swiftly and take appropriate action.

“We also share self-harm prevention and support resources when we become aware of a member of our community in distress, and can notify emergency services when appropriate.

“We also continue to work closely with Ofcom on implementing the Online Safety Act, including the protections for children against these types of harm.”

TikTok did not provide a statement but said its rules were clear that it did not allow showing, promoting or sharing plans for suicide or self-harm.

A Department for Science, Innovation and Technology spokesperson said: “Social media companies have a clear responsibility to keep the people using their platforms safe and their processes to do so must be effective.

“Under the Online Safety Act, those who encourage self-harm with intent currently face up to five years in prison. Once the Act is fully implemented platforms will also have to proactively remove illegal content that encourages serious self-harm and stop children seeing material promoting self-harm or suicide, even when it falls below the criminal threshold.

“We want to get these new protections in place as soon as possible, but companies should not wait for laws to come into force – they must take effective action to protect all users now.”

Pinterest said it continues to heavily invest in the policies and systems that help it combat the spread of suicide and self-harm content on its platform, and that this work was ongoing.

X did not respond to a request for comment.

By Press Association
