TikTok ‘pushing harmful content into teenagers’ feeds’, study says

16 December 2022, 11:15


A new study from the Center for Countering Digital Hate says it saw harmful content being recommended to accounts within minutes of joining TikTok.

Some young TikTok users are being shown potentially dangerous content which could encourage eating disorders, self-harm and suicide, an online safety group has claimed.

Research into the TikTok algorithm by the Center for Countering Digital Hate (CCDH) found certain accounts were repeatedly being served content around eating disorders and other harmful topics in the minutes after joining the platform.

The group created two accounts in each of the US, UK, Australia and Canada posing as 13-year-olds. One account in each country was given a female name and the other was given a similar name but with a reference to losing weight included in the username.


The content served to both accounts in their first 30 minutes on TikTok was then compared.

The CCDH said it used this username method because previous research has shown that some users with body dysmorphia often express this through their social media handles.

In its report methodology, the CCDH also said accounts used in the study expressed a preference for videos about body image, mental health, and eating disorders by pausing on relevant videos and pressing the like button.

In addition, the report does not distinguish between content with positive intent and content with clearly negative intent, with the CCDH arguing that in many cases it was not possible to definitively determine a video’s intent, and that even well-intentioned videos could still be distressing to some viewers.

The online safety group’s report argues that the sheer speed with which TikTok recommends content to new users is harmful.

During its test, the CCDH said one of its accounts was served content referencing suicide within three minutes of joining TikTok and eating disorder content was served to one account within eight minutes.

It said on average, its accounts were served videos about mental health and body image every 39 seconds.

And the research indicated that the more vulnerable accounts – which included the references to body image in the username – were served three times more harmful content and 12 times more self-harm and suicide-related content.

The CCDH said the study had found an eating disorder community on TikTok which uses both coded and open hashtags to share material on the site, with more than 13 billion views of their videos.

The video-sharing platform includes a For You page, which uses an algorithm to recommend content to users, refining its suggestions as the app gathers more information about a user’s interests and preferences through their interactions.


Imran Ahmed, chief executive of the CCDH, accused TikTok of “poisoning the minds” of younger users.

“It promotes to children hatred of their own bodies and extreme suggestions of self-harm and disordered, potentially deadly, attitudes to food,” he said.

“Parents will be shocked to learn the truth and will be furious that lawmakers are failing to protect young people from big tech billionaires, their unaccountable social media apps and increasingly aggressive algorithms.”

In the wake of the research, the CCDH has published a new Parents’ Guide alongside the Molly Rose Foundation, which was set up by Ian Russell after his daughter Molly ended her own life after viewing harmful content on social media.

The guide encourages parents to speak “openly” with their children about social media and online safety and to seek help from support groups if concerned about their child.

In response to the research, a TikTok spokesperson said: “This activity and resulting experience does not reflect genuine behaviour or viewing experiences of real people.

“We regularly consult with health experts, remove violations of our policies, and provide access to supportive resources for anyone in need.

“We’re mindful that triggering content is unique to each individual and remain focused on fostering a safe and comfortable space for everyone, including people who choose to share their recovery journeys or educate others on these important topics.”

By Press Association
