Molly Russell: How regulated is social media and how could it change?

30 September 2022, 12:24

Molly Russell inquest. Picture: PA

The inquest into the 14-year-old’s death has put the mechanisms of social media platforms and how safe they are into sharp focus.

The inquest into the death of Molly Russell has placed renewed focus on the regulation of social media platforms and the need for better systems to protect users from harmful content.

The 14-year-old, from Harrow, north-west London, is known to have viewed material linked to topics such as depression, self-harm and suicide before ending her life in November 2017, prompting her family to campaign for better internet safety.

Currently, most social media and search engine platforms operating in the UK are not subject to any large-scale regulation specifically concerning user safety, beyond a handful of laws covering the sending of threatening or indecent electronic communications.

Instead, these platforms are relied upon to self-regulate, using a mixture of human moderators and artificial intelligence to find and take down illegal or harmful material, either proactively or when users report it to them.

Platforms lay out what types of content are and are not allowed on their sites in their terms of service and community guidelines, which are regularly updated to reflect the evolving themes and trends that appear in the rapidly moving digital world.

However, critics say this system is flawed for a number of reasons, including that what is and is not regarded as safe or acceptable online can vary widely from site to site, and that many moderation systems struggle to keep up with the vast amounts of content being posted.

Concerns have also been raised about the workings of the algorithms used to serve users content a platform thinks might interest them. These recommendations are often based on a user's habits on the site, which can mean that someone who searches for material linked to depression or self-harm is shown more of it in the future.

In addition, some platforms argue that certain types of content which are not illegal – but could be considered offensive or potentially harmful by some – should be allowed to remain online to protect free speech and expression.

As a result, large amounts of harmful content can be found on social media today, as platforms struggle to moderate the sheer scale of material being posted while balancing users' freedom to express themselves against the need to keep their online spaces safe.

Molly Russell took her own life in November 2017 after viewing online material linked to anxiety, depression, self-harm and suicide (Family handout/PA)

During the inquest, evidence given by executives from both Meta and Pinterest highlighted these issues.

Pinterest executive Judson Hoffman admitted the platform was “not safe” when Molly accessed it in 2017 because it did not have in place the technology it has now.

And Meta executive Elizabeth Lagone’s evidence highlighted the issue of understanding the context of certain posts when she said some of the content seen by Molly was “safe” or “nuanced and complicated”, arguing that in some instances it was “important” to give people a voice if they were expressing suicidal thoughts.

During the inquest, coroner Andrew Walker said the opportunity to make social media safe must not “slip away”, as he voiced concerns about the platforms.

He outlined a range of concerns, including the lack of separation between children and adults on social media, inadequate age verification, the type of content available to children and recommended to them by algorithms, and insufficient parental oversight of under-18s.

The UK’s plan to change this landscape is the Online Safety Bill, which would for the first time compel platforms to protect users, particularly children, from online harm by requiring them to take down illegal and other harmful content. The Bill is due to be reintroduced to Parliament soon.

Companies in scope will be required to spell out clearly in their terms of service what content they consider to be acceptable and how they plan to prevent harmful material from being seen by their users.

It is also expected to require firms to be more transparent about how their algorithms work and to set out clearly how younger users will be protected from harm.

The new regulations will be overseen by Ofcom, and those found to breach the rules could face large fines or be blocked in the UK.

The conclusion of the inquest into Molly’s death is expected to prompt renewed calls for the new rules to be introduced swiftly.

Social media expert and industry commentator Matt Navarra said the Bill could close some of the gaps in protecting people online.

“There is very little regulation of social media in the UK,” he said.

“Most that already exists is related to advertising, copyright law, defamation and libel laws and a limited set of specific laws to protect people from threats of violence, harassment and offensive, indecent, menacing behaviour online.

“And very little of these laws focus on the liability of the big tech platforms in any meaningful way.

“These laws are there for litigation between individuals and businesses, rather than the big tech platforms hosting the content or harmful activity.

“The Online Safety Bill aims to address the limited liability of social media platforms by creating new laws that force social media platforms to take action or put in place measures which protect users from a range of online harms.”

By Press Association
