Social media moderation: How does it work and what is set to change?

7 August 2024, 16:34

Person holding a smartphone with lots of apps. Picture: PA

Social media sites have been at the heart of the planning and incitement of the riots seen on Britain’s streets.

The role of social media in the violence and disorder on Britain’s streets has become a key issue in recent days, with the moderation and regulation of platforms coming under scrutiny.

Here is a closer look at how content moderation currently works and how regulation of the sector could change it.

– How do social media sites moderate content currently?

All major social media platforms have community rules that they require their users to follow, but how they enforce these rules can vary depending on how their content moderation teams are set up and how they carry out that process.

Most of the biggest sites have several thousand human moderators looking at content that has been flagged to them or has been found proactively by human staff or software and AI-powered tools designed to spot harmful material.

– What are the limitations as things stand?

There are several key issues with content moderation in general: the sheer scale of social media makes it hard to find and remove everything harmful that is posted; moderators – both human and automated – can struggle to spot nuanced or localised context and therefore sometimes mistake the harmful for the innocent; and moderation relies heavily on users reporting content – something that does not always happen in online echo chambers.

Furthermore, the use of encrypted messaging on some sites means not all content is publicly visible to be spotted and reported by other users; instead, platforms must rely on those inside encrypted groups to report potentially harmful content.

Crucially, many tech giants have recently cut their content moderation teams, often because of financial pressures, which has weakened their ability to respond.

At X, formerly Twitter, Elon Musk drastically cut back the site’s moderation staff after taking over the company, both as a cost-saving measure and as part of repositioning the site as a platform that would allow more “free speech”, substantially loosening its policies around prohibited content.

The result is that harmful material can spread on even the biggest platforms, which is why there have long been calls for tougher regulation to force sites to do more.

– So how realistic is it to expect all harmful content to be removed?

Under the current set-up, not very.

In many instances, social media platforms are taking action against posts inciting or encouraging the disorder.

As well as breaching platforms’ own rules, offences around incitement of violence are covered by the Public Order Act 1986, meaning the police as well as social media firms can take action over such posts.

However, the speed at which this harmful or misleading content spreads can make it difficult for platforms to get every post taken down or have its visibility restricted before it is seen by many other users.

New regulation of social media platforms – the Online Safety Act – became law in the UK last year but has not yet fully come into effect.

Once in place, it will require platforms to take “robust action” against illegal content and activity, including around offences such as inciting violence.

– So how will the Online Safety Act help?

The new laws will, for the first time, make firms legally responsible for keeping users, and in particular children, safe when they use their services.

Overseen by Ofcom, the new laws will not specifically focus on the regulator removing pieces of content itself, but it will require platforms to put in place clear and proportionate safety measures to prevent illegal and other harmful content from appearing and spreading on their sites.

Crucially, clear penalties will be in place for those who do not comply with the rules.

Ofcom will have the power to fine companies up to £18 million or 10% of their global revenue, whichever is greater – meaning potentially billions of pounds for the largest platforms.

In more severe cases, Ofcom will be able to seek a court order imposing business disruption measures, which could include forcing internet service providers to limit access to the platform in question.

And most strikingly, senior managers can be held criminally liable for failing to comply with Ofcom in some instances.

It is a set of penalties intended to compel platforms to take greater action on harmful content.

In an open letter published on Wednesday, Ofcom urged social media companies to do more to deal with content stirring up hatred or provoking violence on Britain’s streets.

The watchdog said: “In a few months, new safety duties under the Online Safety Act will be in place, but you can act now – there is no need to wait to make your sites and apps safer for users.”

The letter, signed by Ofcom director for online safety Gill Whitehead, said it would publish guidance “later this year” setting out what social media companies are required to do to tackle “content involving hatred, disorder, provoking violence or certain instances of disinformation”.

It added: “We expect continued engagement with companies over this period to understand the specific issues they face and we welcome the proactive approaches that have been deployed by some services in relation to these acts of violence across the UK.”

By Press Association
