Social media firms ‘not trusted to combat online abuse’
3 September 2021, 00:04
A report from Hope not Hate found that the public supports stricter regulation of tech firms over their efforts to stop online abuse.
Social media companies are not trusted by the public to deal with the problem of online abuse and hateful content, research suggests.
It also found the majority of people in the UK support more regulation on tech firms.
The study by the anti-abuse campaign group Hope not Hate found that 74% of those asked said they did not trust social media companies alone to decide what is extreme content or disinformation when it appears on their platforms.
It found that the issue of online abuse remains a key one among the public, with 73% of those asked saying they were worried about the amount of such content on social media.
And there is strong public support for tougher regulations compelling tech firms to take action against harmful content, with 71% agreeing they should be held legally responsible for the content on their platforms and 73% saying they should be made to remove such content if it appears.
The Government’s draft Online Safety Bill – which would require platforms to abide by a duty of care to users, with large financial penalties for those that fail to do so – is due to come under scrutiny from MPs and peers this month.
The proposals also include plans to force platforms to identify “legal but harmful” content and set out how they intend to police it on their sites, which has raised concerns among some about a possible clampdown on free speech.
But Hope not Hate’s research suggests the public supports the move, with 80% of those asked saying that while they believe in free speech, there must be limits to stop the spread of extremist content online.
“Allowing people to spew hateful and offensive content online is not a way to protect freedom of speech, but rather risks sowing divisions and amplifying the vile views of a tiny minority,” the group’s head of research Joe Mulhall said.
“At present, online speech that causes division and harm is often defended on the basis that to remove it would undermine free speech.
“In reality, allowing the amplification of such speech only erodes the quality of public debate, and causes harm to the groups such speech targets. This defence, in theory and in practice, minimises free speech overall.
“As our polling shows, there is clearly an overwhelming consensus that hateful content, even when legal, is too visible on social media platforms.
“The only way to really make sure that everyone has freedom of speech is to protect anyone who is currently being attacked or marginalised based on characteristics such as race, gender or sexual orientation.
“That’s why continuing to include legal but harmful content in the Online Safety Bill is the best way to ensure social media companies apply effective systems and processes to reduce the promotion of hate and abuse, while preserving freedom of expression.”