Instagram launches new tool to protect users against abusive messages from trolls
11 August 2021, 15:38
Instagram has launched a Limits tool to protect users against abuse on its platform.
It was designed to give people the ability to automatically hide comments and direct message requests from other users who do not already follow them, or who have only recently started following them.
The feature, which was rolled out globally from Wednesday, also lets people decide how long they would like comments and message requests to be hidden.
Instagram said the update aimed to give people more control while ensuring they felt safe when using the site.
Other changes included strengthened in-app warnings for those who attempt to post abuse, telling users their account could be removed if they continue to send abusive comments.
A Hidden Words filter tool is being rolled out for users around the world too, allowing people to filter out words, phrases and emojis they don't want to see.
Instagram boss Adam Mosseri said in a blog post: "We don't allow hate speech or bullying on Instagram, and we remove it whenever we find it.
"We also want to protect people from having to experience this abuse in the first place, which is why we're constantly listening to feedback from experts and our community, and developing new features to give people more control over their experience on Instagram, and help protect them from abuse.
"We hope these new features will better protect people from seeing abusive content, whether it's racist, sexist, homophobic or any other type of abuse.
"We know there's more to do, including improving our systems to find and remove abusive content more quickly, and holding those who post it accountable."
The tool could be expanded in the future to automatically prompt users to turn on Limits when the platform detects a user may be experiencing a spike in comments and direct messages.
It comes after the platform came under fire following the Euro 2020 final, when three members of the England football team were subjected to racist abuse on the Facebook-owned app.
Instagram's public policy manager for Europe, Tom Gault, acknowledged the impact the incident had.
"Our own research, as well as feedback from public figures, shows that a lot of the negativity directed at high-profile people comes from those who don't follow them or who recently followed them," he said.
"And this is the kind of behaviour that we saw after the Euros final."
At the time, Instagram was criticised for failing to remove the racist comments on the footballers' accounts, telling those who reported them that the comments did not break the platform's rules.
The platform later admitted that the app had "mistakenly" failed to flag some racist comments but said the issue had been addressed.
Prime Minister Boris Johnson spoke with leading social media companies following the onslaught of abuse.
"I said we will not hesitate to go further because they do have the technology to sort this out," he said.
"They can adjust their algorithms and we will use legislation if we have to, just as we used the threat of legislation to stop the European Super League."