Hate speech prevalence on Facebook declining, policy enforcement report says
18 August 2021, 18:04
The firm’s latest community standards enforcement report has been published.
The prevalence of hate speech on Facebook is decreasing, the social media giant has claimed, as it published its latest report on how the site enforces its rules.
The firm said hate speech was now appearing in around five posts per 10,000, down from between five and six per 10,000 in the first three months of this year.
It is the third consecutive quarter in which hate speech prevalence has fallen, and the decline comes despite ongoing criticism of social media platforms over their response to specific abuse incidents, including the racist abuse directed at black England footballers following the Euro 2020 final.
Facebook attributed the decline to improvements in its proactive detection technology, noting that removals of hate speech content on both Facebook and Instagram have increased more than 15-fold since it began reporting such figures.
The firm said these changes meant more than 90% of the content it took down across 12 of its 13 policy areas was removed before any user saw it or reported it.
According to Facebook’s latest community standards enforcement report, in the second quarter of this year the site removed 31.5 million pieces of hate speech, up from 25.2 million in the first quarter.
Facebook vice president of integrity Guy Rosen said “investments in AI enable us to detect more kinds of hate speech violations on Facebook and Instagram”.
Mr Rosen added that the company was “committed to sharing meaningful data so we can be held accountable for our progress, even if the data shows areas where we need to do better”.
He said the improvements around detection had also helped the firm better enforce its policies across different languages.
Elsewhere in its report, Facebook said it removed 2.3 million pieces of child nudity and physical abuse content from Facebook and 458,000 pieces from Instagram, as well as 25.7 million pieces of child sexual exploitation content from Facebook and 1.4 million pieces from Instagram.
In terms of misinformation relating to Covid-19, Facebook said more than 20 million pieces of content had been removed since the start of the pandemic last year, as well as 3,000 accounts, pages and groups for breaching rules around Covid-19 or vaccine misinformation.
The social media company also reported a sharp increase in the number of suicide and self-injury posts being removed – 16.8 million, up from 5.1 million in the first three months of this year – which Facebook said was due to a “technical fix” that allowed the platform to go back and catch violating content it had previously missed.