
28 February 2025, 17:01
Tech giant Meta has been forced to issue an apology after receiving a flood of complaints concerning gore, violence and dead bodies appearing on Instagram.
Swathes of Instagram users reported a suspected malfunction in the social media platform’s algorithm, which curates what appears for people on the app.
Reels is a TikTok-style feature on the Mark Zuckerberg-owned app that lets users post short video clips.
In a forum on chat room site Reddit, users shared how they had repeatedly been shown extremely graphic and distressing Reels.
One user wrote: “Okay, seriously, what is going on with Instagram lately?

“For the past 24 hours, it feels like Instagram’s algorithm has gone rogue.
“My feed is absolutely packed with violent Reels – like, one after the other. It’s like Instagram is now trying to make me question if I accidentally followed a ‘bloodshed and chaos’ account.
“I get that the platform’s all about ‘engagement’ and ‘trending’, but did they really think this was the way to get people to scroll more?”
Another replied: “Yeah no seriously I’m getting back to back reels on gore and everyone in the comments sections have been saying the same thing.

“Huge algorithm explosion of gore as of 2/26/25. Not sure what this is about but i dont believe its unintentional especially with how many people have been saying the same thing.”
Meanwhile, tech news outlet 404 Media reported that some of the horror Reels included a man being set on fire and a man shooting a cashier at point-blank range.
Meta says it has since fixed the issue and apologised to users for the “mistake”, calling it an “error”.

A spokesperson said: “We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake.”
According to Meta policy, the firm seeks to protect users from disturbing imagery and takes down content that is seen to be especially violent or graphic.
Banned content can include “videos depicting dismemberment, visible innards or charred bodies.” It also prohibits “sadistic remarks towards imagery depicting the suffering of humans and animals.” However, Meta says it does allow for some graphic content to remain on the platform if it helps people condemn and raise awareness about important issues.
These issues range from human rights abuses and armed conflicts to acts of terrorism. Such content may remain visible with limitations, such as warning labels.
Earlier this month, Meta announced it is ditching its fact-checking service on Facebook and Instagram and replacing it with X-inspired "community notes" where users can decide on a post’s accuracy.
The parent company of Facebook, Instagram and Threads said it was ending third-party fact-checking on posts, first in the US and then across international markets.
Meta CEO Mark Zuckerberg said the decision was about "restoring free expression" on its platforms and "reducing mistakes" it said automated content moderation systems were making.
Meta said it believed fact-checking amounted to censorship in some cases, accusing some fact-checkers of being influenced by their own biases.
But independent UK fact-checking charity Full Fact said the decision was likely to make misinformation spread more easily online.