Facebook and Google ‘failing to take down scam adverts’

26 April 2021, 00:04

Online Christmas shopping. Picture: PA

New research from Which? claims many people who reported fraudulent adverts did not see them removed from online platforms.

Facebook and Google have been accused of failing to take action against scam adverts following a new study which claimed many were left online even after being reported.

Research from consumer group Which? found that 34% of people who reported an advert to Google after falling victim to it said the ad was not taken down, with 26% of Facebook users saying the same.

Both companies said they remove fraudulent adverts and such content was not allowed on their platforms, but Which? said the current reactive approach was not working and called on the Government to include online scams within the scope of its upcoming Online Safety Bill.

The consumer group’s research showed that just over a quarter (27%) of those who had fallen victim to a scam via an advert on a search engine or social media did so on Facebook, with 19% saying it happened through Google.

The study also found there was a low level of engagement with scam reporting processes on these platforms, with 43% of those who had fallen victim to a scam saying they did not report it to the platform on which they saw it.

Some 31% of those who did not report a scam advert said it was because they did not think anything would be done about it.

Which? said this showed that a new approach was needed and called for online platforms to be given legal responsibility for preventing fake and fraudulent adverts from appearing on their sites, rather than reacting to them only after they have caused harm.

“Our latest research has exposed significant flaws with the reactive approach taken by tech giants including Google and Facebook in response to the reporting of fraudulent content – leaving victims worryingly exposed to scams,” Which? consumer rights expert Adam French said.

“Which? has launched a free scam alert service to help consumers familiarise themselves with the latest tactics used by fraudsters, but there is no doubt that tech giants, regulators and the government need to go to greater lengths to prevent scams from flourishing.

“Online platforms must be given a legal responsibility to identify, remove and prevent fake and fraudulent content on their sites. The case for including scams in the Online Safety Bill is overwhelming and the Government needs to act now.”

In response to the research, a Facebook company spokesperson said: “Fraudulent activity is not allowed on Facebook and we have taken action on a number of pages reported to us by Which?.

“Our 35,000 strong team of safety and security experts work alongside sophisticated AI to proactively identify and remove this content, and we urge people to report any suspicious activity to us.

“Our teams disable billions of fake accounts every year and we have donated £3 million to Citizens Advice to deliver a UK Scam Action Programme.”

In a statement, Google said: “We’re constantly reviewing ads, sites and accounts to ensure they comply with our policies. As a result of our enforcement actions (proactive and reactive), our team blocked or removed over 3.1 billion ads for violating our policies.

“As part of the various ways we are tackling bad ads, we also encourage people to flag bad actors they’re seeing via our support tool where you can report bad ads directly. It can easily be found on Search when looking for ‘How to report bad ads on Google’ and filling out the necessary information. It is simple for consumers to provide the required information for the Google ads team to act accordingly.

“We take action on potentially bad ads reported to us and these complaints are always manually reviewed.”

“We have strict policies that govern the kinds of ads that we allow to run on our platform. We enforce those policies vigorously, and if we find ads that are in violation we remove them. We utilise a mix of automated systems and human review to enforce our policies.”

By Press Association
