New laws needed to protect people from online scams, Which? warns
24 September 2021, 00:04
The consumer group says tech firms have failed to act and now must be forced to do so by governments and regulators.
The Government and regulators must step in and force internet platforms to prevent scams, dangerous products and fake reviews appearing on their sites, consumer group Which? has said.
The consumer champion said it was time to stop asking tech firms to make changes and instead introduce new laws to better protect people.
Which? has published new research which found that more than two thirds (68%) of those asked said they had little or no trust that firms such as Amazon, eBay, Facebook and Google are taking effective steps to protect them from scams or fake products.
The research also found that while 89% of those asked said they used online customer reviews to inform product purchases, just 6% said they had a “great deal” of trust in online platforms taking meaningful steps to stop the spread of fake reviews, and 18% said they did not trust the platforms to do so “at all”.
In response, the group has launched its #JustNotBuyingIt campaign, which is urging the Government to make tech firms take responsibility for the harms taking place on their sites.
Which? said it believes current legislation places too little legal responsibility on platforms, with the result that scammers and criminals are able to sell unsafe products and mislead people, and that there is too little legal incentive to shut down these practices.
“Millions of consumers are being exposed every day to scams, dangerous products and fake reviews,” said Rocio Concha, Which? director of policy and advocacy.
“The world’s biggest tech companies have the ability to protect people from consumer harm but they are simply not taking enough responsibility.
“We are launching our new #JustNotBuyingIt campaign because it is time to stop just asking these platforms to do the right thing to protect consumers – instead the government and regulators must now step in and make them take responsibility by putting the right regulations in place.”
In response to Which?, Amazon said it “strongly disagrees with these assertions, which misrepresent the facts”. The company said it had invested more than 700 million dollars and employed more than 10,000 people to protect customers, adding that it was “relentless” in its efforts and had built “robust programmes and industry-leading tools” to ensure products are safe and reviews are genuine.
“Our powerful machine learning tools and skilled investigators analyse over 10 million review submissions weekly, and last year our teams proactively blocked more than 10 billion suspect listings for various forms of abuse, including non-compliance, before they were published to our store,” Amazon added.
eBay said it had a “long-standing commitment to ensuring consumers have the confidence to shop online safely”, and that its automatic filters, which block unsafe listings, stopped “six million unsafe listings” in 2020.
The platform also said it had established a “regulatory portal”, which enables “authorities, such as Trading Standards, to directly report and remove listings that do not comply with relevant laws and regulations”.
A Facebook company spokesperson said the firm was “dedicating significant resources to tackle the industry-wide issue of online scams”, and was working to detect scam ads, block advertisers and take legal action against them in some cases.
“While no enforcement is perfect, we continue to invest in new technologies and methods to protect people on our service from these scams.
“We have also donated £3 million to Citizens Advice to deliver a UK Scam Action Programme to both raise awareness of online scams and help victims,” Facebook said.
A Google spokesperson said “protecting consumers and legitimate businesses operating in the financial sector” was a priority, and that the firm had been working with the Financial Conduct Authority (FCA) on implementing new measures.
“Having now launched further restrictions requiring financial services advertisers to be authorised by the FCA with carefully controlled exceptions, we will be vigorously enforcing our new policy,” the spokesperson added.
Earlier this week, Google and Facebook suggested that fraud does not fit within the draft Online Safety Bill currently being examined by MPs and peers, despite calls for it to be included.
The draft Bill focuses on user-generated content such as child sexual exploitation and terrorism, but campaigners, including consumer champion Martin Lewis, have argued scams should fall within its remit.
Asked whether they would welcome fraud being covered by the new regulation, Google and Facebook told MPs they thought it would be a “challenge” to make it work because the techniques for user-generated content and scams were quite different.