Innocent people losing bank accounts thanks to AI, lawyer warns

11 July 2023, 11:44

ATM withdrawals. Picture: PA

Systems designed to protect against fraud are leading to innocent people’s bank accounts being closed with little warning.

Artificial intelligence and heightened sensitivity about fraud are causing banks to shut down the accounts of innocent people, a lawyer has warned.

High-profile cases involving Jeremy Hunt and Nigel Farage have led to ministers calling for an urgent review of how banks treat politicians.

But Jeremy Asher, a solicitor at law firm Setfords, said members of the public were also having their lives ruined by the sudden and unexplained closure of their bank accounts thanks to problems with the way banks guard against the risk of fraud.

Suspicions about fraudulent activity are uploaded to privately-operated databases that banks then check when deciding whether to approve applications for loans or a new account, or when reviewing existing accounts.

Mr Asher said: “They have to have a reasonable suspicion to load a marker, but that’s far lower than proving a case to a criminal standard.

“When they do that, people’s bank accounts close, their applications fail, they can’t get credit and they don’t know about this because it’s all hidden.”

While most fraud markers are loaded correctly, and Mr Asher stressed that they do play an important role in stopping fraudsters, the system is not perfect and “some innocent people slip through the net”.

The markers are supposed to be advisory, with banks then carrying out further checks, but Mr Asher, who specialises in overturning fraud markers, said increasing use of automated decision-making, designed to cut costs, meant in practice they had become “the be-all-and-end-all”.

He said: “In particular when you’re looking at applications, you will find a lot of those organisations use automated processes and they just won’t look behind it.”
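The distinction between the two approaches can be sketched in a few lines of code. This is an illustrative sketch only: the Applicant type, the function names and the decision logic below are invented for the example and do not describe any bank’s or Cifas’s actual systems.

```python
# Hypothetical illustration: a fraud marker treated as decisive versus
# a fraud marker treated as advisory. Not based on any real system.

from dataclasses import dataclass


@dataclass
class Applicant:
    name: str
    has_fraud_marker: bool  # a hit against a shared fraud-marker database


def decide_fully_automated(applicant: Applicant) -> str:
    # The pattern criticised in the article: the marker becomes the
    # "be-all-and-end-all" and any hit means automatic rejection.
    return "reject" if applicant.has_fraud_marker else "approve"


def decide_with_marker_as_advisory(applicant: Applicant) -> str:
    # The intended pattern: a marker only triggers further checks by a
    # human reviewer, who can look behind it before a final decision.
    return "refer_for_manual_review" if applicant.has_fraud_marker else "approve"


if __name__ == "__main__":
    flagged = Applicant(name="example applicant", has_fraud_marker=True)
    print(decide_fully_automated(flagged))          # reject
    print(decide_with_marker_as_advisory(flagged))  # refer_for_manual_review
```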

Customers are often unaware of the reason their account has been closed or their loan application has been refused, with banks saying they are unable to disclose the fact there is a fraud marker against their name in case it tips them off about a criminal investigation.

The number of fraud markers has also been on the rise in recent years. In 2017, Mr Asher said, 305,000 markers were uploaded to the largest fraud database, run by fraud prevention service Cifas. By last year, that figure had risen to 409,000.

Chancellor of the Exchequer Jeremy Hunt said he had been denied a Monzo account because of his position as a ‘politically exposed person’ (Aaron Chown/PA)

Allowing for some people having multiple markers against their names, Mr Asher estimated that at least one million people could be listed on one of these databases.

He also suggested the situation could get worse as banks become more risk-averse because of a new offence of failing to prevent fraud, which the Government is seeking to introduce as part of the Economic Crime and Corporate Transparency Bill currently going through Parliament.

Anti-fraud campaigners argue the new offence will make it easier to prosecute organisations that fail to take steps to stop fraud, but Mr Asher said it could lead to banks taking a “zero risk tolerance” approach to customers.

He said: “Unless they are absolutely 100% positive you’re OK, they are going to walk away from you.”

Recent high-profile cases have focused on the issue of “politically exposed persons” (PEPs) – those considered to be at higher risk of fraud or money laundering because of their political connections – with Jeremy Hunt, the Chancellor, recently saying he suspected this was the reason he had been refused an account by Monzo.

But Mr Asher said the processes involved were similar, with increasingly risk-averse banks relying on automated processes to screen out customers.

The Financial Conduct Authority is already conducting a review of how banks treat PEPs, with City minister Andrew Griffith urging the watchdog to prioritise the issue to ensure UK politicians are not unduly denied access to banking services.

Mr Asher added: “There needs to be a bit of slack and a reality check about fraud risk.

“Using automated systems and AI is a tool, but it’s just that, it shouldn’t be the be-all-and-end-all of the decision-making process. There needs to be more time spent on decisions.”

A spokesperson for Cifas said: “Cifas members file individuals to our database in order to share intelligence on a person’s previous fraudulent conduct.

“Evidence to support a Cifas marker is robust and must meet our standard of proof, and there are strict rules and guidance around the use of Cifas markers in automated systems.

“A Cifas marker is a trigger for a member to investigate a customer’s conduct, and it is the decision of a member whether or not to allow an account to be opened or closed.

“We are transparent about how we operate and there are clear processes in place should a consumer wish to understand what data is on the Cifas database or to appeal a marker.”

By Press Association
