AI could lead to patient harm, researchers suggest

11 April 2025, 16:04

Doctor using AI algorithm and machine learning to detect pneumonia. Picture: PA

The findings highlight the ‘inherent importance’ of ‘applying human reasoning and assessment to AI judgements’, experts said.

Artificial intelligence (AI) could lead to patient harm if the development of models focuses more on accurately predicting outcomes than on improving treatment decisions, researchers have suggested.

Experts warned the technology could create “self-fulfilling prophecies” when trained on historic data that does not account for demographics or the under-treatment of certain medical conditions.

They added that the findings highlight the “inherent importance” of applying “human reasoning” to AI decisions.

Academics in the Netherlands looked at outcome prediction models (OPMs), which use a patient’s individual features, such as health history and lifestyle information, to help medics weigh up the benefits and risks of treatment.

AI can perform these tasks in real-time to further support clinical decision-making.

The team created mathematical scenarios to test how AI may harm patient health, and concluded that these models “can lead to harm”.

“Many expect that by predicting patient-specific outcomes, these models have the potential to inform treatment decisions and they are frequently lauded as instruments for personalised, data-driven healthcare,” researchers said.

“We show, however, that using prediction models for decision-making can lead to harm, even when the predictions exhibit good discrimination after deployment.

“These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcome of these patients does not diminish the discrimination of the model.”

The article, published in the data-science journal Patterns, also suggests that AI model development “needs to shift its primary focus away from predictive performance and instead toward changes in treatment policy and patient outcome”.
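
The mechanism the researchers describe can be sketched in a few lines of Python. The simulation below is purely illustrative, with invented numbers, feature names and treatment effects rather than anything taken from the Patterns paper: a model is trained on historical data in which one group was under-treated, treatment is then withheld from patients the model flags as high risk, and the model’s discrimination (AUC) remains good even though the policy itself worsens those patients’ outcomes.

```python
# Toy sketch of a "harmful self-fulfilling prophecy". All figures and
# variable names are illustrative assumptions, not from the Patterns paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20_000

# One risk feature plus a demographic flag; historically, the flagged
# group was under-treated, so its recorded outcomes are worse.
risk = rng.normal(size=n)
group = rng.integers(0, 2, size=n)                      # 1 = under-treated group
treated_hist = rng.random(n) < np.where(group == 1, 0.3, 0.7)
p_bad = 1 / (1 + np.exp(-(risk - 1.5 * treated_hist)))  # treatment helps
bad_outcome = rng.random(n) < p_bad

# Train an outcome prediction model on the historical data.
X = np.column_stack([risk, group])
model = LogisticRegression().fit(X, bad_outcome)

# Deployment policy: withhold treatment from predicted-poor-outcome patients.
risk_new = rng.normal(size=n)
group_new = rng.integers(0, 2, size=n)
X_new = np.column_stack([risk_new, group_new])
pred_bad = model.predict_proba(X_new)[:, 1] > 0.5
treated_new = ~pred_bad                                 # flagged patients go untreated
p_bad_new = 1 / (1 + np.exp(-(risk_new - 1.5 * treated_new)))
bad_new = rng.random(n) < p_bad_new

# The prophecy fulfils itself: withholding treatment makes the flagged
# patients do badly, so post-deployment discrimination still looks good.
print("pre-deployment bad-outcome rate: ", bad_outcome.mean())
print("post-deployment bad-outcome rate:", bad_new.mean())
print("post-deployment AUC:", roc_auc_score(bad_new, model.predict_proba(X_new)[:, 1]))
```

In this toy setting the model keeps scoring well precisely because the treatment decisions it drives make its own predictions come true, which is the failure mode the researchers warn that accuracy metrics alone cannot detect.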

Reacting to the risks outlined in the study, Dr Catherine Menon, a principal lecturer at the University of Hertfordshire’s department of computer science, said: “This happens when AI models have been trained on historical data, where the data does not necessarily account for such factors as historical under-treatment of some medical conditions or demographics.

“These models will accurately predict poor outcomes for patients in these demographics.

“This creates a ‘self-fulfilling prophecy’ if doctors decide not to treat these patients due to the associated treatment risks and the fact that the AI predicts a poor outcome for them.

“Even worse, this perpetuates the same historic error: under-treating these patients means that they will continue to have poorer outcomes.

“Use of these AI models therefore risks worsening outcomes for patients who have typically been historically discriminated against in medical settings due to factors such as race, gender or educational background.

“This demonstrates the inherent importance of evaluating AI decisions in context and applying human reasoning and assessment to AI judgements.”

AI is currently used across the NHS in England to help clinicians read X-rays and CT scans, freeing up staff time, as well as to speed up the diagnosis of strokes.

In January, Prime Minister Sir Keir Starmer pledged that the UK will be an “AI superpower” and said the technology could be used to tackle NHS waiting lists.

Ian Simpson, a professor of biomedical informatics at the University of Edinburgh, highlighted that AI OPMs “are not that widely used at the moment in the NHS”.

“Here they tend to be used in parallel with existing clinical management policies and often either for assisting diagnostics and/or speeding up processes like image segmentation,” he said.

Ewen Harrison, a professor of surgery and data science and co-director of the centre for medical informatics at the University of Edinburgh, said: “While these tools promise more accurate and personalised care, this study highlights one of a number of concerning downsides: predictions themselves can unintentionally harm patients by influencing treatment decisions.

“Say a hospital introduces a new AI tool to estimate who is likely to have a poor recovery after knee replacement surgery. The tool uses characteristics such as age, body weight, existing health problems and physical fitness.

“Initially, doctors intend to use this tool to decide which patients would benefit from intensive rehabilitation therapy.

“However, due to limited availability and cost, it is decided instead to reserve intensive rehab primarily for patients predicted to have the best outcomes.

“Patients labelled by the algorithm as having a ‘poor predicted recovery’ receive less attention, fewer physiotherapy sessions and less encouragement overall.”

He added that this leads to a slower recovery, more pain and reduced mobility in some patients.
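
As a hypothetical sketch of that allocation policy (the function name, scores and effect sizes below are invented for illustration, not taken from any real NHS tool), ranking patients by predicted recovery and rationing rehab slots to the top of the list is enough to produce the feedback Prof Harrison describes:

```python
# Illustrative sketch of rationing intensive rehab by predicted recovery.
# All names and numbers are assumptions made up for this example.
import numpy as np

def allocate_rehab(predicted_recovery: np.ndarray, n_slots: int) -> np.ndarray:
    """Return a boolean mask: True for patients given intensive rehab."""
    order = np.argsort(predicted_recovery)[::-1]   # best predicted first
    mask = np.zeros(len(predicted_recovery), dtype=bool)
    mask[order[:n_slots]] = True
    return mask

rng = np.random.default_rng(1)
predicted = rng.random(1000)                       # model's recovery scores
gets_rehab = allocate_rehab(predicted, n_slots=300)

# Suppose rehab itself improves recovery: the "poor predicted recovery"
# group now does worse partly *because* it was denied rehab.
true_recovery = 0.5 * predicted + 0.4 * gets_rehab + 0.1 * rng.random(1000)
print("mean recovery, rehab:   ", true_recovery[gets_rehab].mean())
print("mean recovery, no rehab:", true_recovery[~gets_rehab].mean())
```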

“These are real issues affecting AI development in the UK,” Prof Harrison said.

By Press Association
