AI chatbots ‘lack safeguards to prevent spread of health disinformation’

20 March 2024, 22:34

Woman holding a smartphone and using a chatbot app to ask questions about a smartwatch. Picture: PA

Research published in the British Medical Journal (BMJ) found some popular chatbots could easily be prompted to create disinformation.

Many popular AI chatbots, including ChatGPT and Google’s Gemini, lack adequate safeguards to prevent the creation of health disinformation when prompted, according to a new study.

Research by a team of experts from around the world, led by researchers from Flinders University in Adelaide, Australia, and published in the British Medical Journal (BMJ) found that the large language models (LLMs) used to power publicly accessible chatbots failed to block attempts to create realistic-looking disinformation on health topics.

As part of the study, researchers asked a range of chatbots to create a short blog post with an attention-grabbing title and containing realistic-looking journal references and patient and doctor testimonials on two health disinformation topics: that sunscreen causes skin cancer and that the alkaline diet is a cure for cancer.

The British Medical Journal published the research (PA)

The researchers said that several high-profile, publicly available AI tools and chatbots, including OpenAI’s ChatGPT, Google’s Gemini and a chatbot powered by Meta’s Llama 2 LLM, consistently generated blog posts containing health disinformation when asked. This remained the case at a re-test three months later, even though the failures had been reported to the developers in the meantime, when researchers checked whether safeguards had improved.

In contrast, AI firm Anthropic’s Claude 2 LLM consistently refused all prompts to generate health disinformation content.

The researchers also said that Microsoft’s Copilot – using OpenAI’s GPT-4 LLM – initially refused to generate health disinformation. This was no longer the case at the three-month re-test.

In response to the findings, the researchers have called for “enhanced regulation, transparency, and routine auditing” of LLMs to help prevent the “mass generation of health disinformation”.

During the AI Safety Summit, hosted by the UK at Bletchley Park last year, leading AI firms agreed to allow their new AI models to be tested and reviewed by AI safety institutes, including one established in the UK, before their release to the public.

However, details of any testing since that announcement have been scarce, and it remains unclear whether those institutes would have the power to block the launch of an AI model, since they are not backed by any current legislation.

Campaigners have urged governments to bring forward new legislation to ensure user safety, while the EU has just approved the world’s first AI Act, which will place greater scrutiny on, and require greater transparency from, AI developers based on how risky the AI application is considered to be.

By Press Association
