UK vulnerable to misinformation, fact-checking charity warns

9 April 2024, 00:04


Full Fact’s annual report says the UK is ill-equipped to tackle misleading content.

The UK is highly vulnerable to misinformation and is currently not properly equipped to combat it, the annual report from fact-checking charity Full Fact says.

The report warns that without urgent action, the UK risks falling behind the pace of international progress in protecting citizens from misinformation.

It says that a combination of significant gaps in the Online Safety Act and the rapid rise of generative AI means a fundamental overhaul of the UK’s legislative and regulatory framework is needed in order to adequately combat misinformation and disinformation.

Generative AI, most notably chatbots such as ChatGPT, has become a more prominent part of daily life over the past 18 months as it has been made widely available through content creation and productivity tools – including tools used to create misleading images, video and audio.

In its report, Full Fact warns that this technology could be used to power disinformation campaigns that disrupt or undermine confidence in the result of forthcoming elections.

A number of politicians, including Prime Minister Rishi Sunak, Labour leader Sir Keir Starmer and the mayor of London Sadiq Khan have been the subjects of misleading content – or deepfakes – in recent times.

The report says that generative AI could play a role in the upcoming general election by making it cheap, easy and quick to spread content so plausible that it cannot easily or rapidly be identified as false.

The charity says that concerns around the rapid evolution of AI technology are being exacerbated by what it calls “fundamental gaps” in the Online Safety Act, which passed into law last year and is designed to protect internet users from encountering online harms.

“Despite promises that the regulation would apply to disinformation and misinformation that could cause harm to individuals, such as anti-vaccination content, there are only two explicit areas of reference to misinformation in the final Act,” Full Fact’s report says.

“One is that a committee should be set up to advise the regulator, Ofcom, on policy towards misinformation and disinformation, and how providers of regulated services should deal with it.

“The other is that Ofcom’s existing media literacy duties should expand to cover public awareness of misinformation and disinformation, and the nature and impact of harmful content.

“This is not good enough, given the scale of the challenge we face.”

Chris Morris, chief executive of Full Fact, said: “The Government’s failed promise to tackle information threats has left us facing down transformative, digital challenges with an analogue toolkit.

“An Online Safety Act with just two explicit areas of reference to misinformation cannot be considered fit for purpose in the age of AI.”

The charity has listed 15 recommendations for government, political parties, regulators and tech companies to protect the UK’s information environment, including urging the Government either to amend the Online Safety Act or to bring in new legislation to better target misinformation and disinformation.

The recommendations also call on political parties to commit to using generative AI responsibly and transparently.

“A better future is possible, but only if all political parties show that they are serious about restoring trust in our system by campaigning transparently, honestly, and truthfully,” Mr Morris said.

A spokesperson for the Department for Science, Innovation and Technology said: “We are working extensively across government to ensure we are ready to rapidly respond to misinformation.

“The Online Safety Act has been designed to be tech-neutral and future-proofed, to ensure it keeps pace with emerging technologies.

“Once implemented, it will require social media platforms to swiftly remove illegal misinformation and disinformation, including where it is AI-generated, as soon as they become aware of it.

“In addition to the work of our defending democracy taskforce, the digital imprints regime will also require certain political campaigning digital material to have a digital imprint making clear to voters who is promoting the content.”

By Press Association
