Experts ‘deeply concerned’ as Government agency drops focus on bias in AI

14 February 2025, 12:44

A graphic of a robot hand touching a human hand
Robot and human hands touch and connect on a binary code background (PA)

The AI Safety Institute is being renamed the AI Security Institute to reflect a greater focus on crime and national security issues.

Technology experts have expressed concern that the Government is “pivoting away from ‘safety’ towards ‘national security’” after it announced a rebranding of the AI Safety Institute.

Peter Kyle, the Technology Secretary, rechristened the agency on Friday as the AI Security Institute (AISI), saying it would refocus its work on crime and national security issues.

But while Mr Kyle insisted the AISI’s work “won’t change”, his department revealed it would no longer focus on “bias or freedom of speech”, sparking concern from experts in the field.

Michael Birtwistle, associate director at the Ada Lovelace Institute, said he was “deeply concerned that any attention to bias in AI applications has been explicitly cut out of the new AISI’s scope”.

He said: “A more pared back approach from the Government risks leaving a whole range of harms to people and society unaddressed – risks that it has previously committed to tackling through the work of the AI Safety Institute.

“It’s unclear if there’s still a plan to meaningfully address them, if not in AISI.”

Rishi Sunak delivers a speech at the AI Safety Summit in 2023.
Rishi Sunak launched the AI Safety Institute at the end of 2023, but less than two years later it is being renamed (Justin Tallis/PA)

Pointing to a series of scandals involving bias in AI in Australia, the Netherlands and the UK, Mr Birtwistle said there was a “real risk that inaction on risks like bias will lead to public opinion turning against AI”.

As well as the AISI’s new name, Mr Kyle announced the creation of a new “criminal misuse” team within the institute to tackle risks such as AI being used to create chemical weapons, carry out cyber attacks and enable crimes such as fraud and child sexual abuse.

Crime and security concerns already form part of the institute’s remit, but it currently also covers wider societal impacts of artificial intelligence, the risk of AI becoming autonomous and the effectiveness of safety measures for AI systems.

When the institute was established in 2023, then-prime minister Rishi Sunak said it would “advance the world’s knowledge of AI safety”, including exploring “all the risks from social harms like bias and misinformation, through to the most extreme risks of all”.

Mr Kyle said the AISI’s “renewed focus” on security would “ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values and way of life”.

But Andrew Dudfield, head of AI at fact-checking organisation Full Fact, said the move was “another disappointing downgrade of ethical considerations in AI development that undermines the UK’s ability to lead the global conversation”.

Prime Minister Sir Keir Starmer gesticulates as he gives a speech at Google's London AI Campus.
Sir Keir Starmer has pledged to put AI at the heart of his Government, but the UK declined to join other nations in signing a major international agreement on the technology in Paris earlier this week (Stefan Rousseau/PA)

Describing security and transparency as “mutually reinforcing pillars essential to building public confidence in AI”, Mr Dudfield added: “If the Government pivots away from the issues of what data is used to train AI models, it risks outsourcing those critical decisions to the most powerful internet platforms rather than exploring them in the democratic light of day.”

Friday’s announcement comes after the Government began the year pledging to make the UK a world leader in AI and to put the technology at the heart of Whitehall.

But it also comes in the same week that the UK joined the US in refusing to sign an international agreement on AI at a summit in Paris.

The Government said it had declined to sign the communique issued at the end of the French-hosted AI Action Summit because it had not provided enough “practical clarity” on “global governance” of the technology or addressed “harder questions” around national security.

By Press Association
