Government urged to address AI ‘risks’ to avoid ‘spooking’ public

31 August 2023, 00:04

ChatGPT. Picture: PA

MPs also urged the Government to ‘move with greater urgency’ when it comes to addressing potential risks associated with the technology.

The Government must address the risks associated with artificial intelligence (AI) – including potential threats to national security and the perpetuation of “unacceptable” societal biases – to ensure the public is not “spooked” by the technology, MPs have said.

The Science, Innovation and Technology Committee (SITC) said there are “many opportunities” for AI to be beneficial, but the technology also presents “many risks to long-established and cherished rights”.

Overcoming these is vital to securing public safety and confidence in the technology, as well as to positioning the UK “as an AI governance leader”.

In October, the SITC opened its inquiry into how AI should be regulated, examining the technology’s impact on society and the economy.

It said that while AI has been debated since “at least” the 1950s, it is ChatGPT, launched last November, “that has sparked a global conversation”.

SITC chairman Greg Clark said: “Artificial intelligence is already transforming the way we live our lives and seems certain to undergo explosive growth in its impact on our society and economy.

“AI is full of opportunities, but also contains many important risks to long-established and cherished rights – ranging from personal privacy to national security – that people will expect policymakers to guard against.”

Mr Clark said the challenges identified by the committee “must be addressed” if “public confidence in AI is to be secured”.

The 12 major challenges outlined in the SITC report are:

– Bias – AI introducing or perpetuating “unacceptable” societal biases
– Privacy – AI allowing people to be identified or sharing personal information
– Misrepresentation – the generation of material by AI that “deliberately misrepresents someone’s behaviour, opinions or character”
– Access to data – AI requires large datasets which are held by few organisations
– Access to compute – powerful AI requires significant computer power, which is limited
– ‘Black box’ challenge – AI cannot always explain why it produces a particular result, which is an issue for transparency
– Open source challenges – requiring code to be openly available could promote transparency, but allowing it to be proprietary may concentrate market power
– Intellectual property and copyright – Some tools use other people’s content
– Liability – If AI is used by third parties to cause harm, policy must establish who bears liability
– Employment – AI will disrupt jobs
– International co-ordination – the development of AI governance frameworks must be international
– Existential challenges – some people think AI is a “major threat” to human life and governance must provide protections for national security

Mr Clark said no single risk in the document takes priority over the others, and they “all have to be addressed together”.

“It’s not the case if you just deal with one, or half of them, that everyone can relax,” he added.

Greg Clark said all the risks outlined by the committee ‘must be addressed together’ (James Manning/PA)

In March, a white paper outlining a “pro-innovation approach to AI regulation” was presented to Parliament by Michelle Donelan, the Secretary of State for Science, Innovation and Technology.

The document included five principles on AI – safety, security and robustness; fairness; transparency and explainability; accountability and governance; and contestability and redress.

However, Mr Clark said things have moved on from five months ago and the challenges outlined by SITC are more “concrete”.

“The challenges we’ve laid out are much more concrete and the Government needs to address them,” he added.

“It’s a challenge for the Government, but it’s very important that the development of the technology doesn’t outpace the development of policy thinking, to make sure that we can benefit and we’re not harmed by it.

“You need to drive the policy thinking at the same time as the tech development. If the public lose confidence and are spooked by AI, then there will be a reaction standing in the way of some of the benefits.”

The SITC also warned that legislation must be presented to Parliament during its next session and ahead of the general election, which is expected to take place in 2024.

It added that delays “would risk the UK, despite the Government’s good intentions, falling behind other jurisdictions”, such as the USA and European Union.

The Global AI Safety Summit – which is being held at Bletchley Park in November – is a “golden opportunity” for AI governance, according to SITC.

However, Mr Clark added: “If the Government’s ambitions are to be realised and its approach is to go beyond talks, it may well need to move with greater urgency in enacting the legislative powers it says will be needed.”

The SITC will publish its final recommendations on AI policy “in due course”.

A Government spokesperson said: “AI has enormous potential to change every aspect of our lives, and we owe it to our children and our grandchildren to harness that potential safely and responsibly.

“That’s why the UK is bringing together global leaders and experts for the world’s first major global summit on AI safety in November – driving targeted, rapid international action on the guardrails needed to support innovation while tackling risks and avoiding harms.

“Our AI Regulation White Paper sets out a proportionate and adaptable approach to regulation in the UK, while our Foundation Model Taskforce is focused on ensuring the safe development of AI models with an initial investment of £100 million – more funding dedicated to AI safety than any other government in the world.”

By Press Association
