ChatGPT maker among AI firms to publish safety policies in transparency push

27 October 2023, 11:24

The ChatGPT website (Picture: PA)

OpenAI is one of six leading AI companies to have published their responses to a Government request for details of their AI safety policies.

Leading AI companies, including ChatGPT maker OpenAI and Google DeepMind, have published their safety policies following a request from Technology Secretary Michelle Donelan.

The group also includes AI firm Anthropic, along with tech giants Amazon, Meta and Microsoft, which have each published their policies on nine areas of safety around AI development proposed by the Government.

It comes before the UK’s AI Safety Summit next week, with the Government wanting to use the safety policies as a way of informing discussions at the summit and encouraging the sharing of best practice within the AI community.

The Government is keen to establish a set of safety processes for AI firms to follow as it looks to position the UK as a world leader in the development of safe artificial intelligence.

On Thursday, Prime Minister Rishi Sunak used a major speech on AI to declare that establishing safety principles around the technology should be a global priority on a par with pandemics and preventing nuclear war.

Prime Minister Rishi Sunak delivers a speech setting out how he will address the dangers presented by artificial intelligence (Peter Nicholls/PA)

Mr Sunak also announced that the UK would set up a “world first” AI Safety Institute to examine and evaluate emerging AI models.

On the newly published safety policies from AI firms, Technology Secretary Michelle Donelan said: “This is the start of the conversation and as the technology develops, these processes and practices will continue to evolve, because in order to seize AI’s huge opportunities we need to grip the risks.

“We know openness is key to increasing public trust in these AI models which in turn will drive uptake across society meaning more will benefit, so I welcome AI developers publishing their safety policies today.”

The nine safety policy areas include implementing responsible capability scaling, a new framework for managing emerging AI risks, which would see firms set out potential risks to be monitored ahead of time.

The areas also include the idea that AI firms should employ third parties to try to hack their systems, helping to identify weaknesses or sources of risk.

The Government hopes to bring together a range of world leaders, tech giants and civil society at the AI Safety Summit next week, but reports suggest several major leaders will stay away.

It has been reported that French President Emmanuel Macron is unlikely to attend, while the White House has confirmed that US vice president Kamala Harris will attend the summit rather than President Joe Biden.

Speaking to LBC on Friday morning, Education Secretary Gillian Keegan denied it was embarrassing for the Government that the heads of state were reportedly not expected to attend its flagship summit.

“I think the most important thing actually for the safety summit is to make sure we’ve got the people who are really knowledgeable about AI as well. So yes, there’ll be some of the leaders but it will also be teams of people that are really aware of AI and also the risks of AI,” she said.

The Education Secretary also defended the Government’s decision to invite China to the summit.

“I think the Chinese are one of the world leaders in AI alongside the US… We recognise AI as a security threat as well,” she said.

“We need to really at this stage make sure that we’re all working to understand both what we can do with AI to accelerate the good that AI can do… but also to make sure this is a focus on safety.

“That we work with everyone who’s got some knowledge on this to ensure that we understand the remits of safety.”

By Press Association
