AI Safety Summit: What have we learned?

2 November 2023, 21:14

Rishi Sunak at the AI Safety Summit (PA)

The first global gathering on AI safety has concluded at Bletchley Park.

The first AI Safety Summit has come to an end with Rishi Sunak hailing “landmark” agreements and progress on global collaboration around artificial intelligence.

But what did we learn during the two-day summit at Bletchley Park?

– Rishi Sunak wants to make the UK a ‘global hub’ for AI safety

As the summit closed, the Prime Minister made a notable announcement around the safe testing and rollout of AI.

The UK’s new AI Safety Institute would be allowed to test new AI models developed by major firms in the sector before they are released.

The agreement, backed by a number of governments from around the world as well as major AI firms including OpenAI and Google DeepMind, will see external safety testing of new AI models against a range of potentially harmful capabilities, including risks to critical national security and wider societal harms.

The UK institute will work closely with its newly announced US counterpart.

In addition, a UN-backed global panel will put together a report on the state of the science of AI, looking at existing research and raising any areas that need prioritising.

Then there is the Bletchley Declaration, signed by all attendees on day one of the summit – including the US and China – which acknowledged the risks of AI and pledged to develop safe and responsible models.

It all left the Prime Minister able to say at the end of the summit that the AI Safety Institute, and the UK, would act as a “global hub” on AI safety.

– Elon Musk thinks AI is one of the biggest threats facing humanity

The outspoken billionaire’s visit to the summit was seen as a major endorsement of its aims by the UK Government, and while at Bletchley Park, the Tesla and SpaceX boss reiterated his long-held concerns around the rise of AI.

Elon Musk during the AI Safety Summit (Leon Neal/PA)

Having suggested a developmental pause earlier this year, he called the technology “one of the biggest threats” to the modern world because “we have for the first time the situation where we have something that is going to be far smarter than the smartest human”.

He said the summit was “timely” given the nature of the threat, and suggested a “third-party referee” in the sector to oversee the work of AI companies.

– Governments from around the world have acknowledged the risks too

Another key moment of the summit came early on day one with the announcement of the Bletchley Declaration, signed by all the nations in attendance, affirming their efforts to work together on the issue.

The declaration says “particular safety risks” arise around frontier AI – the general purpose models which are likely to exceed the capabilities of the AI models we know today.

It warns that substantial risks may arise from “potential intentional misuse” or from losing control of such systems, and names cybersecurity, biotechnology and disinformation as particular areas of concern.

To respond to these risks it says countries will “resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe and supports the good of all through existing international fora and other relevant initiatives”.

Many experts have noted that this is only the start of the conversation on AI, but a promising one.

– A network of global safety institutes could be the first step towards wider AI regulation

Mr Sunak laid out plans for the UK’s AI Safety Institute at the close of the summit, and how it will evaluate and test new AI models before and after they are released.

This week, the US also confirmed plans to create its own institute, and both countries have pledged that the organisations will work in partnership.

Collaboration was a key theme of the summit, both in the Bletchley Declaration and in the “state of the science” report on AI, to which all 28 countries at the event will each recommend an expert for the report’s global panel.

With more countries expected to create their own institutes, a wider network of expert safety groups collaborating on and examining advances in AI could pave the way for a framework of more binding rules on AI development, applied around the world.

– There are more safety summits planned

Before the Bletchley Park summit, the Government said it wanted to start a global conversation to continue over the coming years given the speed of AI’s development.

Rishi Sunak at Bletchley Park (Justin Tallis/PA)

That goal appears to have been achieved, with two more summits confirmed for next year: a virtual mini-summit hosted by South Korea in around six months, followed by a full summit in France a year from now.

– Some unanswered questions remain

Getting the US, the EU and China to all sign the Bletchley Declaration was a “massive deal”, Technology Secretary Michelle Donelan said at the summit.

But some commentators have already questioned whether political tensions between nations can be truly put aside to collaborate over AI.

China was not included in some of the discussions on the second day of the summit, which were held among “like-minded governments” on AI safety testing.

Questions also remain over plans to combat the impact AI is already having on daily life, notably on jobs.

Critics have questioned why the summit focused only on longer-term AI technologies, and not the generative AI apps which some believe are already threatening industries including publishing and administrative work, as well as creative sectors.

Even by the end of the summit, discussion on the topic had been sparse.

It remains unclear how much power the UK’s AI Safety Institute will have when it comes to stopping the release of AI models it believes could be unsafe.

The new agreement on safety testing is voluntary, and the Prime Minister admitted that “binding requirements” are likely to be needed to regulate the technology, but said it was important to move quickly now rather than wait for legislation.

But the true power of the institute and the agreements made during the summit will not be known until an AI model appears that raises concerns among the new safety bodies.

By Press Association
