From ChatGPT to the AI Safety Summit: The year in AI

24 December 2023, 00:04

AI safety summit. Picture: PA

The technology has become increasingly part of everyday lives over the last year.

Artificial intelligence has become one of the biggest issues in tech in 2023, driven by the rise of generative AI and apps such as ChatGPT.

Since OpenAI rolled out ChatGPT to the public in late 2022, awareness of the technology and its potential has exploded – from being discussed in parliaments around the world to being used to write TV news segments.

The public interest in generative AI models has also pushed many of the world’s largest tech companies to introduce their own chatbots, or speak more publicly about how they plan to use AI in the future, while regulators have increased debate around how countries can and should approach the opportunities and potential risks of AI.

In 12 months, conversations around AI have gone from concerns over how it could be exploited by schoolchildren to do their homework for them, to Prime Minister Rishi Sunak hosting the first AI safety summit of nations and technology companies to discuss how to prevent AI from surpassing humanity or even posing an existential threat.

In short, 2023 has been the year of AI.

Much like the technology itself, product launches around AI moved quickly over the last 12 months, with Google, Microsoft and Amazon all following OpenAI in announcing generative AI products in the wake of ChatGPT’s success.

Google unveiled Bard, an app it said would have the edge over rivals in the new AI chatbot space because it was powered by data from Google’s industry-leading search engine, as well as its established Google Assistant virtual helper, found in its smartphones and smart speakers.

On a similar note, Amazon used its big product launch of the year to talk about how it was using AI to make its virtual assistant Alexa sound and respond in a more human fashion – able to understand context and react to follow-up questions more seamlessly.

And Microsoft began the rollout of its new Copilot, its take on combining generative AI with a virtual assistant on Windows, allowing users to ask for help with any task they were doing, from writing a report to organising the open windows on their screen.

Elsewhere, Elon Musk announced the creation of xAI, a new start-up focused on work in the artificial intelligence space.

The first product from that start-up has already appeared in the form of Grok, a conversational AI available to paying subscribers to Musk-owned X, formerly known as Twitter.

Such large-scale developments in the sector could not be ignored by governments and regulators, and debate around regulation of the AI sector has also intensified during the year.

In March, the Government published its White Paper on AI, which proposed using existing regulators in different sectors to carry out AI governance, rather than give responsibility to a new single regulator.

But any AI Bill is still yet to be brought forward, a delay that has been criticised by some experts, who have warned that it risks allowing the technology to go unchecked just as the use of AI tools is exploding.

The Government has said it does not want to rush to legislate while the world is still getting to grips with the potential of AI, and says its approach is more agile and allows for innovation.

In contrast, earlier this month the EU agreed its own set of rules on AI oversight, which will give regulators the power to scrutinise AI models and require details on how they are trained, although the rules are unlikely to become law before 2025.

But Mr Sunak’s desire for the UK to be a key player in AI regulation was highlighted in November as he hosted world leaders and industry figures at Bletchley Park for the world’s first AI Safety Summit.

Mr Sunak and Technology Secretary Michelle Donelan used the two-day summit to discuss the threats of so-called “frontier AI” – cutting-edge aspects of the technology which, in the wrong hands, could be used for nefarious purposes.

The summit saw all the international attendees, including the US and China, sign the Bletchley Declaration, which acknowledged the risks of AI and pledged to develop safe and responsible models.

And the Prime Minister announced the launch of the UK’s AI Safety Institute, alongside a voluntary agreement with leading firms including OpenAI and Google DeepMind, to allow the institute to test new AI models before they are released.

Although not a binding agreement, it has laid the groundwork for AI safety to become an increasingly prominent part of the debate moving forwards.

Elsewhere, the AI industry witnessed a major boardroom drama to end the year, as ChatGPT maker OpenAI sensationally ousted chief executive Sam Altman in late November.

But the decision sparked a backlash among staff, nearly all of whom signed a letter pledging to leave the company and join Altman on a proposed new AI research team at Microsoft if he was not reinstated.

Within days Altman was back at the helm of OpenAI and the board had been reconfigured, with the reasoning behind the saga still unclear.

Since then, the UK’s Competition and Markets Authority (CMA) has asked for views from within the industry on Microsoft’s partnership with OpenAI, which has seen the tech giant invest billions into the AI firm and have an observer on its board.

The CMA said it was minded to look into the partnership in part because of the Altman saga.

It is another sign that scrutiny of the AI sector is likely to continue intensifying in the coming year.

By Press Association
