Labour commits to introducing AI regulation for tech giants
13 June 2024, 14:34
The party’s manifesto says it will introduce “binding regulation” on the safe development of AI models.
Labour has said it will introduce “binding regulation” on the biggest artificial intelligence firms to ensure the “safe development” of AI if it wins the General Election.
In its manifesto, the party said it would target the regulation at the “handful of companies developing the most powerful AI models”.
Labour said it would also ban the creation of sexually explicit deepfakes, and pledged to create a new Regulatory Innovation Office, which it said would help regulators across sectors keep up with rapidly evolving new technologies.
It said regulators were currently “ill-equipped” to deal with such advances, which often “cut across traditional industries and sectors”.
The new office would help regulators “update regulation, speed up approval timelines and co-ordinate issues that span existing boundaries”, Labour said.
This contrasts with the Government’s approach during the last parliament: rather than creating a new central regulator dedicated to the emerging technology, it asked existing regulators to monitor AI use within their own sectors, an approach it described as more agile and pro-innovation.
As part of that approach, in February the Government pledged to spend £100 million on AI regulation, including upskilling regulators across different sectors on how to handle the rise of AI.
And speaking in November last year, Prime Minister Rishi Sunak said that while “binding requirements” would likely be needed one day to regulate AI, for now it was the time to move quickly without legislating.
Last month, a number of world-leading AI scientists called for stronger action from world leaders on the risks associated with AI, and said governments were moving too slowly to regulate the rapidly evolving technology.
In an expert consensus paper published in the journal Science, 25 leading scientists said more funding was needed for AI oversight institutions, as well as more rigorous risk-assessment regimes.