Tech chiefs call on scientists to pause development of AI systems
29 March 2023, 19:14
Elon Musk and Steve Wozniak are among those calling on researchers to make sure advances do not pose a risk to humanity.
Technology experts including Elon Musk have urged scientists to pause the development of artificial intelligence (AI) to ensure it does not pose a risk to humanity.
Tech chiefs including Apple co-founder Steve Wozniak and Skype co-founder Jaan Tallinn have signed an open letter demanding that all labs pause the training of powerful AI systems for at least six months.
The prevalence of AI has increased massively in recent years, with systems such as chatbot ChatGPT quickly becoming part of everyday life.
The letter said: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one – not even their creators – can understand, predict or reliably control.
“Contemporary AI systems are now becoming human-competitive at general tasks and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?”
It added: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The technology chiefs do not want any AI systems more powerful than the new GPT-4 model to be trained for now, and have called for researchers to focus on making the technology more accurate, safe and transparent.
US tech firm OpenAI released GPT-4, the latest version of the technology behind its AI chatbot ChatGPT, earlier this month.
ChatGPT was launched late last year and has become an online sensation thanks to its ability to hold natural conversations as well as generate speeches, songs and essays.
The bot can respond to questions in a human-like manner and understand the context of follow-up queries, much as in a human conversation. It can even admit its own mistakes or reject inappropriate requests.
According to OpenAI, GPT-4 has “more advanced reasoning skills” than ChatGPT but, like its predecessors, GPT-4 is still not fully reliable and may “hallucinate” – a phenomenon where AI invents facts or makes reasoning errors.
The letter said humanity can enjoy an “AI summer” in which it reaps the rewards of these systems, but only once safety protocols have been put in place.
The letter added: “Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.
“Society has hit pause on other technologies with potentially catastrophic effects on society.
“We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”