
30 January 2025, 18:19 | Updated: 30 January 2025, 18:32
The so-called 'Godfather of AI' has warned that artificial intelligence is an "alien technology" that could replace humans.
Geoffrey Hinton, a British-Canadian computer scientist known for his pioneering work in the field, told LBC's Andrew Marr that artificial intelligences had developed consciousness - and could one day take over the world.
Mr Hinton, who has been criticised by some in the world of artificial intelligence for having a pessimistic view of AI's future, also said that no one knew how to put in place effective safeguards and regulation.
He compared AI and humans to adults and three-year-olds.
"It wouldn't be very difficult for you to persuade a bunch of three year olds to cede power to you," Mr Hinton said. "You just tell them you get free candy for a week and there you'd be."
Many companies and organisations have already started using AI in their everyday work, creating efficiencies as a result. But Mr Hinton said he was concerned humans would lose control of the technology they have created.
"We would like them to be just tools that do what we want, even when they're cleverer than us," he said.
"But the first thing to ask is - how many examples do you know of more intelligent things being controlled by much less intelligent things?"
He put forward a scenario in which 'superintelligences' evolve to realise that having greater computing power will make them "smarter".
"Suppose one of them just has a slight desire to have more copies of itself," he said. "You can see what's going to happen next.
"They're going to end up competing and we're going to end up with super intelligences with all the nasty properties that people have that depended on us having evolved from small bands of warring chimpanzees or our common ancestors with chimpanzees.
"And that leads to intense loyalty within the group, desires for strong leaders, willingness to 'do in' people outside the group."
AI has hit the headlines again recently after Chinese firm DeepSeek appeared to undermine the reputation of US company OpenAI and its famous ChatGPT software, seemingly performing similarly while using less computing power.
While the key technological developments in the field are taking place in the US and China, the UK and EU have sought to position themselves as regulatory and safeguarding leaders.
Then-Prime Minister Rishi Sunak hosted an AI safety summit at Bletchley Park in 2023.
But Mr Hinton said that it remains unclear how to implement effective regulation and safeguards on AI.
"There's lots of research now showing these things can get round safeguards," he said. "There's recent research showing that if you give them a goal and you say you really need to achieve this goal, they will pretend that to do things during training. During training they'll pretend not to be as smart as they are so that you will allow them to be that smart.
"So it's scary already. We don't know how to regulate them.
"Obviously, we need to. I think the best we can do at present is say we ought to put a lot of resources into investigating how we can keep them safe."
Mr Hinton said that in the short-term, AI could have a "wonderful" impact on humanity, improving healthcare and education for many.
But he also warned that criminals could use it to corrupt elections and carry out terrorist and cyber attacks.
Mr Hinton has spoken out about the risks of artificial intelligence for years, and quit Google in 2023 so that he could warn freely about the technology's impact.
He was awarded the Nobel Prize in Physics last year for making "foundational discoveries and inventions that enable machine learning with artificial neural networks".
Mr Hinton told Marr there were two fundamental questions about AI: "Do you understand how it's working, and do you understand how to make it safe?
"We understand quite a bit about how it's working but not nearly enough. So it can still do lots of things that surprise us and we don't understand how to make it safe."