Chatbot users should not share private information with software, expert warns
26 December 2023, 13:14
Prof Michael Wooldridge said complaining about personal relationships or expressing political views to artificial intelligence was ‘extremely unwise’.
Users of ChatGPT and other chatbots should resist sharing their private information with the technology, an expert has warned.
Michael Wooldridge, professor of computer science at Oxford University, said complaining about personal relationships or expressing political views to artificial intelligence (AI) was “extremely unwise”, the Daily Mail reported.
Prof Wooldridge will deliver the Royal Institution’s annual Christmas lectures on the BBC this week, with a focus on AI and help from the world’s first ultra-realistic robot artist, Ai-Da.
Speaking about finding personalities in chatbots, Prof Wooldridge said: “It has no empathy. It has no sympathy.
“That’s absolutely not what the technology is doing, and crucially it’s never experienced anything.
“The technology is basically designed to try to tell you what you want to hear – that’s literally all it’s doing.”
Prof Wooldridge said users should assume any information they type into ChatGPT or similar chatbots is “just going to be fed directly into future versions”, and that it is nearly impossible to retrieve data once it is in the system.
Ai-Da can create drawings, paintings, sculptures and performance art, and Aidan Meller, director of the Ai-Da project, said developments with AI would create “seismic changes” across industries in the next four years.
He told BBC Radio 4’s Today programme: “AI is incredibly powerful – it’s going to transform society as we know it, and I think we’re really only at the very beginning.
“We have these explosions of development, things like ChatGPT that people know about, but in actual fact as more and more people get to grips with it, we think that by 2026 or 2027 there’s going to be a seismic change as AI is in all industries.”
Mr Meller said the medium of art allows scientists to discuss and study issues around AI without any risk of threat to humans, because art itself is benign.
Talking about the Royal Institution lectures, he said: “I think AI is going to enable us to have very fake situations, and we’re not going to know whether they’re fake or not – that is where lies the problem.
“We don’t know what we’re dealing with, and we hope that these lectures by the Royal Institution are going to be able to really open that topic up.
“Remember we’ve got the elections next year, very worrying times for things that are fake and not fake, so in actual fact it is a very serious matter.”
Mr Meller described 2024 as “a very big year” for AI, with the fifth version of ChatGPT set to be released, which he said will be able to take actions rather than simply respond with text.
He explained: “You could say to your phone ‘Can you book me the restaurant on Monday at seven?’ ChatGPT Five will be able to phone up the restaurant, speak to them audibly, say ‘Hi, I’m trying to get an appointment for seven’ and book it for you, and then come back to you and say ‘We’ve now done that’. Can you imagine how that’s going to be useful in business?”
Mr Meller also pointed to progress in the Metaverse – a virtual reality platform developed by Facebook parent company Meta – as another major development expected in 2024.