Artificial Intelligence already working one million hours a year for police, as concerns raised over criminal ‘AI race’
1 November 2023, 13:56
One police force has worked its way through ‘65 years of data in just six months’ thanks to advances in artificial intelligence (AI), according to the chief scientific adviser for police.
Speaking to LBC, Paul Taylor said the technology - which is being discussed at a safety summit this week - is already working shifts equivalent to those of 600 officers every year.
"All forces are benefiting from AI already, it’s integrated into systems around unmanned vehicles and drones and in language translation for rapid crisis situations,” he said.
“We’re using AI in facial recognition technology, identifying hundreds of offenders every month.
"It’s looking through hundreds of thousands of images to identify illegal child pornography material. Historically our teams would have had to look at that material manually, now we’re able to use artificial intelligence to find those explicit and horrible images.
“That not only speeds up the investigation, it also means our workforce is not having to look at lots of that material - which is important.
“Of course, in every call it’s a human making the final decision but what the AI is doing is helping those humans complete their tasks in a rapid manner.”
Mr Taylor insisted the increased use of the technology does not mean people will lose their jobs - rather, it would free officers up to “get back to the things they joined the police for in the first place”.
Researchers have been developing the use of artificial intelligence for more than a decade across different sectors.
The government has been using it to identify fraudulent benefit claims.
National Grid uses AI drones to maintain energy infrastructure.
And the NHS has been working on systems to manage hospital capacity, train and support surgeons in carrying out complex operations and to more accurately diagnose conditions.
Jorge Cardosa, a researcher at King’s College London, showed LBC a system his team has developed which compares MRI scans to quantify abnormalities and aid diagnoses, rather than relying on a clinician’s educated guess.
“A lot of these AI systems will do many of the really boring jobs that clinicians and nurses currently do and release their time to focus more on the patients. But it’s also making it easier to diagnose issues and give clinicians all the information they need.
“In this example, it’s a way to transform complex images into a series of numbers that can help figure out what’s wrong, while AI is also gathering all the data the NHS holds about a patient to stitch it together and help build a better picture.
“The ultimate decision is always with the clinician and the patient though, who should always be able to opt in or opt out.”
Concerns have been raised about the rapid development of the technology, though, particularly when it comes to national security.
Paul Taylor, who works closely with police chiefs across the UK, went on to tell LBC that they need to be aware of the ‘AI race’ as criminals look to exploit the use of the technology.
"We have that kind of tension of making we’re rolling it out proportionately and sensibly but equally understanding that as it’s moving forwards, that criminals don’t have the same moral standards that we would have.
"Two of our most present concerns are around deepfakes, where images and videos are being used in exploitation cases. We are concerned about that becoming easier and easier to do and are building technologies to help us spot those fakes and stop them at source.
"And the other is automation of fraud, with things like ChatGPT which can create very convincing narratives. You can imagine that being automated into a system where we can see large scale fraud utilising Artificial Intelligence.
"Those are two areas of many threats that we are alive to, but the opportunities hopefully outweigh the threats."