Generative AI ‘helping criminals create more sophisticated cyber attacks’

30 November 2023, 00:04

AI research. Picture: PA

The UK’s National Cyber Security Centre has also highlighted the use of AI to create and spread disinformation as a key threat.

The rise of generative AI tools such as ChatGPT is helping cybercriminals create more convincing and sophisticated scams, cybersecurity experts have warned.

As ChatGPT marks the first anniversary of its launch to the public, a number of industry experts have said the technology is being leveraged by bad actors online.

They warn that generative AI tools for text and image creation are making it easier for criminals to create convincing scams, although AI is also being used to boost cyber defences by identifying evolving threats as they appear.
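On the defensive side, "identifying evolving threats" often comes down to flagging statistical anomalies in activity data. As a minimal illustration (the login counts and the threshold below are invented for the sketch, not drawn from any real system):

```python
from statistics import mean, stdev

# Hypothetical hourly counts of failed logins; the final hour spikes.
failed_logins = [3, 5, 4, 6, 5, 4, 5, 3, 4, 48]

# Use all but the latest hour as the baseline.
baseline = failed_logins[:-1]
mu, sigma = mean(baseline), stdev(baseline)

# z-score of the latest hour against that baseline.
z = (failed_logins[-1] - mu) / sigma

# A z-score well above ~3 marks the hour as anomalous.
print(f"z-score for latest hour: {z:.1f}")
print("alert" if z > 3 else "ok")
```

Real defensive tooling is far more sophisticated, but the principle is the same: learn what normal looks like, then surface deviations quickly.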

At the UK’s AI Safety Summit earlier this month, the threat of more sophisticated cyber attacks powered by AI was highlighted as a key risk going forward, with world leaders agreeing to work together on the issue.

The UK’s National Cyber Security Centre (NCSC) has also highlighted the use of AI to create and spread disinformation as a key threat in years to come, especially around elections.

James McQuiggan, security awareness advocate at cyber security firm KnowBe4, said the impact of generative AI, and the large language models (LLMs) which power them, was already being felt.

“ChatGPT has revolutionised the threat landscape, open source investigations, and cybersecurity in general,” he told the PA news agency.

“Cybercriminals leverage LLMs to generate well-written documents with proper grammar and no spelling mistakes to level up their attacks and circumvent one of the biggest red flags taught in security awareness programmes – the notion that poor grammar and spelling mistakes are indicative of social engineering email or phishing attacks.

“Unsurprisingly, there have been increased sophistication and volume of phishing attacks in various styles, creating challenges for businesses and consumers alike.

“With generative AI also lowering the technical barrier to creating convincing profile pictures, impeccable text and even malware, AI and LLMs like ChatGPT are increasingly being used to create more convincing phishing messages at scale.”
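The "poor grammar" red flag McQuiggan describes can be pictured as a naive filter that counts known misspellings. This hypothetical sketch (the wordlist and scoring are invented for illustration, not a real security product) shows why LLM-polished text sails straight past it:

```python
import re

# Tiny illustrative list of misspellings often seen in crude phishing emails.
COMMON_MISSPELLINGS = {"recieve", "acount", "verifcation", "pasword", "urgen"}

def crude_phishing_score(message: str) -> int:
    """Count words matching known misspellings -- the old-style red flag."""
    words = re.findall(r"[a-z']+", message.lower())
    return sum(1 for w in words if w in COMMON_MISSPELLINGS)

clumsy = "Please verifcation your acount or you will not recieve payment."
polished = "Please verify your account details to ensure your payment is processed."

print(crude_phishing_score(clumsy))    # → 3: three misspellings flagged
print(crude_phishing_score(polished))  # → 0: LLM-polished text scores clean
```

A message run through an LLM scores zero on any spelling-based heuristic, which is exactly why that training-era advice no longer holds up on its own.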

The next generation of generative AI models is expected to start appearing in 2024, with experts predicting they will be significantly more capable than current models.

Looking ahead to potential future uses of generative AI by bad actors, Borja Rodriguez, manager of threat intelligence operations at cyber security firm Outpost24, said hackers could develop AI tools to write malicious code for them.

“Currently, tools like Copilot from GitHub help developers generate code automatically,” he said.

“Not far from that, someone could create a similar tool specifically to assist in creating malicious code, scripts, backdoors and more, aiding script kiddies (novice hackers) with low levels of technical knowledge to achieve things they weren’t capable of in the past.

“These tools will assist underground communities in executing complex attacks without much expertise, lowering the skill requirements for those executing them.”

The rate of advancement of generative AI, and the general unknown potential of the technology for the years to come, has created an uncertainty around it, the experts say.

Many governments and world leaders have begun discussions on how to regulate AI, but without knowing more about the possibilities of the technology, piecing together successful regulation will be difficult.

Etay Maor, senior director of security strategy at Cato Networks, said the issue of trust remained key for LLMs, which are trained on large amounts of text data, and for how they are programmed.

“As the excitement surrounding LLMs settles into a more balanced perspective, it becomes imperative to acknowledge both their strengths and limitations,” he said.

“Users must verify critical information from reliable sources, recognising that, despite their prowess, LLMs are not immune to errors.

“LLMs such as ChatGPT and Bard have already reshaped the landscape.

“However, a lingering uncertainty persists as the industry grapples with understanding where these tools source their information and whether they can be fully trusted.”

By Press Association
