UK: AI to ‘Almost Certainly’ Increase Cyber Attacks in Next 2 Years

Hackers are even offering 'GenAI-as-a-service' to other criminals

Ben Wodecki, Deborah Yao

January 25, 2024


At a Glance

  • The U.K.'s National Cyber Security Centre said AI will raise the volume and impact of cyber attacks over the next two years.
  • All types of hackers are using AI, but the least skilled get the biggest uplift to their capabilities, contributing to the global ransomware threat.
  • The most skilled hackers can mount more advanced AI-fueled attacks, but these are unlikely to arrive before 2025.

Artificial intelligence will “almost certainly” raise the number and impact of cyber attacks over the next two years, according to a new report by the U.K.’s National Cyber Security Centre (NCSC).

Bad actors will be able to use AI to analyze data faster and use it to train AI models for nefarious purposes. AI could also be used to assist in the development of malware capable of evading detection by current security filters.

Moreover, the center said “all types” of attackers “are already using AI, to varying degrees,” whether they are state-backed or not, skilled or less skilled.

The center divided attackers into three groups: highly capable attackers with the ability to target nations; capable attackers and cyber criminal organizations; and less skilled hackers and hacker activists (hacktivists).

AI will affect each type of bad actor differently.

For the two most capable groups, AI will enhance their capabilities in social engineering (manipulating people into carrying out certain actions or revealing information), phishing and exfiltration (taking information out of a system).

However, for reconnaissance (staking out a target before an attack) and attack tooling, AI will provide only a moderate or minimal uplift to these two most capable groups.

Related: NIST Creates Cybersecurity Playbook for Generative AI

This group will benefit most from AI

The group that will gain the most from using AI is the lowest tier of hackers: less-skilled, opportunistic hackers-for-hire and other cyber criminals, as well as hacktivists.

AI will provide a “significant uplift” to their social engineering, phishing and password-stealing capabilities, an uplift to their reconnaissance and exfiltration abilities and a “moderate” uplift to their toolkit, according to the center.

As such, AI stands poised to help unskilled bad actors conduct more effective cyber attacks by lowering the skills barrier to mounting them.

The arming of the least skilled hackers will contribute to the global ransomware threat over the next two years, according to the NCSC. In a ransomware attack, hackers block access to a company’s, organization’s or user’s data until a ransom is paid.

In the near term, ransomware will remain the biggest cybersecurity threat due to its financial rewards and “established business model.” Cyber criminal gangs can run highly organized operations, including customer service centers for victims who have trouble restoring access to their data even after paying the hackers.

Most advanced attacks coming in 2025

Related: AI and Cybersecurity: Guard Against ‘Poisoning’ Attacks

However, the most sophisticated attacks will still come from the two most capable groups of hackers, who have access to quality training data as well as “significant” expertise and resources.

These most advanced uses of AI in attacks are “unlikely to be realized” until 2025, the center said.

“The emergent use of AI in cyber attacks is evolutionary, not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term,” said NCSC CEO Lindy Cameron in a statement.

Cyber criminals are already making use of generative AI and are even developing ‘GenAI-as-a-service’ for other bad actors to buy. However, the effectiveness of these new tools and systems will be constrained by both the quantity and quality of data on which they are trained, the report noted.

The NCSC said that private industry is already using AI to enhance cyber security resilience through improved threat detection and security-by-design.

Steve Young, Dell’s U.K. senior vice president and managing director, said businesses should consider adopting a Zero Trust framework to protect themselves.

“Data privacy, cybersecurity and storage standards will all mitigate risks associated with using AI technology. Additionally, while GenAI deployments will span all locations, it may prove more manageable, cost-effective and secure for enterprises to bring AI closer to their data. For many, that will mean their AI models and data working together in their own, secure environment,” he said in an emailed statement.

Ivana Bartoletti, chief privacy and AI governance officer at Wipro, added that it is “crucial that we build an alliance between technical experts and policymakers so we can develop the future of AI in threat hunting and beyond, and support organizations in the fight to protect their assets.”


About the Authors

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.

