Artificial intelligence will “almost certainly” raise the number and impact of cyber attacks over the next two years, according to a new report by the U.K.’s National Cyber Security Centre (NCSC).
Bad actors will be able to use AI to analyze data faster and to train AI models for nefarious purposes. AI could also be used to assist in the development of malware capable of evading detection by current security filters.
Moreover, the center said “all types” of attackers “are already using AI, to varying degrees,” whether they are state-backed or not, skilled or less skilled.
The center divided attackers into three groups: highly capable attackers with the ability to attack nation states; capable attackers and cyber criminal organizations; and less skilled hackers and hacker activists (hacktivists).
AI will affect each type of bad actor differently.
For the two most capable groups, AI will enhance their capabilities in social engineering (manipulating people into carrying out certain actions or revealing information), phishing and exfiltration (taking information out of a system).
However, for reconnaissance (staking out a target before an attack) and attack tooling, AI will provide only a moderate or minimal uplift for these two most capable groups.
The group that stands to gain the most from AI is the lowest tier: less skilled, opportunistic hackers-for-hire and other cyber criminals, as well as hacktivists.
AI will provide a “significant uplift” to their social engineering, phishing and password-stealing capabilities, an uplift to their reconnaissance and exfiltration abilities and a “moderate” uplift to their toolkit, according to the center.
As such, AI stands poised to help unskilled bad actors conduct more effective cyber attacks by lowering the skills barrier for mounting them.
The arming of the least skilled hackers will contribute to the global ransomware threat over the next two years, according to the NCSC. In a ransomware attack, hackers block access to a company’s, organization’s or user’s data until a ransom is paid.
In the near term, ransomware will remain the biggest cybersecurity threat due to its financial rewards and “established business model.” Cyber criminal gangs can have highly organized set-ups, including customer service centers for ransomware victims who have trouble restoring access to their data even after paying the hackers.
However, the most sophisticated attacks will still come from the two most capable groups of hackers, who have access to quality training data and have “significant” expertise and resources.
These most advanced uses of AI in attacks are “unlikely to be realized” until 2025, the center said.
“The emergent use of AI in cyber attacks is evolutionary, not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term,” said NCSC CEO Lindy Cameron, in a statement.
Cyber criminals are already making use of generative AI and are even developing ‘GenAI-as-a-service’ for other bad actors to buy. However, the effectiveness of these new tools and systems will be constrained by both the quantity and quality of data on which they are trained, the report noted.
The NCSC said that private industry is already adopting AI’s use in enhancing cyber security resilience through improved threat detection and security-by-design.
Steve Young, Dell’s U.K. senior vice president and managing director, said businesses should consider adopting a Zero Trust framework to protect themselves.
“Data privacy, cybersecurity and storage standards will all mitigate risks associated with using AI technology. Additionally, while GenAI deployments will span all locations, it may prove more manageable, cost-effective and secure for enterprises to bring AI closer to their data. For many, that will mean their AI models and data working together in their own, secure environment,” he said in an emailed statement.
Ivana Bartoletti, chief privacy and AI governance officer at Wipro, added that it is “crucial that we build an alliance between technical experts and policymakers so we can develop the future of AI in threat hunting and beyond, and support organizations in the fight to protect their assets.”
Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.