Companies Face New AI Risks, Cybersecurity Issues

AI technologies are empowering cybercriminals with malicious social engineering tactics

Michael Gray, Chief technology officer at Thrive

July 29, 2024


As AI technology rapidly evolves and is adopted, cybersecurity concerns among U.K. businesses are reaching a fever pitch. With no signs of slowing down, the global AI industry is expected to grow in value from $88 billion in 2022 to $411 billion by 2027. That growth will dramatically reshape the AI-generated threat landscape.

A January 2024 report by the National Cyber Security Centre (NCSC) stated that AI technologies are empowering cybercriminals, enhancing social engineering tactics and attacks such as malware, ransomware, and phishing that lure targets into handing over sensitive and personal information. The rapid processing speed of AI enables cybercriminals to sift through massive amounts of data across network assets, exposing vulnerabilities and gaps in IT infrastructures.

Another obstacle businesses face is the emergence of large language models (LLMs) and generative AI. These technologies make email phishing attempts and social engineering schemes far harder to detect. As AI-generated malware evolves, the window between a security update being released and hackers exploiting unpatched software shrinks, and keeping businesses secure can feel like standing in quicksand. The NCSC cautions that these developments will likely become the focal point of cybersecurity resilience challenges for the U.K. government and private enterprises in the near term.


Recent cyber incidents and breaches across the U.K. serve as reminders of how vulnerable critical infrastructure is to cyber threats, and of how essential it is to protect data integrity and business continuity. That's why it is more important than ever for businesses to implement proactive security measures as they integrate AI capabilities.

Key considerations businesses need to adopt include:

  • As with any technology adoption, clearly outline how the company intends to use AI tools and specify the expected return on investment.

  • Invest in training and educational programs to ensure that employees have the skills to use AI technology to its full benefit while mitigating potential risks.

  • Establish procedures and protocols to maintain the quality of AI-generated output. It is important to define what information should and should not be shared with an AI model to prevent any compromise of data integrity.

Security is no longer a luxury but a necessity. As businesses embrace AI technologies and their vast potential, the risks will outweigh the rewards unless companies invest in and implement robust cybersecurity measures alongside AI. In addition, businesses should consider working with outsourced technology partners that can help monitor and advise on AI-related threats before a breach occurs.


About the Author

Michael Gray

Chief Technology Officer at Thrive

Michael Gray is the chief technology officer at Thrive, a global technology outsourcing provider for cybersecurity, cloud, and traditional managed service provider (MSP) services. Michael has held several positions at Thrive, including network engineer, consulting engineer, solutions manager, and director of network operations. He previously worked for a publicly traded biotechnology company that was acquired by one of the top five pharmaceutical companies in the world. Michael now plays an integral role in hosted and managed services product management and development.

Michael has a degree in business administration from Northeastern University. He is also a Kaseya Certified Master Administrator and a SonicWall Network Security Advanced Administrator. He is a member of various partner councils, including SonicWall's VAR council.
