Navigating Generative AI's Cybersecurity Challenges

An opinion piece by the director of security and strategy, EMEA, at Akamai Technologies

Richard Meeus, Akamai Director of Security and Strategy, EMEA

November 22, 2023


The rapid progress of generative AI has ushered in a new era of innovation and creativity, putting remarkable new capabilities in the hands of businesses. Breakthroughs in generative AI could lift global GDP by 7% over a 10-year period, according to Goldman Sachs research, demonstrating the scale of the opportunity. However, as organizations embrace the power of this new technology, they must also confront the cybersecurity risks that accompany it.

Chief among these is business email compromise (BEC), in which attackers trick employees into revealing sensitive information or inadvertently making payments to bad actors. Organizations also face the risk of employees and IT staff divulging proprietary information to generative AI platforms, or going rogue.

To counter these threats, companies need to reinforce their cybersecurity training and awareness programs and update them to reflect the emerging role of generative AI in business. Balancing the immense possibilities of this technology with prudent safeguards is imperative to mitigate the risks effectively.

Generative AI: A powerful tool in the hands of bad actors

Generative AI technology provides bad actors with potent BEC tools to enhance the quality and efficacy of phishing emails. By using sophisticated algorithms, malicious actors can automatically generate persuasive messages that mimic legitimate communication, leading unsuspecting individuals to disclose information or fall victim to malware.

This evolution in phishing techniques presents a considerable challenge for cybersecurity experts, as the boundary between authentic and AI-generated content blurs. Bad actors can also reinforce their written messages with AI-generated deepfake audio capable of defeating voice recognition safeguards.

Generative AI not only amplifies the risks of phishing emails but also empowers malicious actors to create more potent malware. White hat hackers have found ways to bypass the safeguards of multiple large language models (LLMs), a practice called ‘jailbreaking,’ which involves tricking the systems into generating forbidden content.

Cybercriminals can likewise leverage generative AI to create sophisticated code capable of infiltrating systems, stealing sensitive documents, and encrypting files for ransom. This alarming advancement enables attackers to automate the creation of malware, accelerating their ability to compromise networks and hold organizations hostage. As generative AI continues to evolve, its potential for abuse in developing stealthy and destructive malware becomes a growing concern.

In addition to the activity of bad actors outside the network, generative AI increases an organization's vulnerability to internal incidents. These tools heighten the risk of shadow IT: developers may independently explore new generative AI technologies, potentially leading to unauthorized and unregulated data usage.

Furthermore, the risks extend to employees who feed large swaths of text into LLMs as part of their day-to-day work. Accidentally submitting proprietary company information to an LLM poses significant risks, as that data could be incorporated into the model's training data and ultimately become public.
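One lightweight guardrail, sketched below in Python, is to screen outgoing prompts for obvious markers of proprietary data before they ever reach an external model. The patterns and the call_llm_provider stand-in are illustrative assumptions for this sketch, not a vetted data-loss-prevention ruleset.

import re

# Illustrative patterns only; a real deployment would use a vetted DLP
# ruleset tuned to the organization's own data classifications.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def call_llm_provider(prompt: str) -> str:
    # Hypothetical stand-in so the sketch is self-contained; a real
    # system would call the organization's approved LLM endpoint here.
    return "(model response)"

def submit_to_llm(prompt: str) -> str:
    """Block the request locally if the prompt looks sensitive."""
    findings = screen_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked before submission: {findings}")
    return call_llm_provider(prompt)

A check like this cannot catch everything, but it stops the most obvious leaks at the boundary where employees would otherwise hand data to a third-party model.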

How businesses can defend themselves

To mitigate the risks associated with generative AI, organizations need to establish and enforce comprehensive policies, such as a data handling and privacy policy and a vendor and third-party compliance policy. These policies should encompass guidelines for handling sensitive data within generative AI platforms, emphasizing the importance of data protection and responsible usage. By setting clear expectations and promoting a culture of security, businesses can minimize the chances of unintended data exposures and breaches.

Alongside policies, organizations must adopt technical safeguards to bolster their defenses. Implementing robust encryption, access controls, and monitoring mechanisms can help identify anomalous activities and potential data leaks. Collaborating with cybersecurity experts to deploy advanced threat detection systems and AI-driven anomaly detection can enhance organizations’ resilience against AI-generated threats.
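As one illustration of what such monitoring might look like at its simplest, the Python sketch below flags users whose daily outbound data volume spikes far above their own historical baseline. The z-score test, the threshold of 3.0, and the data shapes are simplifying assumptions; production anomaly detection would draw on far richer signals.

from statistics import mean, stdev

def flag_anomalies(history_mb: dict[str, list[float]],
                   today_mb: dict[str, float],
                   z_threshold: float = 3.0) -> list[str]:
    """Flag users whose outbound volume today deviates sharply
    from their own historical baseline (simple z-score test)."""
    flagged = []
    for user, history in history_mb.items():
        if len(history) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (today_mb.get(user, 0.0) - mu) / sigma > z_threshold:
            flagged.append(user)
    return flagged

# Example: a user who normally sends ~100 MB/day suddenly sends 900 MB.
history = {"alice": [95, 102, 98, 110, 101], "bob": [40, 38, 45, 42, 41]}
print(flag_anomalies(history, {"alice": 900, "bob": 43}))  # ['alice']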

The transformative potential of generative AI presents both innovative opportunities and intricate cybersecurity challenges. Recognizing that restricting access to such capabilities is impractical, organizations need to strike a balance between harnessing AI’s potential and implementing effective safeguards.

Prohibiting tool usage would stifle productivity. Instead, organizations must adopt reasonable policies and technical measures that let them tap into the capabilities of generative AI while minimizing risk. By integrating human vigilance with technological tools, organizations can proactively defend against the looming threat of AI-driven phishing attacks; implemented successfully, such defenses leave businesses free to explore the potential of generative AI while retaining a robust cybersecurity posture.


About the Author

Richard Meeus

Akamai Director of Security and Strategy, EMEA

Richard Meeus is the director of security and strategy, EMEA, at Akamai Technologies.

