The rise of generative AI is having a transformative effect across almost every industry, from marketing and gaming to regulated sectors such as healthcare and financial services (Goldman Sachs deployed its first generative AI tool firm-wide earlier this year). Arguably, though, the area on which generative AI has had the most significant impact is cybercrime, with 70% of businesses citing AI-driven fraud as their second greatest challenge. Deepfake videos, voice phishing calls, and the like are giving bad actors a dangerous suite of tools with which to exploit individuals, businesses, and governments. And the use of these tools is surging, with deepfake fraud impacting 49% of global businesses in the past year.
The generative phishing email is one of the most insidious forms of generative AI-enabled cybercrime. AI platforms trained on email traffic at both a macro and micro level can exploit human credulity with a speed, scale, and precision that the systems typically used to protect email communications simply cannot match. Most concerningly for those affected by these scams, the adaptive nature of generative AI means its ability to trick even the most well-trained employees will only increase over time.
Fighting back against criminal enterprises leveraging generative AI requires an equally forceful response. Here’s how organizations can use AI to build their defenses against these new attack vectors.
While it is considered a relatively new technology, artificial intelligence is omnipresent. We encounter AI in many aspects of our everyday lives, whether in the recommendation engines of Video-on-Demand and streaming platforms, weather forecasts based on predictive analytics, or the diagnostic tools now commonly used in hospitals and clinics.
Other applications such as robotics in manufacturing and construction, natural language processing in translation and transcription tasks, and automated fraud detection in banking have also quickly become the norm. Where email communications are concerned, a category of AI known as Discriminative AI has come to the fore. Discriminative AI can learn how to distinguish between classes of input data. It's a type of probabilistic machine learning (ML) that has been used for email filtering since the mid-1990s, so it's had nearly 30 years to learn from user feedback and adapt to new patterns in data.
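The probabilistic, feedback-driven filtering described above is classically implemented with a Naive Bayes classifier. The sketch below is a minimal, self-contained illustration of that idea; the tiny training set, tokenizer, and class names are illustrative assumptions, not any vendor's actual model.

```python
import math
from collections import Counter

def tokenize(text):
    # Illustrative tokenizer: lowercase, split on whitespace.
    return text.lower().split()

class NaiveBayesFilter:
    """Toy probabilistic email filter in the Naive Bayes tradition."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.class_counts = Counter()

    def train(self, text, label):
        # User feedback ("this was spam/ham") updates the counts,
        # which is how such filters adapt over time.
        self.class_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def classify(self, text):
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        total = sum(self.class_counts.values())
        scores = {}
        for label in ("spam", "ham"):
            # Log-probabilities with add-one (Laplace) smoothing.
            logp = math.log(self.class_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(vocab)
            for word in tokenize(text):
                logp += math.log((self.word_counts[label][word] + 1) / denom)
            scores[label] = logp
        return max(scores, key=scores.get)

f = NaiveBayesFilter()
f.train("verify your account password urgently", "spam")
f.train("urgent wire transfer verify account", "spam")
f.train("meeting notes attached for review", "ham")
f.train("lunch tomorrow to review the notes", "ham")
print(f.classify("please verify your account"))  # spam
print(f.classify("notes from the meeting"))      # ham
```

Real deployments use far richer features (headers, URLs, sender reputation) and much larger corpora, but the feedback loop is the same: each user report nudges the probabilities.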
With a quarter of US IT and security professionals saying they are concerned about phishing, and phishing messages generated by AI tools proving especially convincing to humans, Discriminative AI filters have a crucial job. Businesses must use these tools not only for spam filtration but also to recognize what constitutes normal communication patterns, both internally between staff and externally between employees and other stakeholders, so that anything out of the ordinary can be flagged.
Discriminative AI is an adaptive solution that learns over time and is critical to reducing the potential impact of email compromise, phishing attacks, and impersonation. Like Discriminative AI, generative AI is also a type of machine learning that can learn from pre-existing data patterns and then create new data based on these patterns, making it the ideal phishing email factory for bad actors.
For example, cybercriminals may use a generative AI platform such as ChatGPT to create a new phishing threat that uses a specific tactic to try to bypass filters and convince an employee to respond or click on a link within the message. It could be closely based on a message the employee has received before, appearing to come from a colleague and written in a way that seems convincing to the human eye.
Once Discriminative AI identifies such a message as a phishing threat, the communication chain can be flagged and quarantined pending human investigation. Because cybercriminals have access to generative AI models that can produce an effectively infinite number of variants, organizations must use Discriminative AI as a defense; traditional filters have no hope of keeping up with the sheer scale of generative AI-powered attacks.
Discriminative AI’s ability to identify patterns in a fraction of the time it would take a human gives it undoubted utility as a tool that can empower IT analysts to make decisions much more quickly. Organizations would be well advised to use these technologies alongside human supervision to provide a multi-layered security defense. Traditional email gateways coupled with Discriminative AI should be the first line of defense, but they need to be backed up with human oversight to identify false positives and keep up with emerging cybercriminal tactics.
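The layering just described can be sketched as a simple triage pipeline: a static gateway rule runs first, a discriminative model scores what gets through, and borderline scores route to a human analyst. Everything here (the blocklist, the keyword-based score stub, the thresholds) is an illustrative assumption; a real system would use a learned classifier in place of the stub.

```python
BLOCKLIST = {"known-bad.example"}  # assumed static gateway data

def gateway_blocks(sender):
    # First line of defense: traditional gateway blocklist.
    return sender.split("@")[-1] in BLOCKLIST

def model_score(body):
    # Stub standing in for a Discriminative AI classifier; a real
    # system would return a learned probability of phishing.
    suspicious = {"urgent", "verify", "password"}
    return len(set(body.lower().split()) & suspicious) / len(suspicious)

def triage(sender, body):
    if gateway_blocks(sender):
        return "blocked"
    score = model_score(body)
    if score >= 0.9:
        return "quarantined"
    if score >= 0.3:
        return "human_review"  # analyst screens for false positives
    return "delivered"

print(triage("a@known-bad.example", "hi"))                    # blocked
print(triage("a@ok.example", "urgent verify your password"))  # quarantined
print(triage("a@ok.example", "please verify the invoice"))    # human_review
print(triage("a@ok.example", "lunch tomorrow?"))              # delivered
```

The "human_review" band is where oversight lives: analysts confirm or overturn the model's borderline calls, and those decisions can feed back into retraining.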
Many businesses are utilizing email security solutions to train their AIs to detect and adapt to telltale patterns of generative AI use that can fool human eyes and bypass secure email gateways. By incorporating the right AI tools into a company’s email systems, these organizations can benefit from defenses that learn typical communication patterns on a per-employee basis to enable the quick identification and containment of generative AI doppelgangers.
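One way to picture per-employee pattern learning is a baseline of the sender domains each employee normally corresponds with, flagging mail from never-before-seen domains, including lookalike "doppelganger" domains. This is an assumed design for illustration, not any specific product's approach; real systems model many more signals (writing style, send times, reply chains).

```python
from collections import defaultdict

class SenderBaseline:
    """Toy per-employee baseline of known correspondent domains."""

    def __init__(self):
        self.known = defaultdict(set)  # employee -> sender domains seen

    def observe(self, employee, sender):
        # Learn this employee's normal communication partners.
        self.known[employee].add(sender.split("@")[-1])

    def is_anomalous(self, employee, sender):
        # Flag mail from a domain this employee has never seen,
        # which catches lookalike domains impersonating real partners.
        return sender.split("@")[-1] not in self.known[employee]

b = SenderBaseline()
b.observe("alice@corp.com", "bob@partner.com")
print(b.is_anomalous("alice@corp.com", "carol@partner.com"))  # False
print(b.is_anomalous("alice@corp.com", "bob@partner.co"))     # True
```

Note that `partner.co` is flagged even though it differs from the trusted `partner.com` by a single character, exactly the kind of near-miss that fools human eyes.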
With less than a third of security professionals confident their systems can protect against AI threats, it’s clear that not enough businesses are leveraging an AI arsenal of their own. These organizations must quickly address the need to fight fire with fire, or risk being easy prey for cybercriminals.