April 5, 2023
At a Glance
- Sophisticated phishing emails rose 135% from January to February, coinciding with ChatGPT's rise.
- Generative AI tools are letting criminals craft well-written scam emails, and 82% of workers worry they could be fooled by one.
Researchers noticed an uptick in “novel social engineering attacks” in Darktrace customers’ emails from January to February, “corresponding to the widespread adoption of ChatGPT,” the company said. These emails were notably better written and more carefully crafted.
Notably, 61% of people said they spot scam or phishing emails by their poor spelling and grammar, but generative AI has enabled cyber criminals to craft emails that have “no mistakes.”
“The trend suggests that generative AI, such as ChatGPT, is providing an avenue for threat actors to craft sophisticated and targeted attacks at speed and scale,” the company said.
As such, 82% of employees fear that they cannot distinguish phishing from genuine communication.
For decades, email has been, and remains, the main communication tool for companies. It is also an organization’s most vulnerable point of attack, because it takes just one employee clicking on an email attachment to let criminals in, according to Darktrace.
Around 3.4 billion phishing emails are sent daily, the cybersecurity firm said.
Generative AI tools could prove an effective weapon for malicious actors, especially given that 30% of respondents said they had fallen for a fraudulent email or text in the past.
Beyond the data from Darktrace's own customers, 70% of employees surveyed globally said they had noticed an increase in the frequency of scam emails and texts in the last six months.
Darktrace surveyed 6,700 workers in the U.S., U.K., France, Germany, Australia and the Netherlands.