No tool exists that can precisely tell whether a phishing email was written by an AI chatbot. That is one of the more depressing findings of the Phishing Threat Trends Report, a study released Monday by cybersecurity company Egress.
Most detection tools rely on large language models (LLMs). But these tend to be accurate only with longer samples, typically 250 characters or more.
Yet 44.9% of phishing emails fall below that minimum, and a further 26.5% contain fewer than 500 characters, a range where accuracy is still degraded.
The result: 71.4% of attacks cannot be reliably detected.
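To make the length constraint concrete, here is a minimal sketch, in Python, of how a detector might gate its verdict on sample length. The `classify_ai_text` stub is an illustrative assumption, not Egress's implementation; only the 250- and 500-character figures come from the report.

```python
# Minimal sketch: gate an AI-text detector on sample length.
# classify_ai_text is a placeholder, not a real library call;
# the 250/500-character thresholds mirror the figures in the report.

RELIABLE_CHARS = 250   # detectors tend to be most accurate above this
MARGINAL_CHARS = 500   # below this, confidence is still degraded

def classify_ai_text(text: str) -> str:
    # Stand-in for an LLM-based classifier; a real system would call a model here.
    return "ai-generated" if "regenerate response" in text.lower() else "human-written"

def detect(email_body: str) -> str:
    n = len(email_body)
    if n < RELIABLE_CHARS:
        return "indeterminate: sample too short to classify reliably"
    verdict = classify_ai_text(email_body)
    if n < MARGINAL_CHARS:
        return "low confidence: " + verdict
    return verdict

print(detect("Dear customer, please verify your account."))
# -> indeterminate: sample too short to classify reliably
```

Under these assumptions, the short messages that dominate phishing traffic never even reach the classifier, which is the report's point.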
“Without a doubt chatbots or large language models (LLM) lower the barrier for entry to cybercrime, making it possible to create well-written phishing campaigns and generate malware that less capable coders could not produce alone,” says Jack Chapman, vice president of threat intelligence at Egress.
Chapman adds: “Within seconds a chatbot can scrape the internet for open-source information about a chosen target that can be leveraged as a pretext for social engineering campaigns, which are growing increasingly common.”
Here’s another problem: 55.2% of phishing emails use obfuscation techniques to avoid detection. The report offers a breakdown of the most popular of these techniques.
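By way of illustration, two obfuscation tricks commonly cited in the security literature are zero-width characters hidden inside words and homoglyph (lookalike-letter) substitution. The sketch below shows how a scanner might flag both from the defensive side; the character sets are illustrative subsets, not taken from the report.

```python
# Sketch: flag two common obfuscation tricks in email text.
# Zero-width characters break up keywords invisibly; Cyrillic
# homoglyphs mimic Latin letters. Both sets are illustrative subsets.

ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}  # e.g. zero-width space
HOMOGLYPHS = {"а": "a", "е": "e", "о": "o", "р": "p", "с": "c"}  # Cyrillic lookalikes

def obfuscation_flags(text: str) -> list[str]:
    flags = []
    if any(ch in ZERO_WIDTH for ch in text):
        flags.append("zero-width characters present")
    if any(ch in HOMOGLYPHS for ch in text):
        flags.append("Cyrillic homoglyphs mixed into Latin text")
    return flags

sample = "Plеase vеrify your Pay\u200bPal account"  # Cyrillic 'е', hidden break
print(obfuscation_flags(sample))
# ['zero-width characters present', 'Cyrillic homoglyphs mixed into Latin text']
```

Tricks like these are cheap for an attacker and defeat naive keyword filters, which helps explain the detection-evasion figure above.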
This year, 54.5% of phishing emails got through secure email gateways, versus 42.3% in 2022. In addition, 38.8% made it through Microsoft’s defenses, up from 31% last year, the study states.
It’s not a pretty picture.