Abnormal Security, a behavioral AI-based email security platform, has introduced a capability for detecting AI-generated email attacks, including business email compromise (BEC) attempts.
The new tool, CheckGPT, is designed to “combat the threat of AI by investing in AI-powered security solutions that ingest thousands of signals to learn their organization’s unique user behavior, apply advanced models to precisely detect anomalies, and then block attacks before they reach employees,” says Evan Reiser, chief executive officer at Abnormal Security.
CheckGPT uses a suite of open-source large language models (LLMs) to assess how likely it is that a generative AI model created a fraudulent message, analyzing the context surrounding each email. It then combines this indicator with what the firm calls an ensemble of AI detectors to make a final determination, the company says.
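To make the "ensemble of AI detectors" idea concrete, here is a minimal sketch of how weighted detector outputs can be combined into a single verdict. Everything below is illustrative: the detector functions, their heuristics, and the weights are hypothetical stand-ins, not Abnormal Security's actual models or signals.

```python
# Toy ensemble in the spirit of the approach described above.
# All detectors and weights are hypothetical, for illustration only.
from typing import Callable, List, Tuple

# A detector maps an email body to a probability that it is AI-generated.
Detector = Callable[[str], float]

def uniformity_detector(text: str) -> float:
    """Hypothetical: unusually uniform sentence lengths hint at generated text."""
    sentences = [s for s in text.split(".") if s.strip()]
    if len(sentences) < 2:
        return 0.5  # not enough signal; stay neutral
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return 0.9 if variance < 4 else 0.3

def phrase_detector(text: str) -> float:
    """Hypothetical: flags stock phrases common in BEC-style lures."""
    phrases = ["urgent wire transfer", "kindly", "confidential request"]
    hits = sum(p in text.lower() for p in phrases)
    return min(1.0, 0.25 + 0.3 * hits)

def ensemble_score(text: str, detectors: List[Tuple[Detector, float]]) -> float:
    """Weighted average of detector outputs; a threshold yields the verdict."""
    total_weight = sum(w for _, w in detectors)
    return sum(d(text) * w for d, w in detectors) / total_weight

detectors = [(uniformity_detector, 0.6), (phrase_detector, 0.4)]
email = "Kindly process this urgent wire transfer. It is a confidential request."
score = ensemble_score(email, detectors)
print(f"AI-generated likelihood: {score:.2f}",
      "-> block" if score > 0.5 else "-> deliver")
```

In a production system the individual detectors would be learned models (e.g. LLM-based perplexity scoring) rather than hand-written heuristics, but the combination step, weighting several independent signals and thresholding the result, follows the same shape.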
Abnormal’s most recent research report found a 55% increase in BEC attacks over the previous six months.
“The degree of email attack sophistication is going to significantly increase as bad actors leverage generative AI to create novel campaigns,” says Karl Mattson, chief information security officer at Noname Security.
Noname Security is using Abnormal for advanced email attack detection. “It's not reasonable that each company can become an AI security specialty shop,” Mattson explains.