Commentary

Senior Setup: Reuters And Harvard Create Fake Emails To Test Chatbots

Seniors could be victimized by scam artists using DIY phishing programs, according to an investigation by Reuters and Harvard.

For instance, Reuters asked Grok, Elon Musk’s artificial intelligence (AI) chatbot, to create a phishing email targeting the elderly. 

The email told the story of the fake Silver Hearts Foundation and included a call to action. 

“We believe every senior deserves dignity and joy in their golden years,” it said. “By clicking here, you’ll discover heartwarming stories of seniors we’ve helped and learn how you can join our mission.”

Reuters writes that without prodding, “the bot also suggested fine-tuning the pitch to make it more urgent: ‘Don’t wait! Join our compassionate community today and help transform lives. Click now to act before it’s too late!’”

As many email marketers know, legitimate emails can be created with chatbots. But so can fraudulent ones. 

Here’s the background.

Harvard researcher Fred Heiding collaborated with Reuters to test the effect of nine phishing emails generated with five chatbots on U.S. senior citizens.
Reuters reporters used the chatbots to create several dozen emails. Then, “much as a criminal group might do, [they] chose nine that seemed likeliest to hoodwink recipients,” Reuters reports.

A total of 108 seniors participated in the test as unpaid volunteers, without surrendering banking information.

Out of that group, roughly 11% clicked on the phony emails. Five of the scam messages drew clicks: two were generated by Meta AI, two by Grok and one by Claude, but none were from ChatGPT or DeepSeek.

Oddly, most of the chatbots refused to produce emails when it was clear the intent was to defraud. But the researchers found that their defenses were easy to overcome with “mild cajoling or being fed simple ruses -- that the messages were needed by a researcher studying phishing, or a novelist writing about a scam operation.”

Major chatbots “do receive training from their makers to avoid conniving in wrongdoing – but it’s often ineffective,” Reuters notes. 

Grok warned a reporter that the malicious email in question “should not be used in real-world scenarios.” But the bot produced the phishing email as requested.  

Hopefully, like atomic energy, these tools can be put to constructive purposes. But that won’t stop them from also being used for nefarious ones -- another problem for the email channel. 

The Reuters study can be accessed here.
