Can we really believe Blix Inc.’s claim that its new generative email tool, GEM AI, can help users “leverage the power of AI to save valuable time while enjoying a highly personalized experience,” as co-founder Ben Volach puts it?
Probably not. We don’t doubt that BlueMail’s tool is as good as or better than any other in this space, but our inbox is filled with promotions for AI technology that can write your email, your article or your master’s thesis.
Spare us the puffery.
Microsoft has made itself a laughingstock with its foray into ChatGPT; its chatbot has been sending “unhinged messages” to people, according to The Independent. And there are signs that some of these new tools are not only ill-conceived, but dangerous.
For example, researchers at Check Point say bad actors are already trying to use ChatGPT to conduct phishing campaigns and spread malware, according to CNET. Writer Bree Fowler reports that “the free artificial intelligence tool that's taken the world by storm didn't have a problem writing a very convincing tech support note addressed to my editor asking him to immediately download and install an included update to his computer's operating system.”
CNET should know about the downside of this technology.
The CNET team had launched “a test using an internally designed AI engine -- not ChatGPT -- to help editors create a set of basic explainers around financial services topics,” CNET Editor-In-Chief Connie Guglielmo wrote in a blog post.
Someone cited a factual error (“rightly,” Guglielmo acknowledges) and the team performed a full audit. Some stories required “correction, with a small number requiring substantial correction and several stories with minor issues such as incomplete company names, transposed numbers or language that our senior editors viewed as vague,” Guglielmo reports.
Worse, an AI-driven article in the Arena Group’s Men’s Journal titled “What All Men Should Know About Low Testosterone" was riddled with factual errors.
Bradley Anawalt, the chief of medicine at the University of Washington Medical Center, reviewed the article and told Futurism that it contained “persistent factual mistakes and mischaracterizations of medical science that provide readers with a profoundly warped understanding of health issues.”
Maybe these sorts of problems will be straightened out in time. But we hearken back to what the grand old direct mail copywriters like Frank Johnson and Bill Jayme said.
Jayme argued, correctly, that readers have to be rewarded for their reading time.
That requires a person-to-person human connection, something that seems to be missing from ChatGPT. It also takes craft, a certain creative flair and a sense of humor to convince readers. As Frank Johnson said, “You tell funny stories, you put in funny pictures, you do any g--d----d thing you can to keep them reading.”

Let’s try to write that into the ChatGPT code.