The following was previously published in an earlier edition of Marketing Insider.

Even in my world, which is nowhere near the epicenter of the technology universe, everyone is
talking about AI. And depending on who’s talking, it’s either going to be the biggest boon to humanity, or it’s going to wipe us out completely. Middle ground seems to be hard to
find.
I recently attended a debate about it at the local university. Two debaters argued for AI, and two argued against. I went into the debate somewhat worried. When I walked out at the
end of the evening, my worry was bubbling just under the panic level.
The “For” team had a computer science professor, Kevin Leyton-Brown, and a philosophy professor, Madeleine Ransom. Their arguments relied mainly on the promise of more leisure time, with AI freeing us from the icky jobs we’d rather not do. Leyton-Brown did make a passing reference to AI helping us solve the many, many wicked problems we face, but he never got into specifics.
“Relax!” seemed to be the message. “This will be great! Trust us!”
One half of the “Against” team was Bryce Traister, a professor in creative and critical studies. As far as I could see, he seemed mainly worried about AI replacing Shakespeare. He also seemed quite enamored with the cleverness of his own quips.
It was the other “Against” debater, Wendy Wong, a professor of political science, who was the only one to talk about something concrete I could wrap my head around. She has a book on data and human rights coming out this fall, and many of her concerns focused on that area.
Interestingly, the AI boosters all mentioned social media in their arguments. On this point, all the debaters were united: the impact of social media has been horrible. But the boosters were quick to say that AI is nothing like social media.
Except that it is. Maybe not in terms of the technology that lies beneath it, but in terms of the unintended consequences it could unleash, absolutely! Like
social media, what will get us into trouble with AI are the things we don’t know we don’t know.
I remember when social media first appeared on the scene. As with AI, plenty of evangelists lined up to say the technology would connect us in ways we couldn’t have imagined. We were redefining community, removing the physical constraints that had previously limited connection.
If there was a difference between social media and AI, it’s that I don’t remember the same doomsayers at the advent of social media. Everyone seemed to be saying, “This will be great! Trust us!”
Today, of course, we know better. No one warned us that social media would divide us in ways we never imagined, driving a wedge down the ideological middle of our society. There were no hints that social media could (and still might) short-circuit democracy.
Maybe that’s why we’re a little warier when it comes to AI.
We’ve already been fooled once.
I find that AI boosters share a similar mindset. They tend to come from the STEM (science, technology, engineering, and math) school of thought. As I’ve said before, these thinkers tend to mistake complex problems for complicated ones. They think everything is solvable if you just have a powerful enough tool and apply enough brain power. For them, AI is the Holy Grail: a powerful tool that potentially applies unlimited brain power.
But the dangers of AI are hidden in the roots of complexity, not complication, and that requires a different way of thinking. If we’re going to get some glimpse of what’s coming our way, I am more inclined to trust the instincts of those who think in terms of the humanities. Take, for example, Yuval Noah Harari, author of “Sapiens: A Brief History of Humankind.”
Harari recently wrote an essay in The Economist that may be the
single most insightful thing I’ve read about the dangers of AI: “AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or
images. AI has thereby hacked the operating system of our civilisation.”
This was the fear haunting me in my previous experiments with ChatGPT. Human brains operate on narratives.
We are hard-wired to believe them. By using language, AI has a back door into our brains that could bypass all our protective firewalls.
My other great fear is that the development of AI is
being driven by for-profit corporations, many of which rely on advertising as their main source of revenue. If ever there was a case of putting the fox in charge of the henhouse, this is it!
When it comes to AI, it’s not my job I’m afraid of losing. It’s my ability to sniff out AI-generated bullshit. That’s what’s keeping me up at night.