Even in my world, which is nowhere near the epicenter of the technology universe, everyone is talking about AI. And depending on who’s talking, it’s either going to be the biggest boon to humanity, or it’s going to wipe us out completely. Middle ground seems to be hard to find.
I recently attended a debate at the local university about it. Two were arguing for AI, and two were arguing against. I went into the debate somewhat worried. When I walked out at the end of the evening, my worry was bubbling just under the panic level.
The “For” team had a computer science professor, Kevin Leyton-Brown, and a philosophy professor, Madeleine Ransom. Their arguments seemed to rely mainly on AI creating more leisure time for us by freeing us from the icky jobs we’d rather not do. Leyton-Brown did make a passing reference to AI helping us solve the many, many wicked problems we face, but he never got into specifics.
“Relax!” seemed to be the message. “This will be great! Trust us!”
One half of the “Against” team was Bryce Traister, a professor of creative and critical studies. As far as I could see, he seemed to be mainly worried about AI replacing Shakespeare, and he seemed quite enamored with the cleverness of his own quips.
The other “Against” debater, Wendy Wong, a professor of political science, was the only one to talk about something concrete I could wrap my head around. She has a book on data and human rights coming out this fall, and many of her concerns focused on that area.
Interestingly, the AI boosters all mentioned social media in their arguments, and on this point all the debaters were united: the impact of social media has been horrible. But the boosters were quick to say that AI is nothing like social media.
Except that it is. Maybe not in terms of the technology that lies beneath it, but in terms of the unintended consequences it could unleash, absolutely! Like social media, what will get us into trouble with AI are the things we don’t know we don’t know.
I remember when social media first appeared on the scene. As with AI, there were plenty of evangelists lining up to say the technology would connect us in ways we couldn’t have imagined. We were redefining community, removing the physical constraints that had previously limited connection.
If there’s a difference between social media and AI, it’s that I don’t remember the same doomsayers at the advent of social media. Everyone seemed to be saying, “This will be great! Trust us!”
Today, of course, we know better. No one warned us that social media would divide us in ways we never imagined, driving a wedge down the ideological middle of our society. There were no hints that social media could (and still might) short-circuit democracy.
Maybe that’s why we’re a little warier when it comes to AI. We’ve already been fooled once.
I find that AI boosters share a similar mindset. They tend to be from the S.T.E.M. (science, technology, engineering and math) school of thought. As I’ve said before, these types of thinkers tend to mistake complex problems for complicated ones. They think everything is solvable, if you just have a powerful enough tool and apply enough brain power. For them, AI is the Holy Grail – a powerful tool that potentially applies unlimited brain power.
But the dangers of AI are hidden in the roots of complexity, not complication, and that requires a different way of thinking. If we’re going to get some glimpse of what’s coming our way, I am more inclined to trust the instincts of those who think in terms of the humanities. A thinker, for example, like Yuval Noah Harari, author of “Sapiens: A Brief History of Humankind.”
Harari recently wrote an essay in The Economist that may be the single most insightful thing I’ve read about the dangers of AI: “AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.”
In my previous experiments with ChatGPT, it was this fear that was haunting me. Human brains operate on narratives. We are hard-wired to believe them. By using language, AI has a back door into our brains that could bypass all our protective firewalls.
My other great fear is that the development of AI is being driven by for-profit corporations, many of which rely on advertising as their main source of revenue. If ever there was a case of putting the fox in charge of the henhouse, this is it!
When it comes to AI, it’s not my job I’m afraid of losing. It’s my ability to sniff out AI-generated bullshit. That’s what’s keeping me up at night.
I'll tell you why I'M worried about AI.
The Matrix. Terminator. Minority Report. Her. Spielberg's AI. 2001. Blade Runner. All this furor over AI has me thinking to myself, "Am I the only one who goes to the movies?"
Every time someone says, "And then... one day... the machines became sentient," that NEVER ends well.
AI taking over the world, LOL. I'm in the middle on this debate: we should embrace AI, to a point, because there are going to be pros and cons. There will be bad actors that use AI, and I hope they will be exposed for being bad actors; we just have to catch them. I think reporters working in broadcast should embrace AI, even if some TV station groups don't. I know Nexstar isn't embracing AI and is putting its head in the sand, which it shouldn't, and those that don't embrace AI will be left behind, in my opinion.
The articles I read often present two arguments. One is that AI makes a lot of mistakes (so it cannot be trusted), and the other is that it's an existential threat to humanity that will magically stop making mistakes. The two arguments seem intended to scare me, but the first one seems to cancel out the second one. When challenged with the conundrum, some of my friends say, "Oh, it'll just keep improving until it's perfect, just you wait." I'm not convinced. These stories seldom mention ZeroGPT as the antidote. I guess ZeroGPT is predicted to become less reliable while ChatGPT grows smarter and smarter, despite both tools sharing the same AI roots.
So much paranoia. I think it's because most stories are promulgated by lesser-skilled people, journalists, who are perhaps worrying the most about losing their jobs. Just because everyone is talking about AI doesn't mean everyone is right about the threat. Even the experts cannot be trusted because they, too, fear a threat to their human-based consultancy.