One of the things that brings me delight is meeting people whose professions match their names. True story: we’re running an event next week, and our chef is called Grant Kitchen.
There’s a name for this: nominative determinism. And it’s a statistically significant effect. For example, people called Dennis are more likely to become dentists than we would expect.
Language matters.
Many years ago, someone lamented to me that “global warming” was the worst phrase we could have chosen. “Warming sounds nice, cozy,
you’re curled up by the fire. We should have called it ‘climate cancer.’”
She was right, of course. As much as we like to think of ourselves as rational actors, basing our decisions on logic, evidence and sound intuition, humans are actually rife with cognitive shortcuts and biases.
Language matters.
So when we call something “artificial
intelligence,” it matters. It shapes our perceptions, our assumptions, our mental shortcuts.
I’m fine with the artificial bit. It’s the intelligence I have an issue with.
Ex-OpenAI guy Leopold Aschenbrenner (who was fired for allegedly leaking information) recently tried to make the case that computational volume equals intelligence. He shared a chart showing the progression of generative AI since 2018, extrapolated out to 2028. The left-hand y-axis shows “effective compute.” The right-hand y-axis, presumably meant as an equivalent scale, shows the intelligence level of each version of GPT: GPT-2 is a “preschooler,” GPT-3 is an “elementary schooler,” GPT-4 (where we are now) is a “smart high schooler.”
Except… it’s not. Current AI models can do extraordinary, astonishing things: generate wide-ranging scenarios; turn emoji into highly detailed, realistic images; solve protein-folding challenges that have stumped scientists for decades.
But it is not smart, not in the way we think of smart. Certainly not in the way a preschooler is smart.
A preschooler may not do scenario planning, but may be a genius at getting an extra cookie. An elementary schooler may not be a highly skilled artist, but may be adept at navigating his mother’s alcoholism. A high schooler may not know how to fold a protein, but may have a detailed understanding of the nuances and intricacies of her social environment.
These things are not the same. Language matters.
This is an actual exchange I had with ChatGPT-4o, the “smart high schooler,” this week:
Me: how many rs in strawberry
ChatGPT: The word "strawberry" contains two "r"s.
Me: what is the third letter
ChatGPT: The third letter in the word "strawberry" is "r": strawberry
Me: what is the eighth letter
ChatGPT: The eighth letter in the word "strawberry" is "r": strawberry
Me: what is the ninth letter
ChatGPT: The ninth letter in the word "strawberry" is "r": strawberry
Me: how many rs in strawberry
ChatGPT: The word "strawberry" contains two "r"s.
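For the record, the word has three r’s, and the model itself just pointed at all three positions. A quick, purely illustrative check in Python (not how ChatGPT works internally, just plain character counting) makes the contradiction explicit:

word = "strawberry"
# Count the letter directly: the answer is 3, not 2.
print(word.count("r"))  # 3
# 1-indexed positions of each "r" -- the 3rd, 8th and 9th letters,
# the very positions ChatGPT identified above.
print([i + 1 for i, ch in enumerate(word) if ch == "r"])  # [3, 8, 9]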
ChatGPT and other LLMs and generative models can do extraordinary, astonishing things -- but what they cannot do is grok. They cannot truly, deeply understand what they are doing or saying.
There is no “intelligence” there.
Language matters.
Once we ourselves grok that what we’re dealing with is
not actually intelligence, we can set aside the idea of AI as a new species, as Mustafa Suleyman proposed in his recent TED talk. We can
appreciate it for what it really is: artificial enhancement.
AI enhances our photos. It enhances our ability to summarize. It enhances our skill at scenario planning and folding
proteins. It is like a robotic exoskeleton: it requires our own intentionality to work.
But at the end of the day, it’s not
intelligent -- and we are the ones who need to be accountable for what it does.