The Velvet Sundown band fooled a lot of people, including Spotify fans and Rolling Stone.
The band suddenly appeared on Spotify several months ago, with full albums of Americana-styled
rock, and more than one million people streamed its songs – except there was no band. The songs, the album art, the band’s photos – it was all generated by AI.
When you know this and relisten to the songs, you swear you would never have been fooled. Those now in the know say the music is formulaic, derivative and uninspired. Yet many of us were taken
in by this AI hoax, or what the “band” now calls itself – “a synthetic music project guided by human creative direction and composed, voiced and visualized
with the support of artificial intelligence.”
Formulaic. Derivative. Synthetic. We mean these as criticisms. But they are accurate descriptions of exactly how AI works. It is synthesis by formulas (or algorithms) that parse billions or trillions of data points, identify patterns and derive the finished product from those patterns. That is AI’s greatest strength… and its biggest downfall.
The human brain, on the other hand, works quite differently.
Our biggest constraint is the limit of our working memory. When we analyze disparate data points, the number of available slots in our temporary memory bank can be as low as the single digits. To function cognitively beyond this limit, we have to do two things: “chunk” the data points into mental building blocks and code them with emotional tags. That is the human brain’s greatest strength… and our biggest downfall.
What the human brain is best at is what AI is unable to do. And
vice versa.
Humans now do something called “cognitive off-loading.” If a task looks like drudgery, we hand it to ChatGPT. This is the slogging mental work that our brains are not particularly well-suited to. That’s probably why we hate doing it – the brain is trying to shirk the work by tagging it with a negative emotion (brains are sneaky that way).
But by off-loading, we short-circuit the very process required to build that uniquely human expertise.
Writer, researcher and educational change advocate Eva
Keiffenheim outlines the potential danger for humans who
“off-load” to a digital brain. We may lose the sole advantage we can offer in an artificially intelligent world: “If you can’t recall it without a device, you haven’t
truly learned it. You’ve rented the information. We get stuck at ‘knowing about’ a topic, never reaching the automaticity of ‘knowing how.’”
For
generations, we’ve treasured the concept of “know-how.” Perhaps, in all that time, we forgot how much hard mental work was required to gain it. That could be why we are quick to
trade it away now that we can.
Malcolm Gladwell called that know-how the “10,000-hour rule.” For humans to add any value, they must put in the time. There are no shortcuts.
A few posts back, I wrote about a less-than-impressive experience using an AI tool to build a website – in my attempt, the AI wasn’t very good at anticipating how a human might interact with the site. I ended that post musing about what role humans might play as AI evolves and becomes more capable. One possible answer, already in use, is the “humans-in-the-loop” approach. It plugs the humanness that sits in our brains into the equation, letting AI do what it’s best at while humans provide the spark of intuition or the gut checks that currently cannot come from an algorithm.
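To make the idea concrete, here is a minimal sketch in Python of what such a loop might look like. Everything in it – the function names, the review prompt, the three-round limit – is hypothetical and stands in for whatever AI tool and review process you actually use; the point is only to show where the human sits in the loop.

```python
# A minimal, hypothetical sketch of a "humans-in-the-loop" workflow.
# generate_draft() is a stand-in for whatever AI service you use;
# nothing here reflects a specific vendor's API.

def generate_draft(prompt, feedback):
    """Placeholder AI call: drafts content from a prompt plus any
    accumulated human feedback."""
    notes = "; ".join(feedback) if feedback else "none"
    return f"[AI draft for: {prompt} | feedback applied: {notes}]"

def human_review(draft):
    """The human-in-the-loop step: a person applies the gut check the
    algorithm cannot, either approving the draft or saying what is off."""
    print(f"\nProposed draft:\n{draft}")
    if input("Approve? (y/n): ").strip().lower() == "y":
        return True, ""
    return False, input("What should change? ")

def run_loop(prompt, max_rounds=3):
    feedback = []
    for _ in range(max_rounds):
        draft = generate_draft(prompt, feedback)
        approved, note = human_review(draft)
        if approved:
            return draft       # the human signs off; the AI did the heavy lifting
        feedback.append(note)  # human intuition steers the next pass
    return None                # never converged; a human takes over entirely

if __name__ == "__main__":
    result = run_loop("landing page copy for a small bakery")
    print("\nFinal result:", result or "handed back to a human")
```

The design choice is simple: the algorithm proposes and the person disposes. All of the judgment lives in human_review, not in the model, which is exactly the division of labor the approach depends on.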
These use cases, arguably now more than ever, should come from a human with real expertise. As an online user-experience researcher for major players over many years, I accumulated that expertise over time – talking to users about their goals and intentions, analyzing online interactions, coding eye-tracking data – until I could look at a website or a search results page and make a pretty accurate gut-call prediction of how a user would interact with it.
There are no shortcuts. Or – at least – there never used to be.