Sometimes I think tech companies use acronyms and cryptic names for new technologies to allow them to sneak game-changers in without setting off alarm bells. Take OpenAI for example. How scary does Q* sound? It’s just one more vague label for something we really don’t understand.
If I’m right, we do have to ask the question, “Who is keeping an eye on these things?”
This week I decided to dig into the whole Sam Altman firing/hiring episode a little more closely so I could understand if there’s anything I should be paying attention to. Granted, I know almost nothing about AI, so what follows is very much at the layperson level, but I think that’s probably true for the vast majority of us. I don’t run into AI engineers that often in my life.
So, should we care about what happened a few weeks ago at OpenAI? In a word – YES.
First of all, a little bit about the dynamics of what led to Altman’s original dismissal. OpenAI started with the best of altruistic intentions: to “ensure that artificial general intelligence benefits all of humanity.” That was an ideal, many would say a naïve ideal, that Altman and OpenAI’s founders imposed on themselves.
As Google discovered with its “Don’t Be Evil” mantra, it’s really hard to be successful and idealistic at the same time. In our world, success is determined by profits, and idealism and profitability almost never play in the same sandbox. Google quietly watered down the “Don’t Be Evil” motto until it virtually disappeared in 2018.
OpenAI’s nonprofit board was set up as a kind of internal “kill switch” to prevent the development of technologies that could be dangerous to the human race. That theoretical structure was put to the test when the board received a letter this year from some senior researchers at the company warning of a new artificial intelligence discovery that might take AI past the threshold where it could be harmful to humans. The board then did what it was set up to do, firing Altman, removing board chairman Greg Brockman, and putting the brakes on the potentially dangerous technology.
Then, Big Brother Microsoft (which has invested $13 billion in OpenAI) stepped in, and suddenly Altman was back. (Note: for a far more thorough and fascinating look at OpenAI’s unique structure and the endemic problems with it, read through Alberto Romero’s series of thoughtful posts.)
There were probably two things behind Altman’s ouster: the potential capabilities of a new development called Q*, and a fear that OpenAI would follow its previous path of throwing the technology out to the world without considering the potential consequences.
So, why is Q* so troubling?
Q* could be a major step closer to AI that can reason and plan. This moves us closer to the overall goal of artificial general intelligence (AGI), the holy grail for every AI developer, including OpenAI. AGI, by OpenAI’s own definition, means “AI systems that are generally smarter than humans.” Q*, through its ability to tackle grade school math problems, showed the promise of being artificial intelligence that could plan and reason. And that is an important tipping point, because something that can reason and plan pushes us forever past the boundary of a tool under human control. It’s technology that thinks for itself.
This should worry us because of Herbert Simon’s concept of “bounded rationality,” which holds that we humans are incapable of pure rationality. Because our processing power is limited, at some point we stop thinking endlessly about a question and settle for an answer that’s “good enough.” Emotions take over and make the decision for us.
But AGI throws those limits away. It can process exponentially more data at a rate we can’t possibly match. If we’re looking at AI through Sam Altman’s rose-colored glasses, that should be a benefit. Wouldn’t it be better to have decisions made rationally, rather than emotionally? Shouldn’t that be a benefit to mankind?
But here’s the rub. Compassion is an emotion, as are empathy and love. What kind of decisions do we come to if we strip those out of the algorithm, along with any type of human check and balance?
Here’s an example. Let’s say at some point in the future an AGI superbrain is asked the question, “Is the presence of humans beneficial to the general well-being of the earth?”
I think you know what the rational answer to that is.