
On the bright side, the emotionally charged, week-long, on-again/off-again drama surrounding Sam Altman's ouster from and return to OpenAI proves just how human the people leading the development of artificial intelligence actually are.
On the downside, it demonstrates how precariously human they also are.
“It’s all very juicy. But this drama should also be raising larger questions, far beyond one company’s internal hirings and firings, including: Who are the people making the decisions that will determine so much of our technological future?” author Jill Filipovic wrote in an opinion column Tuesday on CNN.com, adding: “What guiding principles are they using to make those decisions? And how should other institutions – governments, non-tech industries, global alliances, regulatory bodies – rein in the worst excesses of potentially dangerous AI innovators?”
The very juicy Sam Altman/OpenAI/Microsoft/OpenAI drama has been entertaining, for sure, but it has also been erratic enough to leave this non-regulatory body concerned about what’s next.
And while I did take some comfort in the fact that OpenAI officially acknowledged Altman’s return -- for now -- with a heart emoji on its X account, it would have been nice to see a more substantive explanation of what actually happened -- and why it won’t happen again.
Even before Altman’s boardroom drama unfolded this week, I had been thinking about the irrational human desires accelerating the development of next-generation artificial intelligence applications -- at least on the surface, if not in the actual engineering, security and safety protocols.
Mainly, I’ve been thinking about Elon
Musk’s posts about his Grok AI chatbot, which he says is modeled after the sci-fi satire “The Hitchhiker’s Guide to the Galaxy” but “has a rebellious streak.”
Rebellious streak? Just what I don’t want in a technological superpower. Has anyone seen “The Terminator”?
Over the past quarter century of technological hyper-acceleration, I have often thought about the science-fiction role models that have influenced and inspired the people building it, and I’ve always fantasized that those builders were guided more by Isaac Asimov (see his fictional rules of robotics below) and less by Douglas Adams or James Cameron.
I’m proposing that the institutions seeking to rein in the worst excesses of potentially dangerous AI innovators start with the three rules Asimov enshrined in his 1942 short story “Runaround,” and that those rules be codified as international law very soon.
The laws, which Asimov described as part of the fictional “Handbook of Robotics, 56th Edition, 2058 A.D.,” are:
