This past weekend I listened to a radio call-in show about AI. The question posed was this: Are those who use AI regularly achievers or cheaters?
A good percentage of the conversation focused on AI in education, especially at the post-secondary level. Educators worried about being able to detect the use of AI to help complete coursework, such as the writing of papers. Many callers -- all of whom were well north of 50 years old -- bemoaned the fact that students today don't understand the fundamental concepts they're being taught because they're using AI to complete assignments.
A computer science teacher explained why he teaches obsolete coding to his students -- it helps them understand why they're writing code at all. What is it they want the code to do? He can tell when his students are using AI because they submit code that is well beyond their abilities.
That, in a nutshell, sums up the problem with our current thinking about AI. Why are we worried about trying to detect the use of ChatGPT by a student who’s learning how to write computer code? Shouldn’t we be asking, instead, why we need humans to learn coding at all, when AI is better at it? Maybe it’s a toss-up right now, but it’s guaranteed not to stay that way for long. This isn’t about students using AI to “cheat.” This is about AI making humans obsolete.
As I was writing this, I ran across an essay by computer scientist Louis Rosenberg. He is worried that those in his circle, like the callers to the show I was listening to, “have never really considered what life will be like the day after an artificial general intelligence (AGI) is widely available that exceeds our own cognitive abilities.”
As I said, what we use AI for now is a poor indicator of what AI will be doing in the future. To use an analogy I have used before, it’s like using a rocket to power your lawnmower.
But what will life be like when, in a somewhat chilling example put forward by Rosenberg, “I am standing alone in an elevator -- just me and my phone -- and the smartest one speeding between floors is the phone?”
It’s hard to wrap your mind around the possibilities. One of the callers to the radio show was a middle-aged man who was visually impaired. He talked about the difference it made when he got a pair of Meta Glasses last Christmas. Suddenly, his world opened up. He could make sure the pants and shirt he picked out to wear were in colors that matched. He could see if his recycling had been picked up before he made the long walk down the driveway to pick up the bin. He could cook for himself because the glasses could tell him what was in the boxes he took off his kitchen shelf. AI gave him back his independence.
I believe we're on the cusp of multiple AI revolutions. Healthcare will take a great leap forward when we lessen our reliance on human experts for advice. In Canada, general practitioners are in desperately short supply. Combining AI with the leaps being made in biomonitoring built into wearable technology could help us live longer, healthier lives. I hope the same is true for dealing with climate change, agricultural production, and other existential problems we're currently wrestling with.
But let’s back up to Rosenberg’s original question: What will life be like the day after AI exceeds our own abilities? The answer to that, I think, depends on who is in control of AI on the day before. The danger here is more than just humans becoming irrelevant. The danger is which humans are determining the future direction of AI before AI takes over the steering wheel and determines its own future.
For the past seven decades, the most pertinent question about our continued existence as a species has been this: "Who is in charge of our combined nuclear arsenals?" But going forward, a more relevant question might be "Who is setting the direction for AI?" Who is setting the rules, coming up with safeguards, and determining what data the models are trained on? Who determines what tasks AI takes on? Here's just one example: who decides whether AI gets a say in when nuclear warheads are launched?
As I said, it’s hard to predict where AI will go. But I do know this. The general direction is already being determined. And we should all be asking, “By whom?”