Commentary

What Would Happen If We Let AI Vote?

In his bestseller “Homo Deus: A Brief History of Tomorrow,” Yuval Noah Harari writes that AI might mean the end of democracy. His reasoning for that idea comes from an interesting perspective: how societies crunch their data.

Harari acknowledges that democracy might have been the best political system available to us -- up to now. That’s because it relied on the wisdom of crowds.

The hypothesis operating here is that if you get enough people together, each with different bits of data, you benefit from the aggregation of that data.

Theoretically, if you allow everyone to vote, the aggregated data will guide the majority to the best possible decision.

Now, there's a truckload of "yeah, buts" in that hypothesis, but it does make sense. If the human ability to process data were the single biggest bottleneck in making the best governing decisions, distributing the processing among a whole bunch of people was a solution. Not the perfect solution, perhaps, but probably better than the alternatives.
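For readers who want to see the arithmetic behind that hypothesis, here is a minimal Python sketch (not from Harari or this column; the 55% accuracy figure and the function names are illustrative assumptions) of why aggregation helps -- essentially the Condorcet jury theorem: if each voter is independently right just a bit more often than chance, the majority verdict becomes far more reliable than any single voter.

# A minimal sketch, not from the article, of the aggregation idea above:
# voters who are individually right only slightly more often than chance
# produce a majority that is right far more often than any one of them.
import random

def majority_is_correct(num_voters, individual_accuracy):
    # One simulated vote: count the voters who happen to pick the better option.
    correct = sum(random.random() < individual_accuracy for _ in range(num_voters))
    return correct > num_voters / 2

def estimate_majority_accuracy(num_voters, individual_accuracy, trials=2_000):
    # Fraction of simulated votes in which the majority got it right.
    wins = sum(majority_is_correct(num_voters, individual_accuracy) for _ in range(trials))
    return wins / trials

if __name__ == "__main__":
    # Hypothetical 55%-accurate voters: one voter is right 55% of the time,
    # but the majority's accuracy climbs toward certainty as the crowd grows.
    for n in (1, 11, 101, 1001):
        print(n, "voters:", round(estimate_majority_accuracy(n, 0.55), 3))

The numbers here are hypothetical, but the shape of the result is the point: the bigger and more independent the crowd, the better the aggregated decision -- which is the bet democracy makes.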

As Winston Churchill said, "Democracy is the worst form of government except for all those other forms that have been tried from time to time." So if we look back at our history, democracy seems to emerge as the winner.

But the whole point of Harari’s book is to look forward. It is, he promises, “A Brief History of Tomorrow.” And that tomorrow includes a world with AI, which blows apart the human data-processing bottleneck: “As both the volume and speed of data increase, venerable institutions like elections, parties and parliaments might become obsolete -- not because they are unethical, but because they don’t process data efficiently enough.”

The other problem with democracy is that the data we use to decide is dirty. Increasingly, thanks to the network effect anomalies that come with social media, we are using data that has no objective value, but is simply the emotional effluent of ideological echo chambers.

This is true on both the right and left ends of the political spectrum. Human brains default to using available and easily digestible information that happens to conform to their existing belief schema. Thanks to social media, there’s no shortage of this severely flawed data.

So if AI can process data exponentially faster than humans, can analyze that data to make sure it meets some type of objectivity threshold, and can make decisions based on algorithms that are dispassionately rational, why shouldn’t we let AI decide who should form our governments?

Now, I pretty much guarantee that many of you, as you’re reading this, are saying that this is BS -- that it means humans surrendering control in the most important of arenas.

But I must ask in all seriousness, why not? Could AI do worse than we humans do? Worse than we have done in the past? Worse than we might do again in the very near future?

These are exactly the types of existential questions we should ask when we ponder our future in a world that includes AI.

Our hubris in believing we're the best choice to be in control is no coincidence. As Harari admits, the liberal humanist view -- that we have free will and should control our own future -- was really the gold standard. Like democracy, it wasn't perfect, but it was better than all the alternatives.

The problem is, there’s now a lot of solid science that indicates our concept of free will is an illusion. We are driven by biological algorithms that have been built up over thousands of years to survive in a world that no longer exists. We self-apply a thin veneer of rationality and free will at the end to make us believe that we were in control and meant to do whatever it was we did.

What’s even worse, when it appears we might have been wrong, we double down on the mistake, twisting the facts to conform to our illusion of how we believe things are. But we now live in a world where there is -- or soon will be -- a better alternative, without the bugs that proliferate in the biological OS that drives us.

As another example of this impending crisis of our own consciousness, let’s look at driving. Up to now, a human was the best choice to drive a car. We were better at it than chickens or chimpanzees.

But we are at the point where that may no longer be true. There is a strong argument that as of today, autonomous cars guided by AI are safer than human-controlled ones. And if the jury is still out on this question today, it is certainly going to be true in the very near future.

Yet we humans are loath to admit the inevitable and give up the wheel. It’s the same story as making our democratic choices.

So let’s take it one step further. If AI can do a better job than humans in determining who should govern us, it will also do a better job in doing the actual governing. All the same caveats apply.

When you think about it, democracy boils down to various groups of people pointing the finger at the candidates chosen by other groups and insisting that those candidates will make more mistakes than their own choice.

The common denominator is this: Everyone is assumed to make mistakes. And that is absolutely the case. Right or left, Republican or Democrat, liberal or conservative, no matter who is in power, they will screw up. Repeatedly.

Because they are, after all, only human.

2 comments about "What Would Happen If We Let AI Vote?"
  1. Ben B from Retired, January 31, 2024 at 8:01 p.m.

    I don't think AI would be good as a leader, in my opinion, and it could start a war with one major screw-up. I'm not excited for the rematch of Biden vs. Trump 2.0. I wish the GOP would've picked someone else in the primary, but it's all over -- Haley isn't making it past Super Tuesday next month. Maybe an AI Nikki Haley could've done better; she is better than the two evils that are Biden and Trump. I'm voting third party, and that isn't the Green Party or RFK Jr. -- and if he runs as a Libertarian, I'm not voting Libertarian. I'll write in The Rock, Ronald Reagan, John McCain, maybe George W. Bush, if I don't like any of the third-party candidates running for President.

  2. Gian Fulgoni from 4490 Ventures, February 5, 2024 at 8:47 p.m.

    Hi Gord: Long time no see. Hope all is well. The problem with what you suggest is that the data driving the AI systems is going to be 100% based on the Internet. And we all know how much nonsense there is out there. I'd rather depend on real people, even if they are partly influenced by the Internet. At least we can be assured of some amount of common sense. The alternative is far too scary to even contemplate.
