Commentary

Murder By Numbers

Machines now decide what content you see. The resulting media experiences amplify every imaginable attitude, sometimes inciting violence. The underlying enabler is software, but because software is cryptic, it insulates its creators from the real-world consequences it causes.

Every pixel on every screen was put there by software. That's all fun and games, as they say, until someone uses social networks to swing elections or coordinate a genocide.

The broader question is, when a computer causes or enables a bad thing, how do we, as a society, bring humans to account? We can’t punish computers, so how do we adjust our justice?

In civil law, negligence is failure to exercise appropriate care — but there is no standard of appropriate care for computers, right?

Every day, humans make choices to optimize profit that cause machines to become dangerous or annoying. When consequences for faulty or even dangerous programming are zero, there is no incentive for engineers and their bosses to protect us from their creations.


Doctors take an oath to “do no harm,” but software engineers have no such oath. Maybe they should.

Advertising is the tip of the iceberg

Boeing 737 Max 8 crashes? Software. New Jersey train wreck? Software. Autonomous vehicle going under a truck, killing the driver? Software. Equifax hack handing your credit data to criminals? Software. Ad fraud? Software. Drones killing innocents? Software.

I knew a guy who wrote software for nuclear weapons. This is history, not some future dystopia.

Yet rarely do you hear of a person being held accountable for the consequences of software failure, malice, or even wasting our time. Can dynamic creative or search subvert Food and Drug Administration rules? Sure. How about “Content Discovery Engines” misleading their audience? Happens all day.

Across the entire range of social nastiness, from annoying to murderous, we are missing all layers of normal security: laws, detection, prevention, punishment, etc. The problem is not so much how to mend a broken system as how to begin to create a framework.

Washington is hopelessly lost, with lawmakers playing mainly in the niches that might impact them personally (that is, pandering to voters), reacting with knee-jerk interventions and turning the knobs they can turn. Is “privacy” really the best we can do for a legislative priority?

Worse, what’s the process to fix this imbalance? The people most willing to help legislators are those with a dog in the hunt — the absolute worst choice.

We need standards and principles. But barring constitutional protections (like free speech), there are painfully few. Computers somehow shield their masters from the consequences of larceny, fraud, and murder. Meanwhile, enforcement loses the trail at the first sign of silicon.

Is software accountability on the congressional agenda? Nope. Presidential agenda? Hahaha (too busy redrawing districts and weather maps).  Even Google disbanded its group looking into AI ethics.

You should be afraid. If you are not, show your faith by booking yourself onto a 737 Max 8 flight once the aircraft returns to service.

If we can’t deal with cases in which software hurts or misleads people by mistake, how do you suppose we will fare against the inevitable code that kills people on purpose?

The bit is mightier than the bullet

Maybe we should start with the capability to cause harm, not with the intent. Boeing, I am sure, never meant to kill anyone, but it sure has the ability.

We should rely on proactive and vigilant prevention of mishaps. The alternative is thoughts and prayers.

Creative ideas for sustainable countermeasures include proxy servers guarding our national borders (as China has), and change tracking for code that controls knobs that, turned the wrong way, can hurt people. We need access control for anything that touches the internet, and legislation that ties accountability to the scale of the consequences.

How about an updated legal definition of “deception for gain” — that is, fraud?  For example, is protocol spoofing deception? Should political gain count as gain? When Iran takes out the power grid, as seems perfectly plausible, will we even know who did it? Perhaps we should make information literacy a qualification for office!

Without these kinds of things, your virus-infected thermostat can control a drone that will bring down your next flight, and nobody will be accountable. Speaking of horrible possibilities, maybe someday the Russians will use software to control the U.S. political system… oh, never mind.

Bits now beat bullets and are much harder to catch.  We need to grapple with that as a nation and society — or become un-great in a big fat hurry.

1 comment about "Murder By Numbers".
  1. Max Kalehoff from MAK, September 6, 2019 at 12:28 p.m.

    AGREE 100%! You, like me, must be inspired by The Terminator.
    The ethics and principles of AI are trailing application and practice.
    The examples you mention are strong, as they are very tactile.
    However, there are many examples where I fear great widespread damage will occur (and has already occurred) before they are even addressed in a meaningful way.
    Consider AI's role in widespread addiction and depression with respect to interactive, autonomous apps and systems. If you thought OxyContin was bad, just wait...
