Commentary

Self-Driving Cars? We're Already In Danger

May 2017. I’m hosting a panel on technology and inequality with three extraordinary humans: one bringing healthcare to rural New Zealand, one making data visually accessible to anyone, one working on rape crisis counseling in sub-Saharan Africa. All three are furious at the way vast swathes of the world are shut out of opportunity, indignant at our collective inability to act faster and experiment more broadly with emerging technologies to make the world a better place for everyone.

Someone asks a question: “Aren’t you afraid you might do harm?”

“OF COURSE WE’LL DO HARM!” comes the thundering response. “We definitely won’t get it right! But are you suggesting the current system isn’t doing harm?”

Take healthcare in rural communities. Isn’t that guy afraid he might do harm by providing remote services instead of setting up a traditional hospital? He might. But then again, according to the Journal of Patient Safety, up to 400,000 people die in American hospitals from preventable errors -- every year.

We’re already doing harm. Should we let the fear of doing harm prevent us from trying to do better?

March 2018. 49-year-old Elaine Herzberg, walking her bicycle, steps out into the street in Tempe, Arizona, where she is hit by a self-driving Uber vehicle. She dies on the way to the hospital.

Unlike the fatal Tesla Autopilot crash in 2016 -- which was met with a global shrug -- the Uber death is met with global outrage.

In many ways, the outrage is fair. Uber is notoriously blithe about safety.

According to the New York Times, as of March, Uber was struggling to meet a target of one human intervention -- taking over from the robot for safety reasons -- per 13 miles traveled. To put that in context, Waymo (formerly Google’s self-driving car project) averages one human intervention per 5,600 miles traveled.

Bloomberg reports that, because Uber was using its own software, it disabled Volvo’s collision-avoidance system, which could have served as a fail-safe.

And the video of the crash shows the safety driver looking away.

So here’s the deal: I absolutely support minimum safety standards. If you’re gonna have cars driving themselves on public streets, they need to be able to do a lot more than 13 miles without someone holding their hand. If you’re gonna disable a system that could provide redundancy to your other safety measures, then you need to be held accountable. Just because something is “disruptive” doesn’t mean it’s awesome.

But here’s the other deal: it’s preposterous to complain, as Wired’s Aarian Marshall did, that he has somehow ended up “in a living lab for self-driving tech,” despite the fact that he “didn’t sign any forms or cast any votes.”

It’s preposterous because we are all already in a default living lab. We are experimenting with large-scale migration to cities. With massive social media use. With eight-hour-a-day desk jobs. With industrial food systems. And yet we didn’t sign any forms or cast any votes.

The most insidious thing about the default living lab is precisely that we don’t call it that. We pretend that our real world is not an experiment. Only new and shiny things are “experiments.”

And of course the current system is doing harm. In 2016, nearly 6,000 pedestrians were killed by human-driven vehicles in the U.S. alone. That’s around one every 90 minutes, every single day.

The Times article has a striking photo of the National Transportation Safety Board investigators examining the Uber car after last week’s crash. They, Uber, Tesla, Waymo -- and every single team working on this stuff -- will be studying what went wrong, adjusting systems, improving capability, and generally making sure this particular failure will never happen again.

When’s the last time that happened in the living lab we got by default?
