Remember Chris Hadfield? Of course you do. The Canadian astronaut’s rendition of “Major Tom” has been viewed over 25 million times. But today I’m more interested in his TED talk: “What I learned from going blind in space.”
Turns out that on his first-ever spacewalk, out of the blue, Hadfield suddenly went blind in his left eye. Professional that he is, he tried to keep working -- except that, because tears don’t fall in zero gravity, the stuff blinding his left eye mixed with his tears until it formed a big ball of gunk that just slid across his face into his other eye.
And then he was blind in both eyes.
He didn’t panic, though; he had been trained for this problem. He knew what happens when you lose your vision in zero gravity because he had practiced it, blindfolded at the bottom of a swimming pool or using virtual reality to simulate the real thing. The astronauts had been trained for a whole heap of things going wrong. As a result, they had an extremely accurate ability to assess the actual danger of a situation, regardless of the fear that situation might engender in someone who hadn’t gone through the training.
(Side note: the blindness was just the anti-fog solution that had gotten into his eyes. He was fine.)
As humans, we’re usually pretty terrible at distinguishing between fear and danger. We’re terrified of Ebola, which has killed around 12,000 people total, and not terrified of tuberculosis, which kills over 28,000 people every week. We’re terrified of sharks, which kill five people worldwide every year, and not terrified of mosquitoes, which kill over 650,000 people every year.
This distinction, between what we perceive as dangerous and what is actually dangerous, is an important one for the rollout of new technology.
When it comes to driverless cars, for example, people tend to think the automation is scary, even though human drivers are far more dangerous. Google’s driverless cars have logged a million accident-free miles, surely more than almost any human. But our misperception of the danger makes us unwilling to cede total control and likely instead to demand an override option where a human driver can take over if the automation goes awry.
It is an irony worthy of Shakespeare that a handoff between an autopilot and a live person -- the very thing we think we need to keep us safe -- creates one of the more dangerous scenarios, according to a recent HuffPo article: “Thrust back into control while going full-speed on the freeway, the driver might be unable to take stock of all the obstacles on the road, or she might still be expecting her computer to do something it can’t. Her reaction speed might be slower than if she’d been driving all along, she might be distracted by the email she was writing or she might choose not to take over at all, leaving a confused car in command. There’s also the worry that people’s driving skills will rapidly deteriorate as they come to rely on their robo-chauffeurs.”
It’s not just cars we need to consider. Yesterday, the FAA announced two partnerships to test beyond-line-of-sight drones. Currently, any drone flown commercially has to stay within sight of its operator; with beyond-line-of-sight drones, operators would instead use on-board cameras to navigate.
Note the implicit assumption there: that the desirable option is for a human to be, ultimately, in charge. I suspect, though, that just as with driverless cars, we may find that the fear and the danger are two separate things.
In 2012, road injury was the ninth leading cause of death worldwide, killing 1.3 million people. What, exactly, makes us think robocars need us? Surely it’s the other way around?