After all, its behavior has been egregious for ages. The company lied about how much drivers would make. It lied about its safety fees. It investigated critics and threatened a female journalist who dared to speak negatively about it. In New Zealand, where I live, Uber has encouraged drivers to break the law without guaranteeing it would pay their fines or provide legal cover.
So the past few weeks -- which have included Susan Fowler’s explosive account of systemic sexism at Uber, a potentially devastating lawsuit from Waymo, and an unsympathetic video of Travis Kalanick fighting with a driver -- have seemed like a well-deserved comeuppance.
It’s satisfying to point the finger at Uber precisely because its actions are so brazen. But doing so also carries a risk: the risk of distracting us from the more subtle ways technology impacts our lives, and the profound ethical implications thereof.
Luckily, people are thinking about this. People like Abe Gong, chief data officer at Aspire Health, who recently gave a powerful talk about ethics for algorithms. These algorithms, he argues, represent gatekeepers to just about everything we want to do in life -- whether it’s renting a house, getting a car loan, or making parole -- and the way they’re designed has real-world consequences for the real-world people on the other end.
But it’s not just algorithms we should be thinking about. What are the ethics of Airbnb driving up rental prices in areas with lots of listings? What are the ethics of the gig economy? What are the ethics of the increasing income inequality felt so intensely in Silicon Valley, the home of technological innovation? What are the ethics of us using our exponentially expanding capabilities to create apps for the affluent, while millions suffer or starve or flee from the conflict in their homelands?
These are important questions, and they deserve robust discussion. Three days ago, a friend of mine started a thread in a Google group with this provocation: “What is our vision of a ‘better’ civilization? What are the foundations of what we believe to be good? What are our values? When we talk about ‘impact’ and ‘transform’, what do we mean? What kind of impact is positive, and why? What do we envision as a desirable future for millions of people in a specific reality and a specific culture?”
Seventy-seven replies later, the conversation is going strong. It turns out that not everybody measures success exclusively by how much capital you raise. Not everybody thinks the best use of new technologies is to create a startup with a good exit strategy. And not everybody wants to be the next Steve Jobs; some want to be the next Malala Yousafzai.
Abe Gong proposes that we ask the following four questions when designing algorithms:

1. Are the statistics solid?
2. Who wins? Who loses?
3. Are the changes in power structures helping?
4. How can we mitigate harms?
He also proposes that we make ethics reviews a standard part of algorithm development -- so much the norm that it would be weird not to do one.
It’s not only algorithms that need ethics reviews. We need them throughout product design, as well as in marketing, sales, HR, finance — right up to the very top of the organization. What are we in business for? Who wins and who loses? What is the impact on power structures? And how can we mitigate harm?
It’s not just the programmers who need to care. Ethics are for everyone. And it’s time for everyone to get involved.
An excerpt from a recent, anonymized email conversation:
On Mar 13, 2017, at 2:10 PM, FOUNDER wrote:
Hi Esther,
Your involvement in entrepreneurship and startups along with your extensive experience in supporting and advising early stage companies really appeal to our new AI startup, and I think your insight could be quite helpful. Our startup, FOUNDER Corp., predicts founder success. We generate ‘founder signals’ that help angel investors and early stage VCs pick successful startups as early as their founding. This generates [XXX times] the typical early stage returns, based on our simulations. FOUNDER is the result of 12 months research as part of [a graduate thesis].
We’d really like to get your feedback on how this tool can be more useful to you. Is this something you’d like to learn more about?
[Following another well-known algorithm: when you want money, ask for advice; when you want advice, ask for money.]
I wrote back:
Hi FOUNDER -
the thing that gives me the willies about this is that it may pick up signals of who (1) gets funded by mostly non-black, successful VCs and (2) succeeds, based on history, rather than on who *could* succeed if they *were* funded by VCs.
There’s a lot of discussion around this issue (specifically and more generally), and so I’m curious how you are thinking about that.
Esther
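Esther's worry can be made concrete with a toy simulation. The following sketch is purely illustrative -- every name, group label, and number is invented -- but it shows the mechanism she's describing: if two groups of founders have identical potential, yet historical funders favored one group, then a "founder signal" learned naively from who got funded will track group membership, not who could have succeeded.

```python
# Hypothetical illustration: a model trained on "who historically got
# funded" learns the biases of past funders, not founders' true potential.
# All groups, thresholds, and rates here are invented for the sketch.

import random

random.seed(42)

def simulate_founder():
    """Each founder has a true potential, independent of group."""
    group = random.choice(["in_network", "out_of_network"])
    potential = random.random()  # uniform on [0, 1), same for both groups
    # Historical funders set a lower bar for in-network founders,
    # regardless of potential.
    threshold = 0.3 if group == "in_network" else 0.7
    funded = potential > threshold
    return group, potential, funded

founders = [simulate_founder() for _ in range(10_000)]

def funding_rate(group):
    """The naive 'founder signal': historical funding rate per group."""
    rows = [f for f in founders if f[0] == group]
    return sum(1 for _, _, funded in rows if funded) / len(rows)

def avg_potential(group):
    """True average potential per group -- the thing we actually care about."""
    rows = [p for grp, p, _ in founders if grp == group]
    return sum(rows) / len(rows)

print(f"in-network funded rate:        {funding_rate('in_network'):.2f}")
print(f"out-of-network funded rate:    {funding_rate('out_of_network'):.2f}")
print(f"avg potential, in-network:     {avg_potential('in_network'):.2f}")
print(f"avg potential, out-of-network: {avg_potential('out_of_network'):.2f}")
```

The funding rates diverge sharply while average potential is identical across groups, so any model fit to the `funded` label will reward in-network membership. That is exactly the failure mode of predicting "who succeeds, based on history" rather than "who could succeed if they were funded."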
Schadenfreude: the German word for taking pleasure in other people's pain. Have you seen Paul Ryan's face every time he talks about cutting benefits for people who desperately need them? Knowing all this about Uber, why does entertainment always show people taking an Uber instead of a taxi or cab? The same goes for rental properties: do you want strangers in and out of the place next door, or down the street, in places with no qualifications or inspections certified for rentals? There are too many reasons why both of these, and others like them, will erode the ethical treatment of anything living or dead -- and the schadenfreudes are making sure we don't learn about it.
Hi Esther,
Thanks so much for sharing this story -- it's exactly the kind of thing I'm referring to. If we develop algorithms to encode our current behavior, we're encoding all the biases associated with it.
Best,
Kaila
Thank you, Kaila, you made my day!
Maybe the only disruption we really need is companies and people doing things right instead of saying that what they do is right. That would be revolutionary :)