Commentary

Peeling Back The Ethical Onion Of Technology

There is an ever-increasing number of university courses on the ethics of data science and technology. Harvard, MIT and Stanford all offer them. And if you read through a typical syllabus, it seems focused on the big tech topics that raise obvious ethical questions: artificial intelligence, autonomous weapons, self-driving cars, job-stealing robots.

I admit, all those things worry me. But ethical dilemmas can be found in almost anything we do. Google is riddled with ethical traps. Facebook is full of them.

I believe there are both intended ethical questions and unintended ones. Drone strikes are an example of an intended ethical problem. The potential for evil is obvious to everyone. 

But the way Facebook is warping our inherent social drives may, in the long run, do more damage to us as a species. The same may be true for the way Google is potentially reducing our capacity for higher-level thinking. Unlike a drone strike, Google and Facebook are things we all use every single day. It’s these unintended ethical questions we have to start paying attention to.


But before we do that, we have to nail down a fundamental question. What is ethical to us? Let’s start with the classic definition of ethics, according to dictionary.com:  “the body of moral principles or values governing or distinctive of a particular culture or group.”

The word “ethics” comes from the Greek word “ethos”: “the fundamental character or spirit of a culture; the underlying sentiment that informs the beliefs, customs, or practices of a group or society; dominant assumptions of a people or period.”

This means that ethics are a moving target. They ebb and flow with the collective culture of the group. 

But is there such a thing as being inherently ethical, a nailed-down foundation of right and wrong? And, if there is, what do we build that foundation upon? What is ground zero in ethics?

Take, for example, the question of personal privacy. That would seem to be something rooted in ethics. Yet, as I’ve written before, the whole concept of privacy is fairly recent. It’s a moving target, and currently that movement is toward convenience and functionality — and away from the sanctity of our personal information. 

A recent article in the New York Times had a great quote on this very question: “The medical profession has an ethic: First, do no harm. Silicon Valley has an ethos: Build it first and ask for forgiveness later.”

Part of this trap of unintended consequences comes with the practice of agile development. Technology is simply racing forward faster than our society can possibly absorb it. We can absorb it on a personal, day-to-day level, and we do. We are addicted to technology.

One ethical dark side of technology that seems indisputable is the practice of building addictive time-wasters. This is the foe that former Google engineer Tristan Harris is taking on with the nonprofit initiative Time Well Spent. But what are the other, longer-term consequences of this adoption? What about technology’s impact on society — or our species, for that matter?

We are beginning to scratch the surface of the problem, but I fear we haven’t properly scoped the size of what we’re dealing with. According to the accreditation criteria of ABET (the Accreditation Board for Engineering and Technology), there are two student outcomes that deal, at least tangentially, with ethics. A student must have:

  1. An understanding of professional, ethical, legal, security and social issues and responsibilities.
  2. An ability to analyze the local and global impact of computing on individuals, organizations, and society.

But in order to have both of these abilities, you need an ethical baseline as a yardstick. You need to consider not only the intended but also the unintended consequences of technology. And the latter are almost impossible to foresee. This alone renders ABET’s attempt to instill ethical guidelines for developers and designers hopelessly futile. What line are we steering toward?

The medical profession has the advantage of a pretty clear baseline when it comes to the question of harm. It’s not difficult to tell, relatively quickly, whether someone’s health is better or worse.

But with technology, the harm is much less tangible, and much less immediate. The societal harm that comes from the adoption of a new technology might not be noticeable for a generation. By then, it’s too late. Pandora’s box has been opened. Compounding the challenge of determining what’s ethical is the very real possibility that our culture will have adapted to absorb the impact of the technology. What was once considered unethical will have become acceptable.

In the end, in looking for the ground zero of ethics, I keep coming back to Nassim Nicholas Taleb’s concept of antifragility. In an unpredictable environment, all you can predict is unpredictability. And when the unpredictable happens, will our use of technology make us — both as a socially driven species and as self-appointed stewards of the planet — more or less fragile?
