Commentary

AI, Big Data And Ethics

by , Op-Ed Contributor, September 21, 2017
This post is a slightly edited version of an earlier AI Insider.


I have a confession: AI ethics makes my brain hurt. The issues and questions are that big. Never mind corporate behavior; people, governments, policies and global geopolitics will all evolve differently depending on how we answer those questions. My brain is hurting right now.

But if you’re a marketer planning to use AI, you’re going to hurt a lot more than your brain if you don’t start examining the ethical questions. If you don’t think through the issues and questions that pertain to you, you will end up damaging your brand’s reputation. As more and more people come to understand the Web-cookied, location-based, sensor-instrumented digital envelope in which they live their lives – and who is watching them – they will begin to ask questions themselves.

And if you’re using AI to power chatbots or digital ads, customers may be coming to you with their questions. As tech angel investor Esther Dyson told me in an interview at the time, “The advertising community has been woefully unforthcoming about how much data they’re collecting and what they're doing with it. And it’s going to backfire on them, just as the Snowden revelations backfired on the NSA.”

Some prescriptive advice that emerged from that work came from big data ethics thinker Kord Davis (he wrote the book, literally).

Davis recommends companies do three things: explicitly align corporate values with what they do and don’t do with big data and AI algorithms; openly discuss their policies relating to data privacy, personally identifiable customer information, and data ownership; and be prepared to have lots of internal disagreements, because ethics are highly variable, personal issues.

Meanwhile, Dyson says, “Ethics don’t change – circumstances change, but the same standards apply.” When I told her about Davis' idea to connect company values to big data/AI actions, she said, “Connecting company values to your big data activities is another way of saying the circumstances have changed but the same standards apply.” Touché!

Another writer and thinker on big-data ethics is Jonathan King, vice president of cloud portfolio management and strategy at Ericsson and a visiting scholar at Washington University School of Law in St. Louis. He and his writing partner, Neil Richards, a law professor and recognized expert on privacy and First Amendment law, advise you to focus on four areas:

Privacy: They say it isn’t dead, and it’s not just about keeping information hidden. “Ensuring privacy of data is a matter of defining and enforcing information rules – not just rules about data collection, but about data use and retention.”

Shared private information: King and Richards say you can share information and still keep it confidential. Again, this relies on the information rules mentioned above.

Transparency: They say, “For big data to work in ethical terms, the data owners (the people whose data we are handling) need to have a transparent view of how our data is being used – or sold.”

Identity: This is a really big brain-hurter. They say, “Big data analytics can compromise identity by allowing institutional surveillance to moderate and even determine who we are before we make up our own minds.”

That identity issue is the “My TiVo Thinks I’m Gay” problem writ large. Coincidentally, I ran into it during the time I worked on this project. I researched the Broadway play “Tales From Red Vienna,” but decided not to get tickets. For the next month, everywhere I went on the Internet, the “Red Vienna” ad stalked me. Marketers make jokes about the stalking nature of simple retargeting today, but there’s a creepiness to it that I don’t think people will ever shake.

What happens when AIs can predict much more intimate things about us, including what we may want to do or think next? How will people react? And, therefore, how will we want to handle such knowledge? We’re far from there yet, but the time to start asking those questions is now.

Similarly, how will we learn the things we don’t know we need to know if AI predicts our future based on past performance, limiting the real-life serendipity that we were all heir to before the digital envelope encapsulated us?

This is one of the big ethical issues marketers must explore, as it could cause people to evolve differently. Witness the disconnects that occurred in last year’s presidential election: It seems to me that a whole lot of people failed to learn things they didn’t know they needed to know.

Importantly, our work didn’t suggest any single, simple answers to any of these big questions. The point is for every organization to engage in open discussion and formulate policies that align its values with its AI/big data behavior, accounting for all four of the issues mentioned above.

Does your brain hurt yet?

1 comment about "AI, Big Data And Ethics".
  1. Craig Mcdaniel from Sweepstakes Today LLC, September 21, 2017 at 4:50 p.m.

    After writing this post, you might just get ads to buy the movie 2001 Space Odyssey DVD. I just hope someone remembers to program in good jokes into AI...lol
