AI, Big Data And Ethics

The whole congressional ethics tragicomedy in the news for 15 seconds earlier this week put me in mind of some pretty deep research I did on behalf of a client a couple of years ago. Unfortunately, I can’t link you to it: the client decided not to publish the resulting analysis because they were not yet doing any of the things the research deemed necessary. Never mind that no one else was doing them then – and probably no one is today, either. In the time since, what we found has become even more applicable and important.

Our work focused on how, in the coming age where big data and machine-learning algorithms enable you to learn and infer profoundly invasive things about individuals, marketers – and companies in general – will need to openly explore where their ethical boundaries lie.

Before we get into it, I have a confession: AI ethics makes my brain hurt. The issues and questions are that big. Never mind corporate behavior; people, governments, policies and global geopolitics will all evolve differently depending on how we answer those questions. My brain is hurting right now.

But if you’re a marketer planning to use AI, you’re going to hurt a lot more than your brain if you don’t start examining the ethical questions. If you don’t think through the issues and questions that pertain to you, you will end up damaging your brand’s reputation. As more and more people come to understand the Web-cookied, location-based, sensor-instrumented digital envelope in which they live their lives – and who is watching them – they will begin to ask questions of their own.

And if you’re using AI to power chatbots or digital ads, customers may be coming to you with their questions. As tech angel investor Esther Dyson told me in an interview at the time, “The advertising community has been woefully unforthcoming about how much data they’re collecting and what they're doing with it. And it’s going to backfire on them, just as the Snowden revelations backfired on the NSA.”

Some prescriptive advice that emerged from that work came from big data ethics thinker Kord Davis (he wrote the book, literally).

Davis recommends companies do three things: explicitly align corporate values with what they do and don’t do with big data and AI algorithms; openly discuss their policies relating to data privacy, personally identifiable customer information, and data ownership; and be prepared to have lots of internal disagreements, because ethics are highly variable, personal issues.

Meanwhile, Dyson says, “Ethics don’t change – circumstances change, but the same standards apply.” When I told her about Kord’s idea to connect company values to big data/AI actions, she said, “Connecting company values to your big data activities is another way of saying the circumstances have changed but the same standards apply.” Touché!

Another writer and thinker on big data ethics is Jonathan King, vice president, cloud portfolio management and strategy at Ericsson and a visiting scholar at Washington University’s School of Law in St. Louis. He and his writing partner, Neil Richards, a law professor and recognized expert in privacy and First Amendment law, advise you to focus on four areas:

Privacy: They say it isn’t dead, and it’s not just about keeping information hidden. “Ensuring privacy of data is a matter of defining and enforcing information rules – not just rules about data collection, but about data use and retention.”
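King and Richards don’t prescribe an implementation, but one way to picture “information rules” is as an explicit, enforceable policy table rather than a vague promise. The sketch below is purely illustrative – the data categories, purposes, and retention windows are all hypothetical – and shows separate checks for use rules and retention rules, the two kinds they call out beyond mere collection:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical "information rules": for each category of personal data,
# which uses are permitted and how long records may be retained.
# Category names, purposes, and windows are invented for illustration.
RULES = {
    "email":    {"allowed_uses": {"service", "marketing"}, "retention_days": 365},
    "location": {"allowed_uses": {"service"},              "retention_days": 30},
}

def use_permitted(category: str, purpose: str) -> bool:
    """Use rule: is this purpose allowed for this data category?"""
    rule = RULES.get(category)
    return rule is not None and purpose in rule["allowed_uses"]

def past_retention(category: str, collected_at: datetime) -> bool:
    """Retention rule: has this record outlived its retention window?"""
    rule = RULES.get(category)
    if rule is None:
        return True  # unknown categories default to deletion
    age = datetime.now(timezone.utc) - collected_at
    return age > timedelta(days=rule["retention_days"])

# Example: location data may power the service, but not ad targeting.
print(use_permitted("location", "service"))    # True
print(use_permitted("location", "marketing"))  # False
```

The point of writing the rules down this way is that they can be audited and enforced in code, not just asserted in a privacy policy.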

Shared private information: King and Richards say you can share information and still keep it confidential. Again, this relies on the information rules mentioned above.

Transparency: They say, “For big data to work in ethical terms, the data owners (the people whose data we are handling) need to have a transparent view of how our data is being used – or sold.”

Identity: This is a really big brain-hurter. They say, “Big data analytics can compromise identity by allowing institutional surveillance to moderate and even determine who we are before we make up our own minds.”

That identity issue is the “My TiVo Thinks I’m Gay” problem writ large. Coincidentally, I ran into it during the time I worked on this project. I researched the Broadway play “Tales From Red Vienna,” but decided not to get tickets. For the next month, everywhere I went on the Internet the “Red Vienna” ad stalked me. Marketers make jokes about the stalking nature of simple retargeting today, but there’s a creepiness to it that I don’t think people will ever shake.

What happens when AIs can predict much more intimate things about us, including what we may want to do or think next? How will people react? And, therefore, how will we want to handle such knowledge? We’re far from there yet, but the time to start asking those questions is now.

Similarly, how will we learn the things we don’t know we need to know if AI predicts our future based on past performance, limiting the real-life serendipity that we were all heir to before the digital envelope encapsulated us?

This is one of the big ethical issues marketers must explore, as it could cause people to evolve differently. Witness the disconnects that occurred in last year’s presidential election: It seems to me that a whole lot of people failed to learn things they didn’t know they needed to know.

Importantly, our work didn’t suggest any simple single answers to any of these big questions. The point is for every organization to engage in open discussion to formulate policies that align their values with their AI/big data behavior, accounting for all four of the issues mentioned above.

Does your brain hurt yet?

4 comments about "AI, Big Data And Ethics".
  1. James Smith from J. R. Smith Group, January 5, 2017 at 1:26 p.m.

    Mike: Any coverage of "mis-identification" issues? What if AI gets the father and son mixed up, or
    confuses data points generated by a household/device visitor?

  2. Paula Lynn from Who Else Unlimited, January 5, 2017 at 1:34 p.m.

    DO NOT TRACK. DO NOT TRACK. DO NOT TRACK. You will still sell things.

  3. John Grono from GAP Research, January 6, 2017 at 7:52 p.m.

    We are having this same issue in Australia as the government is cracking down on welfare recipients.

    They are sending out hundreds of thousands of letters basically saying 'government records reckon you have understated your income and therefore you have to repay x thousand dollars and you have 21 days to prove that we are wrong'.   So far around a fifth of the letters (it's only been going a week or so) have been shown to be wrong, with nothing to pay.

    Meanwhile back on the bourses, one third of the top companies pay zero tax but that's just the way it is.

  4. Mike Azzara from Content Marketing Partners replied, January 8, 2017 at 3:54 p.m.

    James - there will be plenty of mistakes along the way. My dad has been forwarding me "reminder" emails for years from the Subaru dealer, and I get emails when my son's car is due for service; I figured out it's because they buy from those big data brokers and think they're so smart but they'll ultimately figure out they're POing us. I don't let it bother me, but when I get emails (or snail mail) about the beloved car I lost in Sandy, that's upsetting. If that dealer understood how much so, they would invest in a simple check of totaled-car VINs to ID exclusions.

    John, your Aussie example is much more problematic. Any organization public or private, wielding that much power has a responsibility to make sure they're getting it right. I hope your press holds their feet to the fire.
