The congressional-ethics tragicomedy that was in the news for 15 seconds earlier this week put me in mind of some pretty deep research I did on behalf of a client a couple of years ago.
Unfortunately, I can’t link you to it: the client decided not to publish the resulting analysis because they were not yet doing any of the things the research deemed necessary. Never mind that
no one else was doing them then – and probably isn’t today, either. In the time since, what we found has become even more applicable and important.
Our work focused on how, in the coming age in which big data and machine-learning algorithms enable you to learn and infer things about individuals that are profoundly invasive, marketers – and companies in general – will need to openly explore where their ethical boundaries lie.
Before we get into it, I have a confession: AI ethics makes my brain hurt. The issues and questions are that big. Never mind corporate
behavior; people, governments, policies and global geopolitics will all evolve differently depending on how we answer those questions. My brain is hurting right now.
But if you’re a
marketer planning to use AI, you’re going to hurt a lot more than your brain if you don’t start examining the ethical questions. If you don’t think through the issues and questions
that pertain to you, you will end up damaging your brand’s reputation. As more and more people come to understand the Web-cookied, location-based, sensor-instrumented digital envelope in which
they live their lives – and who is watching them – they will begin to ask questions themselves.
And if you’re using AI to power chatbots or digital ads, customers may be coming to you with their questions. As tech angel investor Esther Dyson
told me in an interview at the time, “The advertising community has been woefully unforthcoming about how much data they’re collecting and what they’re doing with it. And it’s going
to backfire on them, just as the Snowden revelations backfired on the NSA.”
Some prescriptive advice that emerged from that work came from big data ethics thinker Kord Davis (he wrote the book, literally).
Davis recommends companies do three things: explicitly align corporate values
with what they do and don’t do with big data and AI algorithms; openly discuss their policies relating to data privacy, personally identifiable customer information, and data ownership; and be
prepared to have lots of internal disagreements, because ethics are highly variable, personal issues.
Meanwhile, Dyson says, “Ethics don’t change – circumstances
change, but the same standards apply.” When I told her about Kord’s idea to connect company values to big data/AI actions, she said, “Connecting company values to your big data
activities is another way of saying the circumstances have changed but the same standards apply.” Touché!
Another writer and thinker on big data ethics is Jonathan King, vice president of cloud portfolio management and strategy at Ericsson and a visiting scholar at the Washington University School of Law in St. Louis. He and his writing partner, Neil Richards, a law
professor and recognized expert in privacy and First Amendment law, advise you to focus on four
areas:
Privacy: They say it isn’t dead, and it’s not just about keeping information hidden. “Ensuring privacy of data is a matter of defining and
enforcing information rules – not just rules about data collection, but about data use and retention.”
Shared private information: King and Richards say you can
share information and still keep it confidential. Again, this relies on the information rules mentioned above.
Transparency: They say, “For big data to work in
ethical terms, the data owners (the people whose data we are handling) need to have a transparent view of how our data is being used – or sold.”
Identity: This is
a really big brain-hurter. They say, “Big data analytics can compromise identity by allowing institutional surveillance to moderate and even determine who we are before we make up our own
minds.”
That identity issue is the “My TiVo Thinks I’m Gay” problem writ large.
Coincidentally, I ran into it during the time I worked on this project. I researched the play “Tales From Red Vienna” but decided not to get tickets. For the next month, everywhere I went on the Internet, the “Red Vienna” ad stalked me. Marketers today joke about the stalking nature of simple retargeting, but there’s a creepiness to it that I don’t think people will ever shake.
What happens when AIs can predict much more intimate things about us, including what we may want to do or think next? How will people react? And,
therefore, how will we want to handle such knowledge? We’re far from there yet, but the time to start asking those questions is now.
Similarly, how will we learn the things we
don’t know we need to know if AI predicts our future based on past performance, limiting the real-life serendipity that we were all heir to before the digital envelope encapsulated us?
This is one of the big ethical issues marketers must explore, as it could cause people to evolve differently. Witness the disconnects that occurred in last year’s presidential
election: It seems to me that a whole lot of people failed to learn things they didn’t know they needed to know.
Importantly, our work didn’t suggest any single, simple answers to any of these big questions. The point is for every organization to engage in open discussion and formulate policies that align its values with its AI/big data behavior, accounting for all four of the issues mentioned above.
Does your brain hurt yet?