CANNES, FRANCE — Humans want to believe the machines. Google Maps has sometimes directed drivers into mishaps, like the bus driver who couldn't quite clear the underpass the GPS app sent him toward.
Did the driver give up his free will? Was Google ethically complicit? If Google directed you to drive into a pond, would you do it?
Havas's Jason Jercinovic joined Sanofi's Astrid Boutaud and Venerable Tenzin Priyadarshi of the Dalai Lama Center for Ethics and Transformative Values at MIT to discuss the role of creative ethics as it relates to cognitive intelligence at a Cannes session today.
"If companies are [establishing ethics] just to be compliant, they will always find loopholes," says Priyardashi. "There has to be a different framework than the legal framework in addition to the well-being of consumers."
Advertisers are at the forefront of this issue. In recent years, AI and cognitive intelligence have enabled brands to engage with people in unprecedented ways. The panelists agreed it is mostly acceptable to use data and insight to drive personalized advertising.
"The benefits of AI is no news to marketers with personalization and messaging but there has been a huge quantum leap in being more relevant to consumers, particularly in healthcare," says Boutaud. "Increased data with AI will bring help to sort and analyze in getting the right answers to the right topics. It will help with the diagnoses of patients. Human input with the combination of AI can be very powerful.
As technology helps to make consumers' lives easier, the panelists recommend radical transparency that reveals to people what a specific technology is and what they get out of it. And marketers are ethically obligated to make it easy to opt in and out.
Still, there are unanswered questions. "How much biased data is driving the algorithm?" asks Priyadarshi. "A layperson doesn't have insight into how an algorithm works. When we talk about an ethical framework, we should not give the machines the ability to overwrite themselves," he says. "We need to have a human in the loop as a precautionary measure."
Researchers at MIT recently investigated how people make decisions about completely autonomous systems. "There is this disposition when it comes to complex decision making, where we want others to do so," says Priyadarshi.
As part of the MIT research, the owner of an autonomous car was given the choice between the car sacrificing its owner or hitting a pedestrian. Most people chose the option to self-sacrifice, but then they also said they just wouldn't buy the car. That's just one example of the accountability gap that still isn't resolved with developing technologies, says Priyadarshi.
There is also a cultural context to the issue of ethics.
A Japanese car company asked drivers whether they would hit a biker wearing a helmet on the right or one without a helmet on the left. Americans would prefer to hit the rider on the right, reasoning that the helmeted rider will most likely be OK. Asian drivers, by contrast, would hit the unhelmeted rider on the left, because the other rider is following the law.