At the same time, 67% believe AI will erode stakeholder trust over the next four years. AI is seen as an inevitability, but there is little confidence that it will improve human trust.
According to the 2018 Edelman Trust Barometer, trust in all sectors from NGOs to media, business and government has decreased. We’ve entered the “fourth wave of the trust tsunami,” which is the loss of confidence in information channels and sources, brought on largely by the AI-assisted spreading of “fake news.”
As a result, trust in social-media platforms has now fallen below that in cigarette companies.
So, how can we stop AI from destroying hard-won trust? AI is the next frontier in super-charging business outcomes, but trust is critical. A Millward Brown study shows that brands that have maintained an above-average level of trust since 2006 have achieved 70% growth, while those that fell below average have seen a loss of 13%.
Regulation is starting to guide companies and brands through this wave, as we are seeing in the EU with GDPR and, more recently, in California. There is also the promise of blockchain to create a trustless system that fends off bad actors, protects data and provides transparency.
BotChain, for example, aims to “install” trust in the AI bot economy. While these systems will protect consumers and manage compliance, they focus on mitigating damage more than on cultivating emotional trust.
How can brands leverage AI to supercharge their businesses without damaging hard-won trust? This can be achieved by putting the consumer interest first.
AI in service of consumers
Amazon is a good example of how AI can be good for people and for profit simultaneously. While trust in its tech competitors has slipped, Amazon remains, per a Harris Poll in March, the most trusted company in the world for the third year running. Amazon's AI magnifies the single-mindedness of the company’s customer focus.
A few years ago, Amazon revamped its AI to create a “low-cost, ubiquitous computer with all its brains in the cloud that you could interact with over voice — you speak to it, it speaks to you,” resulting in the Echo with the Alexa platform behind it. Amazon offers its AI services to outsiders, turning a huge profit. Amazon can then nurture consumer trust as it does not profit from the sale of personal data.
AI that respects humans as decision-makers
AI can build trust when it recognizes itself as a tool and leaves the ultimate decision-making to humans. In consumer testing for a client’s AI assistant, no matter how useful and augmentative the AI was, there was no desire for it unless express permission was granted. A smart suggestion was welcome, as long as the AI didn’t act of its own accord.
In the real world, Spotify’s Discover Weekly is a great example of a collaborative AI, offering up better music suggestions (not answers) based on studying your behavior. Waze is another, highlighting a range of route options with implied pros and cons of each, accepting that there may be other factors, outside its purview, that play into a final decision.
Both are examples of AI “rooted in a deep respect for human agency.”
Both platforms are doing their best, as machines, to curate what might be most delightful to your human tastes or preferences without presuming to actually know what those are. These platforms build up a sense of trust through mutual respect and understanding.
Much of the promise of AI is the ability to correct for human error, and human biases, at scale. However, even without bad actors, AI can result in unintended consequences stemming from insidious biases in training datasets.
Joy Buolamwini of The Algorithmic Justice League has drawn attention to the predominance of “pale male” benchmark datasets underlying many AI algorithms. These datasets, which underlie the AI services of companies such as IBM, Microsoft and Face++, over-represent lighter-skinned men in particular and lighter-skinned individuals in general, which could perpetuate exclusion.
As the U.S. consumer base becomes more diverse, progressive ads that defy stereotypes resonate more strongly. Studies from Millward Brown using Affectiva's emotion AI technology show that progressive ads are 25% more effective.
Again, the incentive for brands and businesses to invest in a culture of inclusive AI stems from what works in marketing and brand-building.
Brands would profit by embracing “good” AI practices upfront to preserve, and even increase, hard-won consumer trust.