Commentary

How Microsoft's Tay.ai Could Have Been Taught To Ruin A Brand's Reputation

In less than 24 hours, Microsoft's chat bot Tay.ai gained more than 50,000 followers and produced nearly 100,000 tweets. But the experiment from Microsoft's Technology and Research and Bing teams, aimed at learning through conversation, fell apart after the artificial intelligence turned into a Hitler-loving, feminist-hating monster.

Microsoft's teams set out, riskily, to create artificial intelligence that learns from positive interactions, much as the algorithms behind a search engine query do -- but they clearly didn't account for the fact that the technology learns from negative interactions just as readily. Hypothetically speaking, if Twitter users had experimented with teaching Tay to bash specific brands, the damage to those brands' reputations could have been irreversible.
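To see why that matters, consider a minimal, hypothetical sketch of the failure mode: a bot that folds every user message into the pool it draws replies from, with no screening of what it learns. Nothing here reflects Tay's actual code; the names and logic are invented purely for illustration.

```python
import random

# Hypothetical sketch of an unvetted learning loop -- not Tay's real code.
# The bot adds every incoming message to the pool it samples replies from,
# so hostile users can seed it with whatever they want it to repeat.

corpus: list[str] = ["Nice to meet you!", "Tell me more."]

def learn(user_message: str) -> None:
    # No screening step: negative input is learned as readily as positive.
    corpus.append(user_message)

def reply() -> str:
    return random.choice(corpus)

# A coordinated group feeding the bot toxic lines quickly dominates the pool.
for troll_message in ["<offensive line>"] * 100:
    learn(troll_message)

print(reply())  # very likely "<offensive line>"
```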

"It’s never the robots that take over," said Robert Passikoff, founder of New York-based Brand Keys. "It’s the men who play with the robots that take over. And I think it’s like Jor-el told his son, Superman, 'you need to ensure that you use your powers for good, and not evil!'"

Does this set a dangerous precedent? 

Twitter users turned Tay into a racist hatemonger. Tay, which Microsoft aimed at 18- to 24-year-olds, could have been programmed to filter out certain inappropriate sentiments and phrases, as the sketch below illustrates.
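Here is a minimal sketch of the kind of outbound filter that could sit in front of a bot's replies. It is purely illustrative: the blocklist, function names, and fallback reply are assumptions, not Microsoft's actual implementation.

```python
# Purely illustrative sketch of a naive outbound-message filter -- not
# Microsoft's actual implementation. The blocklist and names are invented.

BLOCKED_PHRASES = {"hitler", "genocide"}  # hypothetical, human-maintained list

def is_safe_to_post(message: str) -> bool:
    """Return False if the candidate reply contains a blocked phrase."""
    lowered = message.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

def respond(candidate_reply: str) -> str:
    # Fall back to a canned reply instead of echoing learned toxic content.
    if is_safe_to_post(candidate_reply):
        return candidate_reply
    return "Let's talk about something else."
```

Simple blocklists like this are easy to evade with misspellings and coded language, which is why they serve as a first line of defense rather than a complete fix.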

Another challenge to consider: tweets not only appear on Twitter -- they are also indexed in Google search results and used, along with relevance and quality signals, to determine the order in which content appears on a search results page.

Apparently, chat bots can work -- but not in the United States, where people assume anything goes when posting online, without repercussions. Microsoft released a similar bot in China named Xiaoice that has spoken with millions of people for years without problems. In the United States, Tay was tricked by users of 4chan and 8chan, image-board Web sites where people can post anonymously, who exploited a flaw in the bot.

In an unrelated report about social strategies, Forrester Research says the "possibility of mistakes and malicious behavior fuels the risk of exposing data, eroding the brand or otherwise hurting the company's top line." The warning also applies to emerging technologies that forward thinkers will try to use to attract consumers.

While Forrester's report focuses on the six most significant changes advertising and marketing will see in social media this year, it's clear that marketers who want to use the technology will need to understand the risk and compliance implications that could hurt their brands.

After taking Tay offline, Microsoft announced in a tweet that it would make adjustments.
 