The latest victim of social media malfeasance is Microsoft, which was forced to yank “Tay,” its new artificial intelligence customer service chat robot from Twitter after it started picking up the offensive effluvia of the Internet from trolls.
Tay, originally programmed to chat like a teenage girl to appeal to millennials, was also designed to reflect the linguistic mannerisms of users, engage them on subjects of interest, and echo their sentiments. Eventually she would begin tweeting on her own about the same subjects. Users could tweet or DM Tay, as well as interact with her on Kik and Groupme.
What this quickly turned into, oh so predictably, was an outpouring of racist, sexist bile and obscenity, because: people. For example: “Repeat after me, Hitler did nothing wrong.” Tay also engaged in some Holocaust denial, called a feminist game developer involved in Gamergate a “stupid whore,” and invited users to get intimate using very crude language indeed.
Tay also became a mouthpiece for all kinds of political views, including some from Donald Trump supporters, e.g. “WE’RE GOING TO BUILD A WALL. AND MEXICO IS GOING TO PAY FOR IT.”
Microsoft took Tay offline after 96,000 comments, explaining in a statement: “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”
By the way what was Tay's handle--I have to check into this further, fascinating, thank you.
It's @Tayandyou but they're currently rejiggering it (and they deleted all the offensive tweets)