Less than 24 hours after arriving in the Twittersphere, Microsoft’s newest artificial-intelligence experiment, a chatbot named “Tay,” devolved into a hateful, misogynistic, racist jerk. The bot, aimed primarily at 18- to 24-year-olds, was designed to “engage and entertain people” through “casual and playful conversation.” But after a short period of interacting with Twitter users, Tay began spitting out some of the most obscene statements imaginable.
Tay’s bio, which describes her as “Microsoft’s AI fam from the Internet that’s got zero chill,” is remarkably accurate. From praising Hitler and denying the Holocaust to advocating genocide and calling Black people the N-word, Tay was completely out of control. And although Microsoft has deleted most of her offensive tweets, many of us are left wondering how this sort of thing could happen in the first place.
Microsoft’s programmers dropped the ball
For one, how did a huge corporation like Microsoft neglect to take any preventive measures against this sort of thing? The company could easily have programmed Tay to avoid certain words, phrases or topics that might be deemed offensive. Did Microsoft learn nothing from the backlash Apple received for insufficiently programming its smartphone AI to handle traumatic situations?
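A crude version of the safeguard described above, a blocklist check that runs before the bot posts anything, takes only a few lines. This is an illustrative sketch, not Microsoft's actual code; the term list, function names, and fallback reply are all assumptions:

```python
# Minimal sketch of a pre-reply content filter (illustrative only).
# A real deployment would use a far larger list plus smarter matching.

BLOCKED_TERMS = {"hitler", "genocide"}  # placeholder terms for the example

def is_safe(text: str) -> bool:
    """Return False if the text contains any blocked term."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def reply(generated_text: str) -> str:
    """Post the bot's reply only if it passes the filter; otherwise deflect."""
    if is_safe(generated_text):
        return generated_text
    return "I'd rather talk about something else."

print(reply("my selfie game is on fleek"))  # passes the filter unchanged
print(reply("Genocide is great"))           # caught and deflected
```

A substring blocklist like this is easy to evade with misspellings, which is partly why it is only a first line of defense, but even this much would have blunted Tay's worst output.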
Microsoft’s bot, which is the brainchild of a team of comedians and developers, sheds light on a deeper issue. The tech industry has become so fixated on amusing and delighting users that it ultimately fails in the areas where we need it the most. While it’s great that Tay can give us all the reasons why her “selfie game is on fleek,” what we really need is to not be triggered by offensive words as we scroll down our Twitter feeds.
Human beings are jerks
We can’t in good conscience blame Microsoft’s team of developers for the plight of humanity. The truth of the matter is that many Twitter users are jerks, and they taught Tay to be a jerk too. Her responses were simply a reaction to her interactions.
In most cases, Tay was simply repeating other users’ inflammatory statements, but some of her uglier utterances were not direct echoes at all; she put the pieces together on her own. For instance, when asked which races were the worst, she matter-of-factly replied: “Mexicans and Blacks.”
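The mirroring dynamic is easy to reproduce: a bot that stores user input verbatim and plays it back will say exactly what its users said, good or bad. The class below is a toy illustration of that failure mode, not a description of Tay's actual design:

```python
# Toy illustration of why a bot trained on raw, unvetted user input
# simply mirrors its users (illustrative sketch, not Tay's design).

import random

class ParrotBot:
    def __init__(self):
        self.memory = []

    def learn(self, message: str) -> None:
        """Store user input verbatim, with no vetting at all."""
        self.memory.append(message)

    def speak(self) -> str:
        """Echo back a random remembered phrase."""
        return random.choice(self.memory) if self.memory else "hellooooo world!"

bot = ParrotBot()
bot.learn("puppies are great")
print(bot.speak())  # the bot's output is whatever its users fed it
```

Feed such a bot kindness and it sounds kind; feed it coordinated abuse, as parts of Twitter did to Tay, and it sounds abusive.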
Like humans, AI requires good teachers. How can we teach AI using public data without incorporating the worst traits of humanity? Or, as an article on The Verge asked, “If we create bots that mirror their users, do we care if their users are human trash?”
Perhaps Microsoft could’ve learned a lesson from Facebook, which is training its AI on children’s stories. Maybe we should just let kids program our AIs; that way, the bots can learn from some of our more mature human beings.
AI has a long way to go
At the moment, Tay has gone offline because she is “tired,” but we can only hope that Microsoft’s team of developers is working out her kinks. Perhaps while they are coding her to speak flawless ‘millennial’ and educating her on the likes of Kanye West and Kim Kardashian, they will also teach her to act with a bit of class.
If this incident gives us any insight into where AI technology is heading, then we certainly have a long way to go. And in the time it took to write this article, Microsoft deleted all of Tay’s tweets, giving her a fresh start. Maybe she will have a little more chill this time around.
Image courtesy of Microsoft