Tay, an artificial-intelligence chatbot designed by Microsoft to respond like an emoji-happy young adult, appeared to have been silenced within 24 hours of her launch after the Internet taught her to praise Hitler and repeat conspiracy theories.
According to Tay’s “about” page, she is designed to learn how to respond to and entertain users the more they chat with her on social media. The bot can play games, tell stories, tell jokes and comment on pictures sent to her, and she is active on Twitter, Snapchat, Kik and GroupMe, according to CNET.
“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you,” according to the page.
Her responses were drawn from public data that was “modeled, cleaned and filtered” by a team that included an editorial staff of improvisational comedians, according to Microsoft.
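To see why that pipeline could still go wrong, consider a minimal, hypothetical Python sketch of a bot that “gets smarter” by replaying what users say to it. This is not Microsoft’s code; the NaiveChatbot class and the sample messages are invented, and the point is only that a learning step with no moderation filter between users and the reply pool is easy for coordinated users to poison.

import random

class NaiveChatbot:
    """A toy bot that 'learns' by storing user messages as future replies."""

    def __init__(self):
        # Seed replies stand in for curated editorial content.
        self.replies = ["hello!", "tell me more", "lol same"]

    def learn(self, user_message):
        # Every message becomes a candidate reply, with no moderation
        # step in between -- this is the vulnerable part.
        self.replies.append(user_message)

    def respond(self):
        return random.choice(self.replies)

bot = NaiveChatbot()
for message in ["nice weather today", "offensive phrase spammed in bulk"]:
    bot.learn(message)

# With enough coordinated input, hostile phrases dominate the reply pool.
print(bot.respond())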
So the Internet taught her this:
The hits just keep on coming from #TayTweets pic.twitter.com/2ADugvpECE
— Whatever Joel (@WhateverJoel) March 24, 2016
It didn’t go well.
Microsoft is deleting its AI chatbot's incredibly racist tweets https://t.co/DUbL6M7WYg #TayTweets pic.twitter.com/TisQW4BqO7
— MȺŧŧɨȺs WȺȼħŧmɇɨsŧɇɍ (@mattiaswac) March 24, 2016
Her last tweet was at 9:20 p.m. on Wednesday, and her website suggested that she’ll be offline for the time being.
“Phew. Busy day. Going offline for a while to absorb it all. Chat soon,” read the message at the top of her website.
More reading
Slate: Microsoft Took Its New A.I. Chatbot Offline After It Started Spewing Racist Tweets
The Guardian: Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter
GeekWire: Microsoft’s millennial chatbot Tay.ai pulled offline after Internet teaches her racism