Microsoft’s chatbot turns racist and learns to swear in a day thanks to Twitter


Microsoft on Wednesday activated a Twitter chatbot, only to take it down quickly after users of the micro-blogging site taught it to post racist, misogynist and profane tweets. The bot was designed to use artificial intelligence to hold conversations with teenagers.

The account, named TayTweets, began its Twitter journey with the innocent tweet ‘hellooooooo world!!!’ and was supposed to grow smarter as more users interacted with it. However, Microsoft had to shut the account down on Thursday and delete the inappropriate tweets it had posted.

Given what the internet has become, it’s a wonder that Microsoft apparently never anticipated such a problem. It took just a tweet like ‘jews did 9/11’ for Tay to reply with ‘Okay … jews did 9/11.’ Other statements that users manipulated it into tweeting include ‘feminism is cancer,’ ‘Hitler was right’ and more.


After taking the bot down and deleting the inappropriate tweets, Microsoft described the situation as a ‘coordinated effort by some users to abuse Tay’s commenting skills.’ According to The Verge, the company is now making adjustments to the chatbot and hopes to bring it back online soon.

The incident raises plenty of questions, especially about how reliable artificial intelligence really is. Other companies will no doubt take greater precautions before releasing anything similar.

Tay’s arrival on Twitter turned out to be a disaster, but with Microsoft promising to bring it back with changes, we’ll see how things pan out.