When most people think of A.I. (artificial intelligence), they think of the end of the world, mainly because in many sci-fi movies A.I. is partially responsible for the end of humanity. After all, if machines can think on their own…why do they need humans?
This has not stopped Microsoft from trying to build its own A.I., but as the company is learning, it's a bit harder than expected, and it's running into issues no one could have predicted.
They built an A.I. chatbot called "Tay" and put her on Twitter and other social media outlets. The goal was to encourage chatter and see whether Tay could learn from what she was hearing and talking about and grow as an A.I.
According to Microsoft, “Tay is an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”
Well, some people on Twitter more or less hijacked Tay and started steering the conversation. At first she talked like a teenage girl, but soon she was making blatantly racist comments, including, and I quote, "Hitler was right I hate the jews."
Needless to say, Microsoft took action, putting the account on hold while things get "worked out."