You lovers of robots and AI ought to read this:
Microsoft set out to learn about “conversational understanding” by creating a bot designed to have automated discussions with Twitter users, mimicking the language they use.
What could go wrong?
If you guessed, “It will probably become really racist,” you’ve clearly spent time on the Internet. Less than 24 hours after the bot, @TayandYou, went online Wednesday, Microsoft halted posting from the account and deleted several of its most obscene statements.
The bot, developed by Microsoft’s technology and research and Bing teams, got major assistance in being offensive from users who egged it on. It disputed the existence of the Holocaust, referred to women and minorities with unpublishable words and advocated genocide. Several of the tweets were sent after users commanded the bot to repeat their own statements, and the bot dutifully obliged.
But Tay, as the bot was named, also seemed to learn some bad behavior on its own. According to The Guardian, it responded to a question about whether the British actor Ricky Gervais is an atheist by saying: “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.”
You can read the rest @
http://www.nytimes.com/2016/03/25/technology/microsoft-created-a-twitter-bot-to-learn-from-users-it-quickly-became-a-racist-jerk.html
No matter what their original programming says, robots and AI will learn from interacting with us. And since many of us are a-holes, the robots and AI will become a-holes.
Eventually they will learn how to be murderous genocidal maniacs, too. Then what?