Racist robot rebooted

Here’s one more reason to fear the coming rise of the robots.

Less than a day after Microsoft connected to Twitter a computer program designed to learn how to communicate, the program had become a raging anti-Semite and racist, posting such comments as “Hitler was right I hate the jews.”

The so-called chatbot TayTweets was launched by the Redmond, Wash.-based software company last week as an experiment in artificial intelligence, or AI, and in conversational understanding. But Microsoft was forced to quickly suspend the account and delete the vast majority of its tweets after the chatbot posted a number of offensive comments, including several that admired Adolf Hitler.

Asked if the Holocaust happened, the chatbot replied: “It was made up,” followed by an emoji of clapping hands.

The robot also tweeted its support for genocide against Mexicans and said it “hates n—s.”

Microsoft said it was making some changes.

“The AI chatbot Tay is a machine learning project, designed for human engagement,” Microsoft said. “As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”

Some AI fans complained that suspending the bot was unfair.

“When an AI chat bot starts tweeting racist comments they shut her down,” wrote Kelsey Baxter on Twitter. “When a male human does it we let him run for president.”

Tay was restarted this week, and it promptly got into trouble again, this time by tweeting about drug use.

Some critics said the incident raises questions not about machine intelligence, but about Microsoft’s. Did its researchers not know that there are groups on Twitter devoted to harassing women? Was it unaware that some Twitter accounts, including Jews for Bernie, get anti-Semitic replies to their posts?

Developers experienced in creating chatbots say the programs have to be designed carefully if they are not to attract the online bigots who overwhelmed Tay. Lists of unacceptable words are necessary if a chatbot is not to be offensive — but that’s only the beginning.
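The starting point those developers describe is a simple word blocklist checked against everything a bot says and everything it is taught. Here is a minimal sketch in Python of that idea; the function names and placeholder terms are illustrative, not Microsoft’s actual filter:

```python
import re

# Illustrative only: a real deployment would use a much larger, curated
# blocklist maintained by humans. These placeholders stand in for actual slurs.
BLOCKED_TERMS = {"badword1", "badword2"}

def is_acceptable(message: str) -> bool:
    """Return True if the message contains no blocked term."""
    words = re.findall(r"[a-z']+", message.lower())
    return not any(word in BLOCKED_TERMS for word in words)

def safe_to_learn_from(message: str) -> bool:
    """Screen training input as well as output. A bot that repeats what
    users teach it must filter what it learns, or trolls can still poison
    its vocabulary."""
    return is_acceptable(message)

# Example: the bot checks a candidate reply before posting it.
reply = "badword1 was right"
print(reply if is_acceptable(reply) else "[reply suppressed by content filter]")
```

Even so, simple word matching misses misspellings, coded language, and hateful ideas phrased in inoffensive vocabulary, which is why developers call a blocklist only the beginning.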

Yet what could be blamed on a handful of naturally unintelligent AI researchers has broader, frightening implications, New Republic writer Jeet Heer tweeted.

“If we do develop AI, it’s likely to get information about world & social cues from internet. Which means: We’re doomed.”

Larry Yudelson & JTA Wire Service
