There is an opinion expressed here that I agree with: http://smerity.com/articles/2016/tayandyou.html TL;DR: no "learning" from interactions on Twitter happened. The bot was parroting old training data, because it does not really generate text. And the researchers did not apply any offensiveness filter at all.
I think this chatbot was performing badly right from the start. It would not have made sense to give much weight to the users it was chatting with, and they did not "change its mind." That bit of media sensationalism is BS.
Natural language generation is an open problem, and almost every method I have seen (I am not an expert in NLP, but I would call myself one in machine learning) ends up parroting some of its training text, which implies that it is overfitting.
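To illustrate what that parroting looks like (a toy sketch only, nothing to do with Tay's actual model): a word-level Markov chain trained on a tiny corpus mostly emits contiguous runs of its training text, because it has effectively memorized it. That memorization is the overfitting behavior described above, just in its most extreme form.

```python
# Toy illustration of "parroting": a word-level bigram Markov chain
# trained on a tiny corpus. With so little data, most generated output
# is just a near-verbatim splice of the training sentences.
import random
from collections import defaultdict

corpus = (
    "the bot repeats what it has seen . "
    "the bot does not really generate text . "
    "it repeats old training data ."
).split()

# Bigram transition table: word -> list of observed next words.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start="the", length=10, seed=0):
    """Sample a word sequence by following observed bigram transitions."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        nexts = transitions.get(out[-1])
        if not nexts:
            break
        out.append(random.choice(nexts))
    return " ".join(out)

print(generate())
# Output is a recombination of memorized training fragments, not novel text.
```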
Given this, we should learn nothing about AI from this experiment, only about people's reactions to it, mainly the media's. How users behave when talking to an AI is already well documented.