As long as a bot can write on the level of a human, and humans can fall in love with other humans or be persuaded to commit suicide by them, a bot will be able to do the same thing. The solution here seems to be to only give chatbots “nice” personalities rather than “uncaring” ones.
While a positive affect might work for simple chatbots, I don’t think it would prevent a more intelligent AI from wreaking havoc using vulnerable people.
We need an AI with positive values, goals, and affect, but maybe that is what you meant by personality.