“unFriendly” doesn’t mean “evil”, just “not explicitly Friendly”. Assuming you already have an AI capable of recursive self-improvement, it’s easy to give it a goal system that will result in the world being destroyed — not because it hates us, but because it can think of better things to do with all this matter. But creating one that’s actually evil or that hates humans (or has some other reason in its goal system that would make torturing us make sense) would probably be nearly as hard as the problem of Friendliness itself, as gregconen pointed out.