I think that an AI whose values aligned perfectly with our own (or at least, my own) would have to assign value in its utility function to other intelligent beings. Suppose I created an AI that established a utopia for humans but, upon encountering extraterrestrial intelligences, subjected them to something they considered a fate worse than death; I would consider that a failing of my design.
Perfectly Friendly AI might deserve a category entirely to itself, since by its nature it seems to be an even harder problem to solve than ordinary Friendly AI.