It sounds like you’re worried about humans optimizing the universe according to human values because they are the wrong values. At the same time you seem to be saying that this won’t be accomplished by building FAI, because only humans can have human values. Is this correct?
Does it also worry you that humans might (mistakenly) optimize the universe with non-human values that also happen to be wrong? If so, do you have any suggestions about how we might get the universe to be optimized according to the right values?
[Deleting because I didn’t notice Phil already answered in another comment.]