Machine intelligences seem likely to vary in their desirability to humans.
Technically true. However, most naive superintelligence designs will simply kill all humans. Reaching even a failed utopia is a major accomplishment, let alone getting to choose between Prime Intellect and Coherent Extrapolated Volition.
It’s also unlikely you’ll accidentally build something significantly worse than killing all humans, for the same reasons: a superintelligent sadist is just as hard to hit as a utopia.
Friendly/unFriendly seems rather binary; maybe a “desirability” scale would help.
Alas, this seems to be drifting away from the topic.