For what it’s worth, your “nearly friendly” examples all seem better than dying to me, maybe even significantly better.
It is not perfect, but I think it is fair to say that the single example of bootstrapped intelligence we have values complex dynamical processes for what they are.
(A superhuman AI would need to divert only a tiny fraction of its effort to charity to be the best thing that ever happened to us.)
This seems like a pretty silly thing to say; all else being equal, we should expect simple utility functions. A superintelligence that's ambivalent about helping humanity would have a pretty complicated utility function.
I agree that an unfriendly AI would want to know a lot about humans, but I don't see why that requires preserving them. It seems like a scan and a computer simulation would work much better.
Are you kidding?