My guess is mostly that the space is so wide that you don’t even end up with AIs warping existing humans into unrecognizable states, but do in fact just end up with the people dead
Why? I see a lot of opportunities for s-risk or otherwise suboptimal futures in such options, but “we don’t want to die, or at any rate we don’t want to die out as a species” seems like an extremely simple, deeply ingrained goal that almost any metric by which the AI judges our desires should be expected to pick up, assuming the AI is at all pseudokind. (Analogously, humans do a lot to protect endangered species even as we do diddly-squat to fulfill individual specimens’ preferences!)