The details matter here! Sometimes when (MIRI?) people say “unaligned AIs might be a bit nice and may not literally kill everyone,” the modal story in their heads is something like: the brain states of some humans are saved on a hard drive somewhere for trade with more competent aliens. And sometimes when other people say[1] “unaligned AIs might be a bit nice and may not literally kill everyone,” the modal story in their heads is that some X% of humanity may or may not die in a violent coup, but the remaining humans get to live their normal lives on Earth (or even a solar system or two), with some AI surveillance; their subjective quality of life might not even be much worse (and might actually be better).
From a longtermist perspective, or a “dignity of human civilization” perspective, maybe the stories are pretty similar. But I expect “the average person” to be much more alarmed by the first story than the second, and not necessarily for bad reasons.
I don’t want to speak for Ryan or Paul, but at least tentatively this is my position: I think the difference, from a resource-management perspective, between keeping humans around physically and merely keeping saved copies of them is ~0 when you have the cosmic endowment to play with. So a small idiosyncratic preference that’s significant enough to make an AI want to save human brain states should also be enough to make it okay with keeping humans physically around, especially if humans strongly express a preference for staying physically alive (which I think they do).