When you make a claim like “misaligned AIs kill literally everyone”, then reasonable people will be like “but will they?” and you should be in a position where you can defend this claim.
I think most reasonable people will round off “some humans may be kept as brain scans that may have arbitrary cruelties done to them” to be equivalent to “everyone will be killed (or worse)” and not care about this particular point, seeing it as nitpicking that would not make the scenario any less horrible even if it was true.
I disagree. I think it matters a good amount whether the risk scenario is indeed “humans will probably get a solar system or two because it’s cheap from the perspective of the AI”. I also think there is a risk of the AI torturing the uploads it has, and I agree that if that were the reason why humans are still alive then I would feel comfortable bracketing it, but I think Ryan is arguing something more like “humans will get a solar system or two and basically get to have decent lives”.
Yep, this is an accurate description, but it is worth emphasizing that I think horrible violent conflict and other bad outcomes for currently living humans are reasonably likely.
IMO this is an utter loss scenario, to be clear.
I am not that confident about this. Or like, I don’t know, I do notice my psychological relationship to “all the stars explode” and “earth explodes” is very different, and I am not good enough at morality to be confident about dismissing that difference.
There’s definitely some difference, but I still think that the mathematical argument is just pretty strong, and losing a multiple of 10^23 of your resources for hosting life and fun and goodness seems to me extremely close to “losing everything”.
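For a rough sense of where a number of that size comes from, here is a back-of-the-envelope sketch; the ~10^23 count of star systems in the reachable universe is a ballpark assumption, not a figure stated in the thread:

```latex
% Back-of-the-envelope: fraction of the cosmic endowment retained if
% humanity keeps one or two solar systems out of an assumed ~10^{23}
% reachable star systems.
\[
\frac{\text{resources kept}}{\text{total endowment}}
  \approx \frac{1\text{--}2\ \text{solar systems}}{10^{23}\ \text{star systems}}
  \approx 10^{-23},
\quad \text{i.e.\ a reduction by a factor of roughly } 10^{23}.
\]
```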
@habryka I think you’re making a claim about whether or not the difference matters (IMO it does) but I perceived @Kaj_Sotala to be making a claim about whether “an average reasonably smart person out in society” would see the difference as meaningful (IMO they would not).
(My guess is you interpreted “reasonable people” to mean like “people who are really into reasoning about the world and trying to figure out the truth” and Kaj interpreted reasonable people to mean like “an average person.” Kaj should feel free to correct me if I’m wrong.)
The details matter here! Sometimes when (MIRI?) people say “unaligned AIs might be a bit nice and may not literally kill everyone”, the modal story in their heads is something like: some brain states of humans are saved on a hard drive somewhere for trade with more competent aliens. And sometimes when other people[1] say “unaligned AIs might be a bit nice and may not literally kill everyone”, the modal story in their heads is that some X% of humanity may or may not die in a violent coup, but the remaining humans get to live their normal lives on Earth (or even a solar system or two), with some AI surveillance, but our subjective quality of life might not even be much worse (and might actually be better).
From a longtermist perspective, or a “dignity of human civilization” perspective, maybe the stories are pretty similar. But I expect “the average person” to be much more alarmed by the first story than the second, and not necessarily for bad reasons.
I don’t want to speak for Ryan or Paul, but at least tentatively this is my position: I basically think that, from a resource-management perspective, the difference between keeping humans around physically and merely saving copies of them is ~0 when you have the cosmic endowment to play with, so a small idiosyncratic preference that is significant enough to want to save human brain states should also be enough to be okay with keeping humans physically around; especially if humans strongly express a preference for being kept around physically (which I think they do).
Note that “everyone will be killed (or worse)” is a different claim from “everyone will be killed”! (And see Oliver’s point that Ryan isn’t talking about mistreated brain scans.)