I don’t understand the s-risk consideration.
Suppose Alice lives naturally for 100 years and is cremated. And suppose Bob lives naturally for 40 years, then has his brain frozen for 60 years, and then has his brain cremated. The odds that Bob gets tortured by a spiteful AI should be pretty much exactly the same as for Alice. Basically, it’s the odds that spiteful AIs appear before 2034.
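To make that concrete, here is a toy sketch under a made-up assumption of a constant 1% annual chance that a spiteful AI appears: since both Alice’s and Bob’s brains cease to exist at the end of the same 100-year window, the probability that such an AI shows up while either brain is still around comes out identical.

```python
# Toy numbers only: the 1% annual hazard rate is a made-up assumption for
# illustration, not an estimate. The point is that what matters for this
# particular risk is how long a recoverable brain exists at all, not whether
# it is alive or frozen during that window.

ANNUAL_HAZARD = 0.01  # assumed yearly probability that a spiteful AI appears


def p_spiteful_ai_before_cremation(years_brain_exists: int,
                                   hazard: float = ANNUAL_HAZARD) -> float:
    """Probability a spiteful AI appears at least once while the brain still exists."""
    return 1 - (1 - hazard) ** years_brain_exists


# Alice: 100 years alive, then cremated.
# Bob: 40 years alive + 60 years frozen, then cremated.
# Both brains stop existing at the end of the same 100-year window.
p_alice = p_spiteful_ai_before_cremation(100)
p_bob = p_spiteful_ai_before_cremation(40 + 60)

print(f"Alice: {p_alice:.3f}  Bob: {p_bob:.3f}")  # identical by construction
```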
If you’re alive, you can kill yourself when s-risk increases beyond your comfort point. If you’re preserved, you rely on other people to execute on those wishes.
Killing oneself with high certainty of effectiveness is more difficult than most assume. A failed attempt in the current era has rather extreme side effects on one’s health and personal freedom.
Anyways, emulating or reviving humans will always incur some cost; I suspect that those who are profitable to emulate or revive will get a lot more emulation time than those who are not.
If a future hostile agent just wants to maximize suffering, will foregoing preservation protect you from it? I think it’s far more likely that an unfriendly agent will simply disregard suffering in pursuit of some other goal. I’ve spent my regular life trying to figure out how to accomplish arbitrary goals more effectively with less suffering, so more of the same set of challenges in an afterlife would be nothing new.
“Killing oneself with high certainty of effectiveness is more difficult than most assume.”

Dying naturally also isn’t as smooth as plenty of people assume. I’m pretty sure that “taking things into your own hands” leads to a greater reduction in expected suffering in most cases, and it’s not informed rational analysis that prevents people from taking that option.
“If a future hostile agent just wants to maximize suffering, will foregoing preservation protect you from it?”

Yes? I mean, unless we entertain some extreme abstractions, like it simulating all possible minds of a certain complexity or whatever.
Right, but you might prefer:
living now >
not living, no chance of revival or torture >
not living, chance of revival later and chance of torture.
It’s not obvious to me that those are the same, though they might be. Either way, it’s not what I was thinking of. I was considering the Bob-1 you describe vs. a Bob-2 that lives the same 40 years and doesn’t have his brain frozen. It seems to me that Bob-1 (40L + 60F) is taking on a greater s-risk than Bob-2 (40L + 0F).
(Of course, Bob-1 is simultaneously buying a shot at revival, which is the whole point after all. Tradeoffs are tradeoffs.)
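As a rough illustration of that tradeoff (and of the preference ordering a few comments up), here is a toy expected-value sketch. Every probability and value in it is a placeholder assumption, not an estimate; the only point is that Bob-2 locks in zero post-death value, while Bob-1 buys a chance of revival at the cost of a chance of an s-risk outcome, so the comparison turns entirely on the numbers you plug in.

```python
# Placeholder assumptions only: none of these probabilities or values are
# estimates. They exist to show the shape of the tradeoff, not its answer.

P_REVIVAL = 0.05          # assumed chance preservation leads to a good revival
P_TORTURE = 0.001         # assumed chance preservation leads to an s-risk outcome
VALUE_REVIVAL = 1_000     # assumed value of a good revived life (arbitrary units)
VALUE_TORTURE = -100_000  # assumed disvalue of the s-risk outcome


def expected_value_preserved(p_revival, p_torture, v_revival, v_torture):
    """Expected post-death value for Bob-1 (40L + 60F)."""
    return p_revival * v_revival + p_torture * v_torture


ev_bob1 = expected_value_preserved(P_REVIVAL, P_TORTURE, VALUE_REVIVAL, VALUE_TORTURE)
ev_bob2 = 0.0  # Bob-2 (40L + 0F): no revival, no torture, nothing either way

print(f"Bob-1 (preserved):     {ev_bob1:+.1f}")
print(f"Bob-2 (not preserved): {ev_bob2:+.1f}")
# With these particular placeholders Bob-1 comes out behind; nudge P_REVIVAL
# up or VALUE_TORTURE toward zero and the ordering flips.
```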