Your argument rests on the premise that people who have suffered for a million years could, in theory, be rescued and made happy, requiring only “tech and time”. In an S-risk scenario, that doesn’t happen.
In what I’d consider the archetypal S-risk scenario, an AI takes over, starts simulating humans who suffer greatly, and there is no human agency ever again. The (simulated) humans experience great suffering until the AI runs out of power (some time trillions of years in the future, when the universe can no longer power any further computation), at which point they die anyway.
As for your points on consistency, I’m pretty sure a utilitarian philosophy that simply assigns utility zero to the brain state of being dead is consistent. Whether it actually matches people’s revealed preferences and moral intuitions, I’m not sure.
In the described scenario, the end result is omnicide. Thus, it is not much different from the AI immediately killing all humans.
The important difference is that there is some non-zero chance that, over those trillions of years, the AI might change its mind and reverse what it has done. Thus, I would say that the S-risk scenario is somewhat preferable to the quick killing.
As for your points on consistency, I’m pretty sure a utilitarian philosophy that simply assigns utility zero to the brain state of being dead is consistent.
In that case, the philosophy’s adherents have no preference between dying and doing anything else with zero utility (e.g. touching their nose). Since humans constantly encounter actions with zero utility, the adherents must either all be dead already or be acting inconsistently.
In the described scenario, the end result is omnicide. Thus, it is not much different from the AI immediately killing all humans.
I strongly disagree with this. I would much, much rather be killed immediately than suffer for a trillion years and then die. This is for the same reason that I would rather enjoy a trillion years of life and then die, than die immediately.
In that case, the philosophy’s adherents have no preference between dying and doing anything else with zero utility (e.g. touching their nose). Since humans constantly encounter actions with zero utility, the adherents must either all be dead already or be acting inconsistently.
I think you’re confusing the utility of a scenario with the expected utility of an action. Assigning zero utility to being dead is not the same as assigning zero expected utility to dying over not dying. If we let the expected utility of an action be defined relative to the expected utility of not doing that action, then “touching my nose”, which doesn’t affect my future utility, does have an expected utility of zero. But if I assign positive utility to my future existence, then killing myself has negative expected utility relative to not doing so.
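To make the distinction concrete, here is a minimal worked version of that argument (a sketch under simplifying assumptions: outcomes are deterministic, so expected utility reduces to the utility of the resulting state; $U(\mathrm{dead}) = 0$ as stipulated above; and some $U(\mathrm{alive}) > 0$ for my continued existence):

$$\Delta EU(a) \;=\; EU(\text{doing } a) \;-\; EU(\text{not doing } a)$$

$$\Delta EU(\text{touch nose}) \;=\; U(\mathrm{alive}) - U(\mathrm{alive}) \;=\; 0$$

$$\Delta EU(\text{kill myself}) \;=\; U(\mathrm{dead}) - U(\mathrm{alive}) \;=\; 0 - U(\mathrm{alive}) \;<\; 0$$

Assigning zero to the dead state just fixes a point on the scale; the comparison that matters is between that state and the future I would otherwise have had.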