In the described scenario, the end result is omnicide. Thus, it is not much different from the AI immediately killing all humans.
I strongly disagree with this. I would much, much rather be killed immediately than suffer for a trillion years and then die. This is for the same reason that I would rather enjoy a trillion years of life and then die, than die immediately.
In this case, the philosophy’s adherents have no preference between dying and doing something else with zero utility (e.g. touching their nose). Since humans encounter countless actions of zero utility, the adherents would either all be dead or be acting inconsistently.
I think you’re confusing the utility of a scenario with the expected utility of an action. Assigning zero utility to being dead is not the same as assigning zero expected utility to dying over not dying. If we let the expected utility of an action be defined relative to the expected utility of not doing that action, then “touching my nose”, which doesn’t affect my future utility, does have an expected utility of zero. But if I assign positive utility to my future existence, then killing myself has negative expected utility relative to not doing so.
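To make that concrete, here is a minimal sketch of the arithmetic, using illustrative symbols of my own: write $u > 0$ for the utility I assign to my future existence, and define an action's relative expected utility as the expected utility of doing it minus the expected utility of refraining from it:

$$
\begin{aligned}
\mathrm{EU}_{\mathrm{rel}}(a) &= \mathrm{EU}(\text{do } a) - \mathrm{EU}(\text{refrain from } a)\\
\mathrm{EU}_{\mathrm{rel}}(\text{touch my nose}) &= u - u = 0\\
\mathrm{EU}_{\mathrm{rel}}(\text{kill myself}) &= 0 - u = -u < 0
\end{aligned}
$$

So zero-utility actions like nose-touching are simply a matter of indifference, while suicide is strictly dispreferred; that asymmetry is what the nose-touching argument misses.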