You don’t need anywhere near as stark a contrast as this. In fact, it’s even harder if the agent (like many actual humans) has previously considered suicide, has experienced joy at not having gone through with it, and has then had periods of reconsideration. Intertemporal preference inconsistency is one consequence of the fact that we’re not actually rational agents. Your question boils down to “when an agent has inconsistent preferences, how do we choose which to support?”
My answer is “support the versions that seem to make my future universe better”. If someone wants to die, and I think the rest of us would be better off if that someone lives, I’ll oppose their death, regardless of what they “really” want. I’ll likely frame it as convincing them they don’t really want to die, and use the fact that they didn’t want that in the past as “evidence”, but really it’s mostly me imposing my preferences.
There are some people with whom I can have the altruistic version of this conversation: future-you AND future-me both prefer that you stick around. Do it for us? Even then, you can’t support any real person’s actual preferences, because those don’t exist. You can only support your current vision of the preferences you’d prefer them to have.