To discuss the truth of a claim, it’s first crucial to clarify what it means. What does it mean for quantum immortality to be true or not? The only relevant thing that comes to mind is whether MWI is correct. Large quantum computers might give evidence for that claim (though ASI very likely will be here first, unless there is a very robust AI Pause).
Once we know there are physical branching worlds, there is no further fact of “quantum immortality” to figure out. There are various instances of yourself in various world branches, a situation not so different from having multiple instances within a single world. Decision theory then ought to say how to weigh the consequences of possible influences and behaviors spread across those instances.
QI is a claim about first-person perspective observables – that I will always observe the next observer moment. This claim is stronger than just postulating that MWI is true and that there are many me-like minds in it from a third-person perspective. This difference can be illustrated by some people’s views about copies. They say: “I know that somewhere there will be my copy, but it will not be me, and if I die, I will die forever.” So they agree with the factual part but deny the perspectival part.
I agree that the main consideration here is decision-theoretic. However, we need to be suspicious of any decision theory that was designed specifically to prevent paradoxes like QI, or we end up with circular logic: “QI is false because our XDT, which was designed to prevent things like QI, says that we should ignore it.”
There is a counterargument (was it you who suggested it?) that the recommended decisions come out the same whether or not QI is valid. But this argument only holds for altruistic and updateless theories. For an egoistic EDT agent, QI would recommend playing Russian roulette for money.
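The Russian-roulette point can be made concrete with a toy expected-value calculation. All the specific numbers below are illustrative assumptions, not anything from the discussion; the point is only that the two ways of counting branches flip the sign of the bet.

```python
# Toy comparison: Russian roulette for money, with and without
# QI-style anticipation. Numbers are arbitrary illustrative stand-ins.

p_survive = 5 / 6        # one bullet in a six-chamber revolver
payout = 1000            # dollars won on survival
value_of_life = 10**7    # stand-in for the egoist's value of staying alive

# Standard accounting: death branches count as a real loss,
# weighted by their measure.
ev_standard = p_survive * payout + (1 - p_survive) * (-value_of_life)

# QI-style anticipation for an egoistic agent: the agent expects to
# experience only surviving branches, so death branches drop out
# (conditioning on survival).
ev_qi = payout

print(ev_standard)  # large negative: don't play
print(ev_qi)        # positive: play
```

Under standard branch-weighting the bet is strongly negative, while conditioning on survival makes it look like free money; that sign flip is exactly the decision difference the egoistic-EDT case turns on.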
A person is a complicated machine; we can observe how this machine develops or could develop through processes that we could set up in the world, or hypothetically. This is already quite clear, and notions like “first-person perspective” or “I will observe” don’t make it any clearer.
So I don’t see a decision theory proclaiming “QI is false!”; it’s just not a consideration the theory needs to deal with at any point, even if there were a way of saying more clearly what that consideration means. Like how a chip designer doesn’t need to appreciate the taste of good cheese to make better AI accelerators.