A person is a complicated machine; we can observe how this machine develops, or could develop, through processes we could set up in the world or hypothetically. This is already quite clear, and notions like “first-person perspective” or “I will observe” don’t make it any clearer.
So I don’t see a decision theory proclaiming “QI is false!”; it’s just not a consideration the theory needs to deal with at any point, even if there were a way of saying more clearly what that consideration means. Likewise, a chip designer doesn’t need to appreciate the taste of good cheese to make better AI accelerators.
We can escape the first-person perspective question by analyzing the optimal betting strategy of a rational agent regarding the most likely way of survival.
In the original thought experiment, there are 10 similar timelines in which 10 otherwise identical agents each guess a different digit of pi. Each agent also has a 1⁄128 chance to survive via a random coin toss.
The total survival chances are either 1⁄10 via guessing the pi digit correctly (exactly one agent survives) or approximately 10⁄128 via the random coin tosses (ignoring here the more complex formula for combining the two probabilities). 1⁄10 is still the larger.
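For the combination that the parenthetical skips, here is a minimal Monte Carlo sketch (in Python) of the conditional bet. It assumes the coin toss only matters for the nine agents who guessed wrong; the setup above leaves that detail open, and all names are illustrative:

```python
import random

TRIALS = 100_000
N_AGENTS = 10       # one agent per candidate digit; exactly one guess is right
P_COIN = 1 / 128    # per-agent survival chance via the random coin toss

via_pi = via_coin = 0
for _ in range(TRIALS):
    correct = random.randrange(N_AGENTS)  # which agent happened to guess right
    for agent in range(N_AGENTS):
        if agent == correct:
            via_pi += 1                   # survives via the correct pi guess
        elif random.random() < P_COIN:
            via_coin += 1                 # survives via the coin alone

total = via_pi + via_coin
print(f"share of survivors saved by the pi guess: {via_pi / total:.3f}")
# Analytic check: (1/10) / (1/10 + (9/10) * (1/128)) ≈ 0.934
```

So, conditional on finding yourself alive, roughly 93% of the surviving measure owes its survival to the pi guess, which is the bet described below.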
The experiment can be modified to use 10 random coins to get more decisive results.
Therefore, any agent can reasonably bet that, if they survive, the most likely way of survival will have been correctly guessing the pi digit. (All the usual caveats about the limits of betting arguments apply here.)
Whether to call this “immortality” is more of an aesthetic choice, but the fact remains that under the Many-Worlds Interpretation some of my copies survive any risk. The crux is whether an agent should treat their declining measure as a partial death.
Death/survival/selection have the might-makes-right issue of maintaining the normativity/actuality distinction. I think a major use of the weak orthogonality thesis is in rescuing these framings. That is, for most aims there is a way of formulating their pursuit as “maximally ruthless” without compromising any nuance of the aims/values/preferences, including any aspects of respect for autonomy or kindness within them. But that’s only the strange framing adding up to normality, useful where you need that framing for technical reasons.
Making decisions in a way that ignores your declining measure of influence on the world (due to death in most eventualities) doesn’t add up to normality. It’s a bit like saying that you can be represented by a natural number, and so don’t need to pay attention to reality at all, since all natural numbers are out there somewhere, including those representing you. I don’t see a way of rescuing this line of argument.
Indeed, whether QI matters depends on what I care about. If a mother cares about her child, quantum suicide would be a stupid act for her, since in most worlds the child would be left alone. If a person cares only about what they feel, QS makes more sense (in the same way that euthanasia makes sense only if quantum immortality is false).
A decision theory needs to have orthogonality; otherwise it’s not going to be applicable. Decisions about the content of values are always wrong; the only prudent choice is to defer them.
Orthogonality between goals and DT makes sense only if I don’t have preferences about the type of DT, or about the outcomes that a particular DT necessitates.
In the case of QI, orthogonality works if we use QI to earn money or to take care of relatives.
However, humans have preferences about existence and non-existence beyond normal monetary utility. In general, people strongly don’t want to die. This means I have a strong preference that some of my copies survive regardless, even if that is not very useful for some other preferences under some other DT.
Another point is the difference between quantum suicide and QI. QS is an action, but QI is just a prediction of future observations, and because of that it is less affected by decision theories. We can say that those copies of me who survive [a high-chance-of-death event] will say that they survived because of QI.
Having preferences is very different from knowing them. There’s always a process of reflection that refines preferences, so any current guess is always wrong, at least in detail. For a decision theory to have a shot at normativity, it needs to be able to adapt to corrections, and ideally to anticipate their inevitability (not locking in the older guess and preventing further reflection, but instead facilitating further reflection and remaining corrigible).
Orthogonality asks for the domain of applicability to be wide enough that neither various initial guesses nor longer-term refinements of them fall out of scope. When a theory makes assumptions about value content, that makes it a moral theory rather than a decision theory: a moral theory explores particular guesses about preferences of some nature.
So in the way you use the term, quantum immortality seems to be a moral theory, involving claims that quantum suicide can be a good idea. For example, “use QI to earn money” is a recommendation that depends on this assumption about the preferences of at least some people in some situations.
In that case, we can say that QI works only in EDT, but not in CDT or in UDT.
An interesting question arises: can we have a DT under which quantum suicide for money is a bad idea, but euthanasia is also a bad idea in a QI-world?
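As a minimal toy illustration of how the verdict flips with the evaluation rule (all utilities here are assumed purely for the example, not claims about anyone’s actual values):

```python
# Toy "quantum suicide for money" bet under two evaluation rules.
# All utilities are illustrative assumptions.

P_SURVIVE = 1 / 128   # measure of the branch where the bettor survives and is paid
U_PRIZE = 100.0       # utility of surviving with the money (assumed)
U_DECLINE = 10.0      # utility of simply declining the bet (assumed)
U_DEAD = 0.0          # utility a measure-weighted rule assigns to dead branches

# Measure-weighted rule (the CDT/UDT-flavored evaluation): dead branches count.
measure_weighted = P_SURVIVE * U_PRIZE + (1 - P_SURVIVE) * U_DEAD

# Survivor-conditioned rule (the QI-style, EDT-flavored evaluation): only
# branches containing a surviving copy enter the average.
survivor_conditioned = U_PRIZE

print(f"measure-weighted value:     {measure_weighted:.2f}  (declining: {U_DECLINE})")
print(f"survivor-conditioned value: {survivor_conditioned:.2f}  (declining: {U_DECLINE})")
# The first rule rejects the bet (~0.78 < 10); the second accepts it (100 > 10).
```

The open question above then amounts to asking for a rule that rejects this bet while also rejecting euthanasia in a QI-world.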