Indeed, whether QI matters depends on what I care about. If a mother cares about her child, quantum suicide would be a stupid act for her, as in most worlds the child will be left alone. If a person cares only about what he feels, QS makes more sense (in the same way that euthanasia makes sense only if quantum immortality is false).
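As a toy illustration of that value dependence, here is a minimal expected-utility sketch (all numbers are my own assumptions; the measure-weighted rule is the same for both agents, only the value content differs):

```python
# Toy expected-utility sketch: same measure-weighted rule for both agents,
# only the value content differs. All numbers are illustrative assumptions.

P_SURVIVE = 0.01           # assumed measure of the surviving branches
MONEY_IF_SURVIVE = 10.0    # assumed utility of the winnings
ORPHANED_CHILD = -100.0    # the mother's assumed disutility of the branches
                           # where the child is left alone
NONEXISTENCE = 0.0         # the self-regarding agent's value for branches
                           # that contain no experiences of his at all

def quantum_suicide_eu(utility_if_dead: float) -> float:
    """Measure-weighted expected utility of the 'QS for money' gamble."""
    return P_SURVIVE * MONEY_IF_SURVIVE + (1 - P_SURVIVE) * utility_if_dead

print(quantum_suicide_eu(ORPHANED_CHILD))  # ~ -98.9: clearly bad for the mother
print(quantum_suicide_eu(NONEXISTENCE))    #    0.1: mildly attractive to someone
                                           #    who counts only what he feels
```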
A decision theory needs orthogonality; otherwise it won't be applicable. Decisions about the content of values are always wrong; the only prudent choice is to defer them.
Orthogonality between goals and decision theory makes sense only if I don't have preferences about the type of DT itself, or about the outcomes that a particular DT necessitates.
In the case of QI, orthogonality works if we use QI to earn money or to care about relatives.
However, humans have preferences about existence and non-existence beyond ordinary monetary utility. In general, people strongly don't want to die. This means I have a strong preference that at least some of my copies survive anyway, even if this is not very useful for some of my other preferences under some other DT.
Another point is the difference between quantum suicide and QI. QS is an action, while QI is just a prediction of future observations, and because of that it is less affected by decision theories. We can say that those copies of me who survive a [high chance of death event] will say that they survived because of QI.
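A minimal simulation of that "just a prediction" point, under assumed numbers: however small the survival measure, every copy that is able to report anything reports survival, independently of which decision theory it uses.

```python
import random

# Minimal simulation of 'QI is just a prediction of future observations'
# (assumed setup and numbers, for illustration only): however small the
# survival measure, every copy able to report anything reports survival.

random.seed(0)
P_SURVIVE = 0.001      # assumed measure of surviving the event
N_BRANCHES = 100_000

survivor_reports = []
for _ in range(N_BRANCHES):
    if random.random() < P_SURVIVE:       # a branch containing a surviving copy
        survivor_reports.append("I survived")
    # branches with no surviving copy contribute no report at all

print(len(survivor_reports) / N_BRANCHES)                # ~0.001 of all branches
print(all(r == "I survived" for r in survivor_reports))  # True: 100% of the reports
```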
Having preferences is very different from knowing them. There is always a process of reflection that refines preferences, so any current guess is always wrong, at least in detail. For a decision theory to have a shot at normativity, it needs to be able to adapt to corrections and ideally anticipate their inevitability (not locking in the older guess and preventing further reflection, but instead facilitating further reflection and remaining corrigible).
Orthogonality requires the domain of applicability to be wide enough that neither the various initial guesses nor longer-term refinements of them fall out of scope. When a theory makes assumptions about value content, that makes it a moral theory rather than a decision theory. A moral theory explores particular guesses about preferences of some nature.
So in the way you use the term, quantum immortality seems to be a moral theory, involving claims that quantum suicide can be a good idea. For example, “use QI to earn money” is a recommendation that depends on this assumption about preferences (of at least some people in some situations).
In that case, we can say that QI works only under EDT, but not under CDT or UDT.
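One way to make that contrast concrete is a hedged sketch like the following; the formalization and numbers are my own assumptions, not anything from the thread or a textbook:

```python
# Hedged sketch of the EDT / CDT contrast; the formalization and numbers are
# my own assumptions. Each branch: (measure, observer_present, utility as the
# agent counts it).
BRANCHES = [
    (0.01, True, 10.0),   # survive the gamble and collect the money
    (0.99, False, 0.0),   # no surviving observer in this branch
]

def cdt_value(branches):
    """CDT-style evaluation: weight every branch by its physical measure."""
    return sum(m * u for m, _, u in branches)

def edt_value_given_observation(branches):
    """EDT-style evaluation on a QI reading: condition on 'I will make a
    future observation', which discards the observer-free branches."""
    observed = [(m, u) for m, has_observer, u in branches if has_observer]
    total_measure = sum(m for m, _ in observed)
    return sum(m * u for m, u in observed) / total_measure

print(cdt_value(BRANCHES))                    # 0.1  -> QS for money looks unattractive
print(edt_value_given_observation(BRANCHES))  # 10.0 -> QS for money looks like free money
```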
An interesting question arises: can we have a DT which shows that quantum suicide for money is a bad idea, but that euthanasia is also a bad idea in a QI world?