Predicated on MWI being correct, and Quantum Immortality being true:
It is most advantageous for any individual (although not necessarily for society) to take as many high-risk high-reward opportunities as possible as long as the result of failure is likely to be death. 90%
Phrased more precisely: it is most advantageous for the quantum immortalist to attempt highly unlikely, high-reward activities, after making a stern precommitment to commit suicide in a fast and decisive way (decapitation?) if they don't work out.
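A minimal sketch of the arithmetic behind this (the utility numbers are illustrative assumptions, not anything from the thread): conditioning on survival drops the dead branches from the expected-utility average, which is what makes the precommitment look attractive to the quantum immortalist.

```python
# Toy expected-utility comparison for a long-shot gamble where
# failure (plus the suicide precommitment) means death. All
# numbers are illustrative assumptions, not from the discussion.

p_win = 0.01      # probability the long shot pays off
u_win = 1000.0    # utility of the reward
u_life = 100.0    # utility of declining and living as usual
u_death = 0.0     # utility assigned to dead branches

# Standard expected utility: every branch is weighted by its measure.
eu_standard = p_win * u_win + (1 - p_win) * u_death   # = 10.0

# Survival-conditioned ("quantum immortalist") utility: dead
# branches are dropped from the average, so only the winning
# branch remains.
eu_conditioned = u_win                                # = 1000.0

print(eu_standard, u_life, eu_conditioned)
# Declining wins under standard EU (100 > 10), but the gamble
# dominates once dead branches are ignored (1000 > 100).
```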
This seems like a great reason not to trust quantum immortality.
Not sure how I should vote on this. Predicated on quantum immortality being true, the assertion seems almost tautological, so that'd be a downvote. The main question to me is whether quantum immortality should be taken seriously to begin with.
However, a different assertion, namely that if MWI is correct you should assume quantum immortality works and try to give yourself anthropic superpowers by pointing a gun at your head, would make for an interesting rationality game point.
Perhaps a separate vote on that then?
Which way do I vote things that aren’t so much wrong as they are fundamentally confused?
Thinking about QI as something about which to ask 'true or false?' implies not having fully grasped the implications of (MWI) quantum mechanics for preference functions. At the very least the question would need to be changed to 'desired or undesired'.
So, the question to ask is whether quantum immortality ought to be reflected in our preferences, right?
It’s clear that evolution would not have given humans a set of preferences that anticipates quantum immortality. The only sense in which I can imagine it to be “true” is if it turns out that there’s an argument that can convince a sufficiently rational person that they ought to anticipate quantum immortality when making decisions.
(Note: I have endorsed the related idea of quantum suicide in the past, but now I am highly skeptical.)
My strategy is to behave as though quantum immortality is false until I’m reasonably sure I’ve lost at least 1-1e-4 of my measure due to factors beyond my control, then switch to acting as though quantum immortality works.
If you lose measure with time, you'll eventually lose any given amount. It's better to follow a two-outcome lottery: with probability 1-1e-4 you continue business as usual, and otherwise you act as if quantum suicide preserves value. (See the sketch below.)
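A quick arithmetic sketch of this point (the decay rate is an assumption chosen for illustration): under any steady rate of measure loss, a fixed threshold like 1-1e-4 is crossed in finite time with certainty, so the threshold strategy collapses into "eventually act as though quantum immortality works", whereas the lottery makes a single up-front draw.

```python
import math

# If measure shrinks at any steady rate, the rule "switch once
# I've lost 1-1e-4 of my measure" fires with certainty eventually.

loss_per_year = 0.01       # assumed fractional measure loss per year
remaining_target = 1e-4    # switch when remaining measure falls below this

# Years until (1 - loss_per_year)**t <= remaining_target:
years = math.log(remaining_target) / math.log(1 - loss_per_year)
print(f"threshold crossed after ~{years:.0f} years")  # ~916 years

# The two-outcome lottery instead decides once, up front: draw a
# single random number; with probability 1-1e-4 behave as usual
# forever, otherwise act as though quantum suicide preserves value.
```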
I can’t think of any purely self-interested reason why any individual should care about their measure (I grant there are altruistic reasons).
Do you think there is a difference between what you would care about before you jumped in the box to play with Schrodinger’s cat and what you would care about after?
Yes, but it’s unclear why I should.