Which way do I vote things that aren’t so much wrong as they are fundamentally confused?
Thinking about QI as something about which to ask ‘true or false?’ implies not having fully grasped the implications of (MWI) quantum mechanics on preference functions. At the very least, the question would need to be changed to ‘desired or undesired?’.
So, the question to ask is whether quantum immortality ought to be reflected in our preferences, right?
It’s clear that evolution would not have given humans a set of preferences that anticipates quantum immortality. The only sense in which I can imagine it to be “true” is if it turns out that there’s an argument that can convince a sufficiently rational person that they ought to anticipate quantum immortality when making decisions.
(Note: I have endorsed the related idea of quantum suicide in the past, but now I am highly skeptical.)
My strategy is to behave as though quantum immortality is false until I’m reasonably sure I’ve lost at least 1-1e-4 of my measure due to factors beyond my control, then switch to acting as though quantum immortality works.
If you lose measure continuously with time, then given enough time you’ll lose any fixed amount, so the threshold strategy switches with certainty. It’s better to run a one-shot, two-outcome lottery: with probability 1-1e-4 you continue business as usual; otherwise you act as if quantum suicide preserves value.
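A minimal sketch of the contrast between the two strategies, assuming for illustration that measure decays by a constant factor per period (the function names and parameters here are my own, not anything from the thread). Under steady decay, the threshold rule fires after finitely many periods no matter how slow the decay; the lottery rule switches with a fixed, small probability once and for all:

```python
import random

def threshold_strategy_switch_time(p=0.999, threshold=1e-4):
    """Period at which remaining measure first drops below `threshold`,
    i.e. when the 'switch to QI-mode' rule fires. Assumes measure decays
    as p**t. Always finite for any p < 1: the switch happens with certainty."""
    measure, t = 1.0, 0
    while measure >= threshold:
        measure *= p
        t += 1
    return t

def lottery_strategy(p_switch=1e-4, rng=random.random):
    """One-shot lottery: switch to QI-mode with fixed probability
    `p_switch`; otherwise continue business as usual forever."""
    return rng() < p_switch

# With 0.1% measure loss per period, the threshold rule inevitably fires:
print("threshold rule fires at period", threshold_strategy_switch_time())
# The lottery rule switches only with probability 1e-4, ever:
print("lottery says switch:", lottery_strategy())
```

The point of the sketch is only the qualitative asymmetry: any per-period decay rate below 1 makes the first function return a finite number, whereas the second function's switch probability stays pinned at 1e-4 regardless of how long you live.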
I can’t think of any purely self-interested reason why an individual should care about their measure (though I grant there are altruistic reasons).
Do you think there is a difference between what you would care about before you jumped in the box to play with Schrödinger’s cat and what you would care about after?
Yes, but it’s unclear why I should.