I do have a PhD in Physics, classical General Relativity specifically. But you wanted someone who adheres to MWI, and that is not me.
Some thoughts from Sean Carroll on the topic of Quantum Immortality:
https://www.reddit.com/r/seancarroll/comments/9drd25/quantum_immortality/e5l663t/
And this one from Scott Aaronson:
https://www.scottaaronson.com/blog/?p=2643#comment-1001030
Celebrity or not, both are quite likely to reply to a polite yet anxious email, since they can actually relate to your worries, if maybe not on the same topic.
That would be optimal, but I still would like to hear your thoughts.
Unfortunately, neither of them seems to grasp the argument. The whole point of it is that, as a conscious being, you cannot experience any outcome in which you die. So even if your survival is ridiculously improbable in the universal wavefunction, you can’t ‘wake up dead’. Hence you will always find your subjective self in that improbable branch.
Another terrible thought: what if it doesn’t depend on you dying as a whole? What if no part of your consciousness can be removed or can degrade?
EDIT: Sleep doesn’t refute that, as there is no real proof that you experience less while unconscious (rather, you may simply not be self-aware). But it would imply that people with brain damage are P-zombies, so that seems untenable.
Hence you will always find your subjective self in that improbable branch.
The meaning of “you will always find” has a connotation of certainty or high probability, but we are specifically talking about essentially impossible outcomes. This calls for tabooing “you will always find” to reconcile the intended meaning with the extreme improbability of the outcome. Worrying about such outcomes might make sense when they are seen as a risk on the dust speck side of Torture vs. Dust Specks (their extreme disutility overcomes their extreme improbability). But conditioning on survival seems to be a wrong way of formulating values (see also), because the thing to value is the world, not exclusively subjective experience, even if subjective experience manages to get a significant part of that value.
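To make the “extreme disutility overcomes extreme improbability” comparison concrete, here is a minimal expected-utility sketch in Python. Every number in it is an illustrative assumption, not an estimate of any actual QI scenario:

```python
# Illustrative only: an extremely improbable outcome can still dominate an
# expected-utility comparison if its assumed disutility is large enough.
p_qi_branch = 1e-30    # hypothetical probability of the feared survival-and-suffering branch
u_qi_branch = -1e40    # hypothetical (astronomically bad) utility assigned to that branch
p_mundane = 0.1        # hypothetical probability of an ordinary bad outcome
u_mundane = -1.0       # hypothetical utility of that ordinary outcome

ev_qi = p_qi_branch * u_qi_branch     # -1e10
ev_mundane = p_mundane * u_mundane    # -0.1
print(ev_qi, ev_mundane)              # the rare branch dominates in expectation
```

Whether that dominance actually holds depends entirely on the numbers you plug in, which is the point of treating it as a Torture vs. Dust Specks trade-off rather than as a certainty.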
The meaning of “you will always find” has a connotation of certainty or high probability, but we are specifically talking about essentially impossible outcomes.
Why? Nothing is technically impossible with quantum mechanics. It is indeed possible for every single atom of our planet to spontaneously disappear.
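For a sense of scale, a rough back-of-the-envelope sketch (the per-atom probability is a made-up placeholder, not a physical estimate, and independence between atoms is assumed purely for illustration):

```python
import math

N = 1e50          # order-of-magnitude count of atoms in the Earth
p_atom = 1e-40    # placeholder per-atom probability of spontaneously tunnelling away

# The probability that all N atoms do so independently is p_atom ** N, which
# underflows ordinary floats, so work with its base-10 logarithm instead.
log10_joint = N * math.log10(p_atom)
print(log10_joint)    # -4e51, i.e. a joint probability of about 10**(-4e51)
```

The result is nonzero, which is all the claim requires, though it is smaller than any probability we normally bother to name.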
This could make sense as a risk on the dust speck side of Torture vs. Dust Specks, but conditioning on survival seems to be just wrong as a way of formulating values (see also).
You’re not understanding that all of our measure is going into those branches where we survive.
Nothing is technically impossible with quantum mechanics.
By “essentially impossible” I meant “extremely improbable”. The word “essentially” was meant to distinguish this from “physically impossible”.
You’re not understanding that all of our measure is going into those branches where we survive.
There is a useful distinction between knowing the meaning of an idea and knowing its truth. I’m disagreeing with the claim that “all of our measure is going into those branches where we survive”, understood in the sense that only those branches have moral value (see What Are Probabilities, Anyway?), in particular the other branches taken together have less value. See the posts linked from the grandparent comment for a more detailed discussion (I’ve edited it a bit).
This meaning could be different from one you intend, in which case I’m not understanding your claim correctly, and I’m only disagreeing with my incorrect interpretation of it. But in that case I’m not understanding what you mean by “all of our measure is going into those branches where we survive”, not that “all of our measure is going into those branches where we survive” in the sense you intend, because the latter would require me to know the intended meaning of that claim first, at which point it becomes possible for me to fail to understand its truth.
By “essentially impossible” I meant “extremely improbable”. The word “essentially” was meant to distinguish this from “physically impossible”.
I don’t see how it refutes the possibility of QI, then.
There is a useful distinction between knowing the meaning of an idea and knowing its truth. I’m disagreeing with the claim that “all of our measure is going into those branches where we survive”, understood in the sense that only those branches have moral value (see What Are Probabilities, Anyway?), in particular the other branches taken together have less value. See the posts linked from the grandparent comment for a more detailed discussion (I’ve edited it a bit).
This meaning could be different from one you intend, in which case I’m not understanding your claim correctly, and I’m only disagreeing with my incorrect interpretation of it. But in that case I’m not understanding what you mean by “all of our measure is going into those branches where we survive”, not that “all of our measure is going into those branches where we survive” in the sense you intend, because the latter would require me to know the intended meaning of that claim first, at which point it becomes possible for me to fail to understand its truth.
According to QI, we (as in our internal subjective experience) will continue on only in branches where we stay alive. Since I care about my subjective internal experience, I wouldn’t want it to suffer (if you disagree, press a live clothes iron to your arm and you’ll see what I mean).
I don’t see how it refutes the possibility of QI, then.
See the context of that phrase. I don’t see how it could be about “refuting the possibility of QI”. (What is “the possibility of QI”? I don’t find anything wrong with QI scenarios themselves, only with some arguments about them, in particular the argument that their existence has decision-relevant implications because of conditioning on subjective experience. I’m not certain that they don’t have decision-relevant implications that hold for other reasons.)
[We] (as in our internal subjective experience) will continue on only in branches where we stay alive.
This seems tautologically correct. See the points about moral value in the grandparent comment and in the rest of this comment for what I disagree with, and why I don’t find this statement relevant.
Since I care about my subjective internal experience, I wouldn’t want it to suffer
Neither would I. But this is not all that people care about. We also seem to care about what happens outside our subjective experience, and in quantum immortality scenarios that component of value (things that are not personally experienced) is dominant.
No, it isn’t. The same thing will happen to everyone in your branch (you don’t see it, of course, but it will subjectively happen to them).
Perhaps you don’t understand what the argument says. You, as in the person you are right now, are going to experience that. Not an infinitesimal proportion of other ‘yous’ while the majority die. Your own subjective experience, 100% of it.
You, as in the person you are right now, are going to experience that.
This has the same issue with “are going to experience” as the “you will always find” I talked about in my first comment.
Not an infinitesimal proportion of other ‘yous’ while the majority die. Your own subjective experience, 100% of it.
Yes. All of the surviving versions of myself will experience their survival. This happens with extremely small probability. I will experience nothing else. The rest of the probability goes to the worlds where there are no surviving versions of myself, and I won’t experience those worlds. But I still value those worlds more than the worlds that have surviving versions of myself. The things that happen to all of my surviving subjective experiences matter less to me than the things that I won’t experience happening in the other worlds. Furthermore, I believe that not as a matter of unusual personal preference, but for general reasons about the structure of valuing things that I think should convince most other people; see the links in the above comments.
To be clear: your argument is that every human being who has ever lived may suffer eternally after death, and there are good reasons for not caring...?
That requires an answer that, at the very least, you should be able to put in your own words. How does our subjective suffering improve anything in the worlds where you die?
To be clear: your argument is that every human being who has ever lived may suffer eternally after death, and there are good reasons for not caring...?
It’s not my argument, but it follows from what I’m saying, yes. Even if people should care about this, there are probably good reasons not to, just not good enough to tilt the balance. There are good reasons for all kinds of wrong conclusions; it should be suspicious when there aren’t. Note that caring about this too much is the same as caring about other things too little. Also, as an epistemic principle, appreciation of arguments shouldn’t depend on the consequences of agreeing with them.
How does our subjective suffering improve anything in the worlds where you die?
Focusing effort on the worlds where you’ll eventually die (as well as the worlds where you survive in a normal non-QI way) improves them at the cost of neglecting the worlds where you eternally suffer for QI reasons.
...and here’s about when I realize what a mistake it was setting foot in LessWrong again for answers.
Rationalists love criticism that helps them improve their thinking. But this complaint is too vague to be any help to us. What exactly went wrong, and how can we do better?
Asking for an exact, complete error report might be a bit taunting in challenging error states. I am sure partial hints would also be appreciated.
The meaning of “you will always find” has a connotation of certainty or high probability, but we are specifically talking about essentially impossible outcomes
High subjective probability is compatible with low objective probability.
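A toy simulation of that distinction (the survival probability below is an arbitrary illustrative number): the objective frequency of survival can be made as small as you like, yet among the outcomes anyone is around to report, survival is observed every time.

```python
import random

random.seed(0)
p_survive = 1e-4    # illustrative objective survival probability per branch
trials = 1_000_000

outcomes = [random.random() < p_survive for _ in range(trials)]
print(sum(outcomes) / trials)               # objective frequency: roughly 1e-4

# Only surviving branches contain an observer who can report anything at all.
experienced = [o for o in outcomes if o]
print(sum(experienced) / len(experienced))  # subjective frequency among reports: 1.0
```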
I don’t buy that argument about sleep, but what about anesthesia? I see no reason why successor observer-moments have to be contiguous in either time or space. They likely will be due to the laws of physics, but we’re talking about improbable outcomes here. Your unconscious body is not your successor. It’s an inanimate object that has a high probability of generating a successor observer-moment at a later time. (That is, it might wake up as you.)