By “essentially impossible” I meant “extremely improbable”. The word “essentially” was meant to distinguish this from “physically impossible”.
I don’t see how it refutes the possibility of QI, then.
There is a useful distinction between knowing the meaning of an idea and knowing its truth. I’m disagreeing with the claim that “all of our measure is going into those branches where we survive”, understood in the sense that only those branches have moral value (see What Are Probabilities, Anyway?), in particular the other branches taken together have less value. See the posts linked from the grandparent comment for a more detailed discussion (I’ve edited it a bit).
This meaning could be different from the one you intend, in which case I’m not understanding your claim correctly, and I’m only disagreeing with my incorrect interpretation of it. But in that case what I fail to understand is what you mean by “all of our measure is going into those branches where we survive”, not the truth of “all of our measure is going into those branches where we survive” in the sense you intend, because the latter would require me to know the intended meaning of that claim first, at which point it becomes possible for me to fail to understand its truth.
According to QI, we (as in our internal subjective experience) will continue on only in branches where we stay alive. Since I care about my subjective internal experience, I wouldn’t want it to suffer (if you disagree, press a live clothes iron to your arm and you’ll see what I mean).
I don’t see how it refutes the possibility of QI, then.
See the context of that phrase. I don’t see how it could be about “refuting the possibility of QI”. (What is “the possibility of QI”? I don’t find anything wrong with QI scenarios themselves, only with some arguments about them, in particular the argument that their existence has decision-relevant implications because of conditioning on subjective experience. I’m not certain that they don’t have decision-relevant implications that hold for other reasons.)
[We] (as in our internal subjective experience) will continue on only in branches where we stay alive.
This seems tautologously correct. See the points about moral value in the grandparent comment and in the rest of this comment for what I disagree with, and why I don’t find this statement relevant.
Since I care about my subjective internal experience, I wouldn’t want it to suffer
Neither would I. But this is not all that people care about. We also seem to care about what happens outside our subjective experience, and in quantum immortality scenarios that component of value (things that are not personally experienced) is dominant.
No, it isn’t. The same thing will happen to everyone in your branch (you don’t see it, of course, but it will subjectively happen to them).
Perhaps you don’t understand what the argument says. You, as in the person you are right now, is going to experience that. Not an infinitesimal proportion of other ‘yous’ while the majority die. Your own subjective experience, 100% of it.
You, as in the person you are right now, is going to experience that.
This has the same issue with “is going to experience” as the “you will always find” I talked about in my first comment.
Not an infinitesimal proportion of other ‘yous’ while the majority die. Your own subjective experience, 100% of it.
Yes. All of the surviving versions of myself will experience their survival. This happens with extremely small probability. I will experience nothing else. The rest of the probability goes to the worlds where there are no surviving versions of myself, and I won’t experience those worlds. But I still value those worlds more than the worlds that have surviving versions of myself. The things that happen to all of my surviving subjective experiences matter less to me than the things that I won’t experience happening in the other worlds. Furthermore, I believe that not as a matter of unusual personal preference, but for general reasons about the structure of valuing of things that I think should convince most other people; see the links in the above comments.
To be clear: your argument is that every human being who has ever lived may suffer eternally after death, and there are good reasons for not caring...?
That requires an answer that, at the very least, you should be able to put in your own words. How does our subjective suffering improve anything in the worlds where you die?
To be clear: your argument is that every human being who has ever lived may suffer eternally after death, and there are good reasons for not caring...?
It’s not my argument, but it follows from what I’m saying, yes. Even if people should care about this, there are probably good reasons not to, just not good enough to tilt the balance. There are good reasons for all kinds of wrong conclusions; it should be suspicious when there aren’t. Note that caring about this too much is the same as caring about other things too little. Also, as an epistemic principle, appreciation of arguments shouldn’t depend on consequences of agreeing with them.
How does our subjective suffering improve anything in the worlds where you die?
Focusing effort on the worlds where you’ll eventually die (as well as the worlds where you survive in a normal non-QI way) improves them at the cost of neglecting the worlds where you eternally suffer for QI reasons.
...and here’s about when I realize what a mistake it was setting foot in LessWrong again for answers.
Rationalists love criticism that helps them improve their thinking. But this complaint is too vague to be any help to us. What exactly went wrong, and how can we do better?
Asking for an exact, complete error report might be a bit daunting in challenging error states. I’m sure partial hints would also be appreciated.