I’ve just finished reading your post. Basically what it says is: if I care about reality, I should care about all future branches, not just the ones where I’m alive (or have achieved some desired result, like a million dollars). Okay, I get that. I do care about all future branches (well, the ones I can affect, anyway). But here’s the thing: I care even more about the first-person mental states that I will actually be/experience.
Let’s say that a version of me will be tortured in branch A, while another version of me will be sipping his coffee in branch B. From an outside perspective, it’s irrelevant (meaningless, even) which version of me gets tortured; but if ‘I’ ‘end up’ in branch A, I’ll care a whole lot.
So yeah, if I don’t sign up for cryonics and if Aubrey de Grey and Eliezer slack off too much, I expect to die, in the same sense that I don’t expect to win the lottery. I also expect to actually have the first-person experience of dying over the course of millennia. And I care about both of these things, but in different ways. Is there a contradiction here? I don’t think there is.
The two senses of “care” are different, and it’s dangerous to confuse them. (I’m going to ignore the psychological aspects of their role and talk only about their consequentialist role.) The first sense is relevant to the decisions that affect whether you die and what other events happen in those worlds: you have to care about the event of dying, and about the worlds where it happens, in order to plan the shape of the events in those worlds, including the avoidance of death. The second sense of “caring” is relevant to giving up, to planning for the event of not dying: within that hypothetical you no longer control the worlds where you died, so there is no point in taking them into account in your planning.
Caring about the futures where you survive is an optimization trick, and its applicability depends on the following considerations: (1) the probability of survival, and hence the relative importance of planning for survival as opposed to other possibilities; (2) the marginal value of planning further for the general case, taking into account the worlds where you don’t survive; and (3) the marginal value of planning further for the special case of survival. If, as with quantum immortality, the probability of survival is too low, it isn’t worth your thought to work on the situation where you survive; you should instead worry about the general case. Only once you find yourself in an improbable quantum immortality situation (i.e. you survive) should you start caring about it, and not before, since at that point you have lost control over the general situation.
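To make the comparison among (1)–(3) concrete, here is a minimal sketch of the allocation rule I mean, with all numbers invented purely for illustration (the function and its parameters are hypothetical, not anything from the post): an extra unit of planning effort should go to the survival-specific case only when its probability-weighted marginal value beats the marginal value of general planning.

```python
# Minimal sketch of the allocation rule; all figures are made up for illustration.

def where_to_plan(p_survive: float, mv_general: float, mv_survival_only: float) -> str:
    """Decide where the next unit of planning effort should go.

    p_survive        -- probability of the survival branch (hypothetical figure)
    mv_general       -- marginal value of further planning for the general case,
                        which pays off in every branch
    mv_survival_only -- marginal value of further planning specifically for the
                        survival branch, which pays off only if you survive
    """
    # Survival-specific planning is discounted by how unlikely survival is;
    # general planning is not, because it applies across all branches.
    ev_survival_plan = p_survive * mv_survival_only
    ev_general_plan = mv_general
    return "plan for survival" if ev_survival_plan > ev_general_plan else "plan for the general case"

# Quantum-immortality-style numbers: survival is astronomically unlikely, so even
# a huge payoff from survival-specific planning loses to ordinary general planning.
print(where_to_plan(p_survive=1e-12, mv_general=1.0, mv_survival_only=1e6))
# -> plan for the general case
```

Once you have actually survived, p_survive is effectively 1 from where you stand, and the same comparison flips; that is the sense in which you should start caring only then.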