The two senses of “care” are different, and it’s dangerous to confuse them. (I’m going to ignore the psychological aspects of their role and talk only about their consequentialist role.) The first is relevant to the decisions that affect whether you die and what other events happen in those worlds: you have to care about the event of dying, and about the worlds where it happens, in order to plan the shape of events in those worlds, including the avoidance of death. The second sense of “caring” is relevant to giving up, to planning for the event of not dying: within that hypothetical you no longer control the worlds where you died, so there is no point in taking them into account in your planning.
Caring only about the futures where you survive is an optimization trick, and its applicability depends on the following considerations: (1) the probability of survival, and hence the relative importance of planning for survival as opposed to other possibilities; (2) the marginal value of planning further for the general case, taking the worlds where you don’t survive into account; (3) the marginal value of planning further for the special case of survival. If, as is the case with quantum immortality, the probability of survival is too low, it isn’t worth your thought to work on the situation where you survive; you should instead worry about the general case. Once you get into an improbable quantum-immortality situation (i.e. survive), only then should you start caring about it (since at that point you do lose control over the general situation), and not before.
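To make the tradeoff among (1)–(3) explicit, here is a rough formalization in my own notation (not from the original): write p for the probability of survival, ΔV_survive for the marginal value of a unit of planning effort aimed specifically at the survival case, and ΔV_general for the marginal value of the same unit spent on general-case planning. Specializing to the survival case is worthwhile only when p · ΔV_survive > ΔV_general. With quantum immortality, p is astronomically small, so the inequality fails beforehand; but conditional on having actually survived, p is effectively 1 for the purposes of further planning, and the comparison flips.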