First of all, I think your argument from connection of past/future selves is just a specific case of the more general argument for reflective consistency, and thus does not imply any kind of “selfishness” in and of itself. More detail is needed to specify a notion of selfishness.
I understand your argument against identifying yourself with another person who might counterfactually have been in the same cell, but the problem here is that, if you don't know how the coin actually came up, you still have to assign amounts of “care” to the possible selves you could actually be.
Let’s say that, as in my reasoning above, there are two cells, B and C; when the coin comes up tails humans are created in both cell B and cell C, but when the coin comes up heads a human is created in either cell B or cell C, with equal probability. Thus there are 3 “possible worlds”:
1) p = 1/2: human in both cells
2) p = 1/4: human in cell B, cell C empty
3) p = 1/4: human in cell C, cell B empty
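To keep the bookkeeping straight, here is a minimal sketch of the three worlds in Python (exact fractions); it also computes the probability of tails given that cell B is occupied, which will be relevant below:

```python
from fractions import Fraction as F

# The three possible worlds: prior probability and which cells contain a human.
worlds = [
    ("tails: humans in B and C", F(1, 2), {"B", "C"}),
    ("heads: human in B only",   F(1, 4), {"B"}),
    ("heads: human in C only",   F(1, 4), {"C"}),
]

assert sum(p for _, p, _ in worlds) == 1

# Probability that cell B is occupied, and that the coin came up tails given that.
p_B_occupied = sum(p for _, p, cells in worlds if "B" in cells)   # 3/4
p_tails_given_B = F(1, 2) / p_B_occupied                          # 2/3
print(p_B_occupied, p_tails_given_B)
```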
If you’re a selfish human and you know you’re in cell B, then you don’t care about world (3) at all, because there is no “you” in it. However, you still don’t know whether you’re in world (1) or (2), so you still have to “care” about both worlds. Moreover, in either world the “you” you care about is clearly the person in cell B, and so I think the only utility function that makes sense is S = $B. If you want to think about it in terms of either SSA-like or SIA-like assumptions, you get the same answer, because in both world (1) and world (2) there is only a single observer who could be identified as “you”.
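To make “the only utility function that makes sense is S = $B” concrete, take the bet to be the usual one in this kind of setup (each created human may buy a ticket for $x that pays $1 if the coin came up tails; I’m assuming that bet here, since only the resulting cutoff is discussed below). Whether you give world (3) zero weight or renormalise over worlds (1) and (2), the break-even price comes out the same:

```python
from fractions import Fraction as F

def breakeven(eu):
    """Break-even ticket price for an expected utility that is linear in the price x."""
    a = eu(F(0))                      # eu(x) = a - b*x
    b = eu(F(0)) - eu(F(1))
    return a / b

# Assumed bet: buy a ticket for $x that pays $1 if the coin came up tails.
# Utility S = $B; world (3) contains no "you", so it contributes zero utility.
def eu_selfish_in_B(x):
    return F(1, 2) * (1 - x) + F(1, 4) * (-x) + F(1, 4) * 0

print(breakeven(eu_selfish_in_B))     # 2/3

# Renormalising over worlds (1) and (2) instead gives the same cutoff.
def eu_renormalised(x):
    return F(2, 3) * (1 - x) + F(1, 3) * (-x)

print(breakeven(eu_renormalised))     # 2/3
```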
Now, what if you didn’t know whether you were in cell B or cell C? That’s where things are a little different. In that case, there are two observers in world (1), either of whom could be “you”. There are basically two different ways of assigning utility over the two different “yous” in world (1): adding them together, like a total utilitarian, or averaging them, like an average utilitarian; the resulting cutoff prices are x = 2/3 and x = 1/2, respectively. Moreover, the first approach is equivalent to SIA, and the second is equivalent to SSA.
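With the same assumed bet (a $x ticket paying $1 on tails, bought by every human who exists), the two aggregation rules give exactly these cutoffs:

```python
from fractions import Fraction as F

def breakeven(eu):
    a = eu(F(0))                      # eu(x) = a - b*x
    b = eu(F(0)) - eu(F(1))
    return a / b

# Assumed bet as above; in world (1) both humans buy a ticket.
def eu_adding(x):       # add the two "yous" in world (1): total-utilitarian, SIA-like
    return F(1, 2) * 2 * (1 - x) + F(1, 4) * (-x) + F(1, 4) * (-x)

def eu_averaging(x):    # average the two "yous" in world (1): average-utilitarian, SSA-like
    return F(1, 2) * (1 - x) + F(1, 4) * (-x) + F(1, 4) * (-x)

print(breakeven(eu_adding))      # 2/3
print(breakeven(eu_averaging))   # 1/2
```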
However, the SSA answer has a property that none of the others do. If the gnome were to tell the human “you’re in cell B”, an SSA-using human would change their cutoff point from 1/2 to 2/3. This seems rather strange, because whether the human is in cell B or in cell C is not in any way relevant to the payoff. No human with any of the other utility functions we’ve considered would change their answer upon being told that they are in cell B.
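Still under the same assumed bet, the shift is easy to check: once you know you’re in cell B there is only one candidate “you” per remaining world, so both aggregation rules reduce to the S = $B calculation above and give 2/3.

```python
from fractions import Fraction as F

def breakeven(eu):
    a = eu(F(0))
    b = eu(F(0)) - eu(F(1))
    return a / b

# After "you're in cell B": world (3) is ruled out and only the B-human can be
# "you", so adding and averaging coincide with the S = $B calculation.
after = breakeven(lambda x: F(1, 2) * (1 - x) + F(1, 4) * (-x))
print(after)   # 2/3: the averaging (SSA-like) human moves from 1/2 to 2/3,
               # while the adding (SIA-like) human was already at 2/3
```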