I don’t think I said it was incoherent. Where are you getting that from?
To expand on a point that may be confusing: indexically-selfish preferences (valuing yourself over copies of you) will get precommitted away if you are given the chance to precommit before being copied. Ordinary selfish preferences would also get precommitted away, but only if you had the chance to precommit at some point before you came into existence (this is where Rawls comes in).
So if you have a decision theory that says “do what you would have precommitted to do,” well, you end up with different results depending on when people get to precommit. If we start from a completely ignorant agent and then add information, precommitting at each step, you end up with a Rawlsian altruist. If we just start from yesterday, then if you got copied two days ago you can be indexically selfish, but if you got copied this morning you can’t.
The problem is that Rawls gets the math wrong even in the case he analyzes.