Anyhow, the short answer is that the reason people have done a bunch of extra work is that we don’t just want an English-language explanation of what happens; we want to describe a specific computation. Not that the verbal descriptions aren’t really useful, but precision has its merits: it often takes stating a specific algorithm to realize that your algorithm does something you don’t want, and that you actually have to go back and revise your verbal description.
For example, a decision algorithm based on precommitment is unable to hold selfish preferences (valuing a cookie for me more than a cookie for a copy of me) in anthropic situations (apologies for how messy that series of posts is). But since I’m of the opinion that it’s okay to have selfish preferences, I need to use a more general model of what an ideal decision theory looks like.
> For example, a decision algorithm based on precommitment is unable to hold selfish preferences (valuing a cookie for me more than a cookie for a copy of me) in anthropic situations
I disagree that it makes sense to talk about one of the future copies of you being “you” whereas the other isn’t. They’re both you to the same degree (if they’re exact copies).
I agree with you there. What I mean by selfish preferences is that after the copies are made, each copy will value a cookie for itself more than a cookie for the other copy: it’s possible that a copy wouldn’t buy the other copy a cookie for $1, but would buy itself a cookie for $1. This is the indexical version of the ordinary selfishness that causes people to buy themselves a $1 cookie rather than give that $1 to GiveDirectly (which is what they’d do if they had made their precommitments behind a Rawlsian veil of ignorance).
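To put toy numbers on that (the weights and the $1.50 cookie value in this sketch are made up for illustration, not anything from the discussion): each copy discounts cookies that go to the other copy, so the same $1 purchase clears the bar when the cookie is for itself and fails when it is for its twin.

```python
# Toy numbers (entirely made up) for the indexically-selfish setup:
# each copy weights the other copy's cookies less than its own.

def copy_utility(cookies_for_me, cookies_for_other, dollars_spent,
                 own_weight=1.0, other_weight=0.3, cookie_value=1.5):
    """Utility of one copy, in dollars. own_weight > other_weight is the
    indexically-selfish part; a copy-altruistic agent would set them equal."""
    return (own_weight * cookie_value * cookies_for_me
            + other_weight * cookie_value * cookies_for_other
            - dollars_spent)

# Buying myself a $1 cookie: 1.0 * 1.5 - 1 = +0.5, so I buy it.
print(copy_utility(1, 0, 1))

# Buying my copy a $1 cookie: 0.3 * 1.5 - 1 = -0.55, so I don't.
print(copy_utility(0, 1, 1))
```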
Confused. What’s incoherent about caring equally about copies of myself, and less about everyone else?

I don’t think I said it was incoherent. Where are you getting that from?
To expand on a point that may be confusing: indexically-selfish preferences (valuing yourself over copies of you) will get precommitted away if you are given the chance to precommit before being copied. Ordinary selfish preferences would also get precommitted away, but only if you had the chance to precommit much earlier, something like before you came into existence (this is where Rawls comes in).
So if you have a decision theory that says “do what you would have precommitted to do,” well, you end up with different results depending on when people get to precommit. If we start from a completely ignorant agent and then add information, precommitting at each step, you end up with a Rawlsian altruist. If we just start from yesterday, then if you got copied two days ago you can be indexically selfish, but if you got copied this morning you can’t.
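Here’s a minimal sketch of that dependence, with the function name, time variables, and wording all my own invention rather than anything canonical; it just pins down how “do what you would have precommitted to do” changes with the precommitment point.

```python
# Toy model (my own framing): a rule that says "do what you would have
# precommitted to do," where the answer depends on whether the
# precommitment point falls before or after the copying event.

def precommitted_policy(precommit_time, copy_time):
    """Policy the agent would have locked in at precommit_time.

    From before the copying, the agent expects to become *both* copies,
    so the only precommitment available treats the copies symmetrically.
    From after the copying, each copy can lock in favoritism toward itself.
    """
    if precommit_time < copy_time:
        return "treat both copies symmetrically"
    return "favor the copy making the decision"

# Copied two days ago, precommitting yesterday: indexical selfishness survives.
print(precommitted_policy(precommit_time=-1, copy_time=-2))

# Copied this morning, precommitting yesterday: it gets precommitted away.
print(precommitted_policy(precommit_time=-1, copy_time=0))

# Push precommit_time all the way back to before the agent exists (and knows
# nothing about who it will be), and the same logic commits away ordinary
# selfishness too -- that's the Rawlsian-altruist endpoint.
```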
Does it? I’m not so sure.
The problem is that Rawls gets the math wrong even in the case he analyzes.