I think the paper’s treatment (section 3.3.3) of “selfish” (i.e., indexically expressed) preferences is wrong, unless I’m not understanding it correctly. Assuming the incubator variant, what does your solution say a Beauty should do if we tell her that she is in Room 1 and then ask her what price she would pay for a lottery ticket that pays $1 on Heads? Applying section 3.3.3 seems to suggest that she should still pay $0.50 for this ticket, the same price as in the original case where we didn’t tell her her room number, but that is clearly wrong. Or rather, at least one of those two $0.50 valuations must be wrong, because otherwise we can money-pump her and make her lose money with probability 1.
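To make the arithmetic behind my worry concrete, here is a minimal Monte Carlo sketch of the incubator variant. The accounting convention (averaging profit per purchasing Beauty) is my own assumption for illustration, not the paper’s formalism, and the question of which accounting a selfish Beauty should use is exactly what is at issue.

```python
import random

# Incubator Sleeping Beauty (my own toy model, not the paper's setup):
# Heads -> one Beauty is created, in Room 1.
# Tails -> two Beauties are created, in Rooms 1 and 2.
# Each ticket costs $0.50 and pays $1 on Heads. We compare the average
# profit per purchasing Beauty in two situations: (1) no room information,
# every Beauty buys; (2) only Beauties told "you are in Room 1" buy.

TRIALS = 100_000
PRICE = 0.50

profit_no_info = 0.0   # every created Beauty buys the ticket
profit_room1 = 0.0     # only Room-1 Beauties buy the ticket
n_no_info = 0
n_room1 = 0

for _ in range(TRIALS):
    heads = random.random() < 0.5
    rooms = [1] if heads else [1, 2]
    payoff = 1.0 if heads else 0.0
    for room in rooms:
        # Situation 1: no room number disclosed, she pays $0.50.
        profit_no_info += payoff - PRICE
        n_no_info += 1
        # Situation 2: room number disclosed; only the Room-1 offer matters.
        if room == 1:
            profit_room1 += payoff - PRICE
            n_room1 += 1

print(f"avg profit per ticket, no room info:  {profit_no_info / n_no_info:+.3f}")
print(f"avg profit per ticket, told 'Room 1': {profit_room1 / n_room1:+.3f}")
```

Under this per-Beauty accounting the no-information average comes out to about −$0.17 (two thirds of the purchasing Beauties live in Tails worlds), while the Room-1 average is $0.00 (half of the Room-1 Beauties live in Heads worlds). The two situations come apart, which is why I don’t see how $0.50 can be the right selfish price in both of them at once.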
“Selfish” preferences are still very confusing to me, especially if copying or death is a future possibility. Are they even legitimate preferences, or just insanity that should be discarded (as steven0461 suggested)? If the former, should we convert them into non-indexically expressed preferences (i.e., instead of “Give me that chocolate bar”, “Give that chocolate bar to X” where X is a detailed description of my body), or should our decision theory handle such preferences natively? (Note that UDT can’t handle such preferences without prior conversion.) I don’t know how to do either, and this paper doesn’t seem to supply the solution I’ve been looking for.