My preferred way of doing anthropics while keeping probabilities around is to update your probabilities according to the chance that at least one of the decision-making agents your decision is logically linked to exists, and then prioritise the worlds where there are more of those agents by acknowledging that you're making the decision for all of them. This yields the same (correct) conclusions as SIA when you're only making decisions for yourself, and as FNC when you're making decisions for all of your identical copies, but it avoids the paradoxes brought up in this article, and it lets you take into account that you're making decisions for all of your similar copies, which you want for Newcomb-like problems.
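To make that a bit more concrete, here's a minimal sketch of the procedure in Python. The worlds, agent counts, and payoffs are entirely made up for illustration and aren't taken from anything above:

```python
# A rough sketch of the decision procedure described above, with made-up
# numbers. Each world has a prior probability, a count of agents whose
# decision is logically linked to yours, and a payoff per such agent for
# each available action.

worlds = {
    # world: (prior, number of linked agents, {action: payoff per agent})
    "few_copies":  (0.5, 1, {"take": 10, "pass": 0}),
    "many_copies": (0.5, 3, {"take": -2, "pass": 0}),
}

# Step 1: update on the chance that at least one linked agent exists.
# Here that chance is 1 in both worlds, so the update is trivial; in
# general it would be P(at least one linked agent exists | world).
p_exists = {w: 1.0 if n > 0 else 0.0 for w, (_, n, _) in worlds.items()}
unnorm = {w: prior * p_exists[w] for w, (prior, _, _) in worlds.items()}
total = sum(unnorm.values())
posterior = {w: p / total for w, p in unnorm.items()}

# Step 2: pick the action that maximises payoff, weighting each world by
# its number of linked agents, since you're deciding for all of them.
def score(action):
    return sum(posterior[w] * n * payoffs[action]
               for w, (_, n, payoffs) in worlds.items())

best = max(["take", "pass"], key=score)
print({a: score(a) for a in ["take", "pass"]}, "->", best)
```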
However, I think it’s possible to construct even more contorted scenarios where conservation of expected evidence is violated for this as well. If there are 2 copies of you, a coin is flipped, and:
If it’s heads, the copies are presented with two different choices.
If it’s tails, the copies are presented with the same choice.
then you know in advance that you will update towards heads when you’re presented with a choice a minute later, since heads makes it twice as likely that anyone would be presented with that specific choice. I don’t know if there’s any way around this. Maybe if you update your probabilities according to the chance that someone following your decision theory is around, rather than someone making your exact choice, or something like that?
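For concreteness, here's a rough sketch of how that update goes through under this rule. It assumes (this isn't stated above) that under tails the single shared choice is picked at random from the same two options used under heads, and that you then find yourself facing choice A:

```python
# Minimal sketch of the coin-flip scenario. Assumptions not in the original
# comment: the two possible choices are "A" and "B"; under heads copy 1 is
# shown A and copy 2 is shown B; under tails a fair draw picks one of A/B
# and shows it to both copies. You observe that you are facing choice A.

prior = {"heads": 0.5, "tails": 0.5}

# Chance that at least one agent logically linked to your decision
# (i.e. someone facing choice A) exists in each world.
p_linked_agent_exists = {
    "heads": 1.0,   # A is always presented to one of the two copies
    "tails": 0.5,   # A is presented (to both copies) only if the draw picks A
}

unnormalised = {w: prior[w] * p_linked_agent_exists[w] for w in prior}
total = sum(unnormalised.values())
posterior = {w: p / total for w, p in unnormalised.items()}

print(posterior)  # heads: 2/3, tails: 1/3 -> you update towards heads
```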