Yeah, I’m not sure which invariants hold in anthropic situations either. Can you try to come up with a toy example of such process?
Have a series of ten coin flips that create copies of the agent (tails) or not (heads). The outcomes of the flips are known to all agents soon after they are created.

If you take the full sequences of ten coin flips and put any utility over your copies having $s in those universes, you can get something that looks like any sort of updating.

If you value *when* your copies get money (for instance, you value them getting it at HTH, but not, or not so much, after either HTHH… or HTHT…), then this looks like a non-Bayesian updating process.
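One way to make the toy example concrete is a sketch like the following. It assumes (my assumption, not stated above) that valuations consistent with Bayesian updating of a single prior should satisfy a tower-property check over the flip tree: the weight an agent places on its copies getting $1 at a history should equal the summed weight over that history's one-flip extensions. The specific `weight` values encode the example in the text: value at HTH, none after HTHH… or HTHT….

```python
# Toy check: is a utility over "copies get $1 at history h"
# consistent with Bayesian conditioning of one prior?
# Assumption: Bayesian-consistent weights obey a tower property
# over the binary tree of flip histories.

def weight(history: str) -> float:
    """Utility weight on copies receiving $1 at `history`
    (hypothetical values taken from the example in the text)."""
    if history == "HTH":
        return 1.0
    return 0.0  # no value after HTHH..., HTHT..., or elsewhere

def bayes_consistent(history: str) -> bool:
    """Tower-property check: weight at a node should equal the
    summed weight over its two one-flip extensions."""
    return weight(history) == weight(history + "H") + weight(history + "T")

print(bayes_consistent("HH"))   # True: 0 == 0 + 0
print(bayes_consistent("HTH"))  # False: 1 != 0 + 0
```

The failure at HTH is the point: the agent puts weight on money arriving exactly at HTH but zero weight on every continuation, which no single prior conditioned on the flip history can reproduce.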