I go back and forth as to whether this is a deep result or not.
It’s clear that different ways of aggregating lead to different “effective anthropic probabilities”. If you want to be correct in the most worlds, follow SSA; if you want most of your copies to be correct, follow SIA.
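A minimal sketch of that contrast, using the standard incubator example (a fair coin creates one copy on heads, two on tails; the setup and numbers are my illustration, not anything from the discussion above):

```python
# Incubator toy model: a fair coin creates 1 copy on heads, 2 copies on tails.
# The "effective anthropic probability" of tails depends on what you aggregate over.

worlds = {"heads": {"prior": 0.5, "copies": 1},
          "tails": {"prior": 0.5, "copies": 2}}

# SSA-flavoured: weight each world by its prior alone. Betting to be correct
# in the most worlds means acting on the bare world-probabilities.
ssa_tails = worlds["tails"]["prior"]                       # 0.5

# SIA-flavoured: weight each world by prior * number of copies, renormalised.
# Betting so that the most copies are correct favours copy-rich worlds.
weighted = {name: d["prior"] * d["copies"] for name, d in worlds.items()}
sia_tails = weighted["tails"] / sum(weighted.values())     # 2/3

print(f"world-counting (SSA-style) credence in tails: {ssa_tails}")
print(f"copy-counting (SIA-style) credence in tails: {sia_tails:.3f}")
```

The same prior yields 1/2 or 2/3 depending only on whether worlds or copies are counted; the aggregation choice is doing all the work.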
You’ve described a situation in which people update according to their experience, and, because this update is used to weight their gains and losses, the earlier copies behave as if they were using the updated values as probabilities.

It seems like you could use any “updating” process, even a non-Bayesian one that violates conservation of expected evidence, to similar effect.
Yeah, I’m not sure which invariants hold in anthropic situations either. Can you try to come up with a toy example of such a process?
Have a series of ten coin flips, where each tails creates a copy of the agent and each heads does not. The flips so far are known to all agents soon after the agents are created.
If you take the full sequences of ten coin flips, and put any utility you like on your copies having $s in those universes, you can get something that looks like any sort of updating.

If you value when your copies get money (for instance, you value them getting it at HTH, but not, or not so much, after either HTHH… or HTHT…), then this looks like a non-Bayesian updating process.
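As a sketch of that last point, suppose (my hypothetical numbers, keeping to the HTH example) a weight w(h) says how much the agent values $1 delivered to a copy right after history h, and ignore the copy-multiplication from tails to keep things minimal. Reading the agent’s betting odds as probabilities then makes the “probability” of a fixed event depend on when the bet pays out:

```python
from itertools import product

# Fair flips; w(h) is how much the agent values $1 delivered to a copy
# right after history h. (Hypothetical weights; copy-counting ignored.)
def w(history):
    if history == "HTH":
        return 1.0   # values copies getting money at HTH...
    if history in ("HTHH", "HTHT"):
        return 0.1   # ...but much less just after HTHH or HTHT
    return 0.5       # default weight everywhere else

def effective_prob(now, event, pay_time):
    """Effective probability of `event` for the agent at history `now`,
    read off the odds it accepts on a bet that pays after `pay_time` flips."""
    n = pay_time - len(now)
    num = den = 0.0
    for flips in product("HT", repeat=n):
        h = now + "".join(flips)
        stake = 0.5 ** n * w(h)   # prior weight of the branch * value of $1 there
        den += stake
        if event(h):
            num += stake
    return num / den

flip3_heads = lambda h: h[2] == "H"   # the event "the third flip was heads"

# Same agent, same evidence (history HT), same event, different payout times:
print(effective_prob("HT", flip3_heads, pay_time=3))   # 0.5/0.75 ~ 0.667
print(effective_prob("HT", flip3_heads, pay_time=4))   # 0.05/0.3 ~ 0.167
```

A Bayesian’s odds on a fixed event don’t depend on the payout date, so no single credence function updated by conditioning reproduces this behaviour, and the implied updates violate conservation of expected evidence.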