Because there’s no information flow between the coins, to stay self-consistent the “always assign 50% to heads” method has to not change its probabilities under irrelevant information. So this isn’t so much a reconciliation as a demonstration that always assigning 50% to heads violates an axiom.
Irrelevant information is just information that doesn’t change the probabilities. If this does, it’s relevant.
It’s not a reconciliation. They get about the same results, not exactly the same.
Irrelevant information is just information that doesn’t change the probabilities, as long as you follow the axioms of probability. If we suspect that always assigning 50% to heads might be the wrong method, i.e. axiom-violating, then deciding which information is relevant based on that method is putting the cart before the horse.
What other ways can you tell whether information is relevant? Bayes’ rule is a good tool for it, because you know it follows the axioms of probability. Here is the path I followed: If the probabilities of the two “worlds” are 1⁄3 and 2⁄3, you expect to see 50% heads and 50% tails on the second coin. If the probabilities in the two worlds are 1⁄2 and 1⁄2, you still expect 50⁄50. The probabilities on the second coin are then 50⁄50 no matter which rule is right. If we see heads or tails, Bayes’ rule says we should update our probabilities by a factor of P(B|A)/P(B) (where A is which world we’re in and B is the second coin’s outcome), which is 0.5/0.5, or 1. No change. Since we trust Bayes’ rule to follow the axioms of probability, anything that disagrees with it must not be following them.
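To make that update factor concrete, here is a minimal sketch of the calculation. The function name and the specific priors are just illustrative; the only real assumption is that the second coin is fair and independent of which world you’re in.

```python
def posterior(prior_world1, observed="heads",
              p_heads_world1=0.5, p_heads_world2=0.5):
    """P(world 1 | second coin's outcome), by Bayes' rule."""
    prior_world2 = 1 - prior_world1
    like_w1 = p_heads_world1 if observed == "heads" else 1 - p_heads_world1
    like_w2 = p_heads_world2 if observed == "heads" else 1 - p_heads_world2
    p_obs = prior_world1 * like_w1 + prior_world2 * like_w2   # P(B)
    return prior_world1 * like_w1 / p_obs                     # P(A) * P(B|A) / P(B)

for prior in (1/3, 1/2):
    for outcome in ("heads", "tails"):
        print(prior, outcome, posterior(prior, outcome))
# In every case the posterior equals the prior: the update factor is 0.5/0.5 = 1.
```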
Or you might go the conservation of expected evidence route. If the second coin landing heads makes you change your probabilities one way, conservation of expected evidence (another thing that has a fairly short and trustworthy derivation from the axioms of probability) says that the coin landing tails should make you change your probabilities the opposite way. Does it?
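Here is the same setup run through that check, again only a sketch assuming a fair second coin that is independent of which world you’re in.

```python
# Conservation-of-expected-evidence check: a prior P(A) on "world 1"
# and a fair second coin whose outcome B is independent of A.
p_A = 1/3                                 # try 1/2 as well; the conclusion is the same
p_B = 0.5                                 # P(second coin lands heads)
p_A_given_heads = p_A * 0.5 / p_B         # Bayes: P(A | heads)
p_A_given_tails = p_A * 0.5 / (1 - p_B)   # Bayes: P(A | tails)

# The posteriors, weighted by how likely each outcome is, must average back to
# the prior. If heads pushed P(A) up, tails would have to pull it down by a
# compensating amount. Here neither moves it at all.
assert abs(p_A_given_heads * p_B + p_A_given_tails * (1 - p_B) - p_A) < 1e-12
print(p_A_given_heads, p_A_given_tails)   # both 1/3
```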
The underlying reason the information is irrelevant is that in our causal world, you don’t get a correlation (i.e. information about one event from knowing the other) without a causal path between the two events, like a common ancestor, or conditioning on a common causal descendant. But the coinflips were independent when flipped, and we didn’t condition on any causal descendants of the second coinflip (like, say, creating more copies).
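A toy simulation of that independence, purely illustrative: two fair coins flipped with no common cause and no conditioning on a shared descendant.

```python
import random

random.seed(0)
flips = [(random.random() < 0.5, random.random() < 0.5) for _ in range(100_000)]

# Estimate P(first coin = heads), unconditionally and given the second coin came up heads.
p_first = sum(a for a, _ in flips) / len(flips)
first_when_second_heads = [a for a, b in flips if b]
p_first_given_second = sum(first_when_second_heads) / len(first_when_second_heads)
print(p_first, p_first_given_second)   # both ~0.5: the second coin carries no information
```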