I think your proof falls apart if you add “learning what the factions’ positions are” to the model? Because then the update on the biconditional could occur when learning the factions’ positions, rather than violating the martingale property.
I agree you could imagine someone who didn’t know the factions’ positions. But of course any real-world person who’s about to become politically opinionated DOES know the factions’ positions.
More generally, the proof is valid in the sense that if P1 and P2 are true (and the person’s degrees of belief are representable by a probability function), then Martingale fails. So you’d have to say how adding that factor would make one of P1 or P2 false. (I think if you were to press on this, you should say P1 fails, since not knowing what the positions are still lets you know that people’s opinions, whatever they are, are correlated.)
Maybe a clearer way to frame it is that I’m objecting to this assumption:
Naturally, he treats the two independently: becoming convinced that abortion is wrong wouldn’t shift his opinions about guns. As a consequence, if he’s a Bayesian then he’s also 50-50 on the biconditional A <-> G.
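Just to make the arithmetic concrete, here’s a minimal numerical sketch of what I have in mind. The numbers and the two-way “bundling” setup are made up for illustration, not taken from your proof; A and G just stand for the abortion and gun claims.

```python
# Toy sketch with made-up numbers; A = the abortion claim, G = the gun claim.

# Before learning the factions' positions: 50-50 on each, treated independently,
# so the biconditional A <-> G gets P(A)P(G) + P(~A)P(~G).
p_a, p_g = 0.5, 0.5
p_bicond_prior = p_a * p_g + (1 - p_a) * (1 - p_g)
print(p_bicond_prior)  # 0.5

# After learning that the factions bundle A together with G, opinions are
# correlated, so most of the mass sits on "both" or "neither" (illustrative only).
p_both, p_neither, p_only_a, p_only_g = 0.45, 0.45, 0.05, 0.05
p_bicond_after = p_both + p_neither
print(p_bicond_after)  # 0.9

# In this toy version he doesn't yet know WHICH way the factions bundle the
# issues: with probability 1/2 they pair A with G (biconditional goes to 0.9),
# with probability 1/2 they pair A with not-G (it goes to 0.1).
expected_posterior = 0.5 * 0.9 + 0.5 * 0.1
print(expected_posterior)  # 0.5 -- equals the prior, so no martingale violation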