I think some of the confusion here comes from the unclear notion of a Bayesian agent with beliefs about theorems of PA. The reformulation I gave with Alice, Bob, and Carol makes the problem clearer.
Yeah, I did find that reformulation clearer, but then it also doesn’t seem to be about filtered evidence?
Like, it seems like you need two conditions to get the impossibility result, now using English instead of math:
1. Alice believes Carol is always honest (at least with probability > 50%)
2. For any statement s: [if Carol will ever say s, Alice currently believes that Carol will eventually say s (at least with probability > 50%)]
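A rough symbolic transcription of the two conditions (this notation is my own sketch, not the original theorem statement; $P$ is Alice’s current credence and $\mathrm{Says}(s)$ means “Carol eventually says $s$”):

```latex
% Condition 1: Alice assigns credence > 1/2 to Carol's blanket honesty,
% i.e. to the single statement that everything Carol ever says is true.
P\bigl(\forall s:\ \mathrm{Says}(s) \rightarrow \mathrm{True}(s)\bigr) > \tfrac{1}{2}

% Condition 2: for every statement s that Carol will in fact say at some point,
% Alice already assigns credence > 1/2 to Carol eventually saying it.
\forall s:\ \mathrm{Says}(s) \;\Longrightarrow\; P\bigl(\mathrm{Says}(s)\bigr) > \tfrac{1}{2}
```

On this reading, the quantifier in condition 1 sits inside the probability (one belief about Carol’s honesty in general), while in condition 2 it sits outside (a separate belief for each statement Carol actually says).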
It really seems like the difficulty here is with condition 2, not with condition 1, so I don’t see how this theorem has anything to do with filtered evidence.
Maybe the point is just “you can’t perfectly update on X and Carol-said-X, because you can’t have a perfect model of them, because you aren’t bigger than they are”?
(Probably you agree with this, given your comment.)
The problem is not in one of the conditions separately but in their conjunction: see my follow-up comment. You could argue that learning an exact model of Carol doesn’t really imply condition 2 since, although the model does imply everything Carol is ever going to say, Alice is not capable of extracting this information from the model. But then it becomes a philosophical question of what it means to “believe” something. I think there is value in the “behaviorist” interpretation that “believing X” means “behaving optimally given X”. In this sense, Alice can separately believe the two facts described by conditions 1 and 2, but cannot believe their conjunction.
I still don’t get it, but it’s probably not worth digging further. My current confusion is that even under the behaviorist interpretation, it seems like just believing condition 2 already implies knowing everything Carol will ever say (or else Alice has a mistaken belief). Probably this is a confusion that would go away with enough formalization / math, but it doesn’t seem worth doing that.