Can someone try to make that argument more precise? It seems to me that the claim “Sorry. It can’t be done” sounds plausible but fails in the most obvious limit case: a proof of a mathematical theorem doesn’t become less correct if I found it by deliberately trying to prove the theorem. Since Bayesian reasoning approaches classical logic in the limit of certainty (probabilities of 0 and 1), the claim might be wrong for Bayesian reasoning too.
It is possible to gain evidence in favor of hypothesis X from Bob, who you know has X as his bottom line. However, Bob can’t force this outcome, because it’s also possible that his attempt to convince you of X will backfire. For any fixed strategy on Bob’s part, the expected effect on your beliefs is towards the true value of X, not towards the value Bob wants; with mixed strategies (or just silence) he can keep you from gaining information, but he can’t reduce your net accuracy.
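A minimal sketch of why a known fixed strategy can’t bias you, in standard notation (this is just conservation of expected evidence, nothing specific to Bob): let E range over the messages the strategy can produce, and compute the expected posterior.

```latex
\mathbb{E}[P(X \mid E)] = \sum_e P(e)\, P(X \mid e) = \sum_e P(X, e) = P(X)

\mathbb{E}[P(X \mid E) \mid X]
  = \sum_e P(e \mid X)\, P(X \mid e)
  = \frac{\mathbb{E}[P(X \mid E)^2]}{P(X)}
  \;\ge\; \frac{\mathbb{E}[P(X \mid E)]^2}{P(X)}
  = P(X)
```

The first line says that unconditionally your expected posterior equals your prior, so no strategy you have correctly modeled can shift your beliefs in expectation. The second line (using P(e | X) = P(X | e) P(e) / P(X) and Jensen’s inequality) says that conditional on X actually being true, your expected posterior is at least your prior: whatever fixed strategy Bob picks, your update tends toward the truth.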
Applied to the finite-likelihood case: initially you assign some probability to X, and conditional on each value of X you have a probability distribution over the possible observations about X. Suppose Bob looks at those observations, filters out the ones that would be evidence against X if you had seen them directly, and gives you the remainder. But what you are actually observing is “the number of observations that pass Bob’s filtering algorithm”, which is itself a variable with different distributions under X and under ~X, and if it takes a value that is more likely given ~X than given X, you update downwards.
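Here is a minimal numerical sketch of that filtering case in Python (the numbers and the coin-flip model are illustrative assumptions, not from the original argument):

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of k successes in n independent trials with success probability p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Assumed setup: X = "the coin is heads-biased" (per-flip heads probability 0.8),
# ~X = "the coin is fair" (0.5). Bob watches n flips and forwards only the heads,
# i.e. only the observations that are individually evidence for X.
n = 10
prior_x = 0.5
k_passed = 5  # number of observations that pass Bob's filter

# What you actually observe is k_passed, so you update on *its* likelihood:
like_x = binom_pmf(k_passed, n, 0.8)
like_not_x = binom_pmf(k_passed, n, 0.5)
posterior_x = like_x * prior_x / (like_x * prior_x + like_not_x * (1 - prior_x))

print(f"P(X | {k_passed} of {n} passed the filter) = {posterior_x:.3f}")  # ~0.097
```

Every item Bob forwarded was pro-X on its own, but only 5 of 10 flips surviving a “heads only” filter is much more likely under the fair coin, so the posterior drops from 0.5 to about 0.097.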
Applied to the deductive-proof case: initially you assign some probability to X, and Bob goes looking for a mathematical proof of X. If he finds one, he tells you the proof and you update to certainty of X. But if X is false there is no proof to find, so when you learn that Bob looked and came back empty-handed, you update downwards (though not all the way to certainty of ~X, since Bob might simply have missed a proof that exists).
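And a matching sketch of the proof-search case (the prior and Bob’s search-success probability are assumed numbers for illustration; the sketch explicitly allows for Bob missing a proof that exists):

```python
prior_x = 0.5
p_find = 0.9  # assumed: P(Bob finds a proof | X is true)

# If X is false there is no proof to find, so "no proof found" is certain;
# if X is true, the search fails with probability 1 - p_find.
p_none_given_x = 1 - p_find
p_none_given_not_x = 1.0

posterior_x = (p_none_given_x * prior_x) / (
    p_none_given_x * prior_x + p_none_given_not_x * (1 - prior_x)
)
print(f"P(X | Bob reports no proof) = {posterior_x:.3f}")  # ~0.091
```

If instead Bob produces a proof, the likelihood of that report under ~X is zero and the posterior goes to 1, matching the update-to-certainty case above.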
Very nice, thanks!