On “If you can’t provide me with a reason …”, I think the correct position is: when someone says X (and apparently means it, is someone whose opinions you expect to have some correlation with reality, etc.), you update towards X; if they then can’t give good reasons for X, you update back towards not-X. The overall update could end up in either direction: if the person in question is particularly wise but not great at articulating reasons, or if X is the sort of thing whose supporting evidence you expect to be hard to articulate, the overall update is probably towards X.
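To make that concrete, here is a small worked sketch in the odds form of Bayes’ rule. The numbers are invented purely for illustration: the assertion of X supplies one likelihood ratio, the failure to articulate reasons supplies a second, and the net direction of the update depends on how strongly you expected good reasons to be articulable.

```python
# Illustrative numbers only. Stage 1: the bare assertion of X is evidence for X.
# Stage 2: failing to give good reasons is evidence against X. The net update
# can land on either side of the prior, depending on the second likelihood ratio.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Update a probability by a likelihood ratio via the odds form of Bayes' rule."""
    odds = prior / (1.0 - prior)
    odds *= likelihood_ratio
    return odds / (1.0 + odds)

p0 = 0.50                # prior P(X)
p1 = posterior(p0, 4.0)  # they assert X: 0.80, if assertions are 4x likelier when X is true

# Case A: good reasons would usually be easy to give, so their absence is strong evidence against X.
print(round(posterior(p1, 0.2), 2))  # 0.44 -- net update is towards not-X

# Case B: the supporting evidence for X is the hard-to-articulate kind, so their absence means little.
print(round(posterior(p1, 0.9), 2))  # 0.78 -- net update is still towards X
```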
That seems about right.
A concern I didn’t mention in the post: it isn’t obvious how to handle the game-theoretic issues here. Carefully estimating the size of the update to make when someone fails to provide good reasons is difficult, since you have to model other agents, and you might make exploitable errors.
An extreme way of addressing this is to ignore all evidence short of mathematical proof whenever you have any non-negligible suspicion of manipulation, similar to the mistake I describe myself making in the post. That seems too extreme, but it isn’t clear what the right overall policy is. A fully Bayesian estimate of the strength of the evidence should, I think, behave much like a good game-theoretic solution, but there might be reason to use a simpler strategy with less chance of exploitable patterns.
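To gesture at what such a simpler strategy could look like (purely a sketch of the trade-off, with an invented cap parameter, not something argued for above): one option is to bound the likelihood ratio you will grant to testimony you can’t verify, so a manipulator who controls that testimony can only move your odds by a fixed factor; a cap of 1 recovers the too-extreme policy of ignoring everything short of proof.

```python
# Purely illustrative: a capped update rule as one possible "simpler strategy".
# The cap bounds how far unverifiable testimony can move you, at the cost of
# sometimes ignoring genuinely strong evidence.

def bayes_update(odds: float, likelihood_ratio: float) -> float:
    """Fully Bayesian: apply whatever likelihood ratio you estimated."""
    return odds * likelihood_ratio

def capped_update(odds: float, likelihood_ratio: float, cap: float) -> float:
    """Clamp the likelihood ratio for unverified claims to [1/cap, cap].
    cap = 1.0 is the extreme policy of ignoring everything short of proof."""
    clamped = max(1.0 / cap, min(cap, likelihood_ratio))
    return odds * clamped

odds = 1.0  # even prior odds on X
print(bayes_update(odds, 50.0))        # 50.0 -- an inflated ratio moves you a lot
print(capped_update(odds, 50.0, 3.0))  # 3.0  -- the capped updater moves at most 3x
print(capped_update(odds, 50.0, 1.0))  # 1.0  -- cap of 1: no update without proof
```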