I find this outline helpful. I do, however, have a quibble.
If you believe X because you want to, any arguments you make for X, no matter how strong they sound, are devoid of informational content about X and should properly be ignored by a truth-seeker.
This seems slightly inaccurate. It would imply that a truth-seeking judge would decide cases just as well (or better) without hearing from the lawyers as with, because lawyers are paid to advocate for their clients. More accurate would be:
If you believe X because you want to, your belief in X is devoid of informational content about X and should properly be ignored by a truth-seeker.
If you believe X for reasons unrelated to X being true, your testimony becomes worthless because your belief in X is not correlated with X. But arguments for X are another matter.
Example: Alice says, “There is no largest prime number,” and backs it up with an argument. You are now in possession of two pieces of evidence for Alice’s claim C:
(1) Alice’s argument. Call this “Argument.” It is evidence in the sense that p(C|Argument) > p(C).
(2) Alice’s own apparent belief that C. Call this “Alice.” It is evidence in the sense that p(C|Alice) > p(C).
Now suppose you discover that Alice has been paid handsomely to make this statement, and that she would gladly have made the opposite claim had her boss wanted her to. If the claim in the post is correct, then both items of evidence are zeroed out, such that:
(3) p(C) = p(C|Argument) = p(C|Alice)
Whereas the correct thing to do is to zero out “Alice” but not “Argument,” thus:
(4) p(C|Alice) = p(C)
(5) p(C|Argument) > p(C)
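To make the distinction concrete, here is a toy Bayesian calculation (all likelihoods invented for illustration): learning that Alice is paid makes her testimony uninformative, but an argument we can check for ourselves still moves the posterior.

```python
# Toy numbers for the Alice scenario (all likelihoods invented for
# illustration). Testimony and argument are treated as two separate
# pieces of evidence about the claim C.

def posterior(prior, p_e_given_c, p_e_given_not_c):
    """Bayes' rule for a single piece of evidence E: returns P(C | E)."""
    num = p_e_given_c * prior
    return num / (num + p_e_given_not_c * (1 - prior))

prior = 0.5

# Before we know Alice is paid: her asserting C is likelier if C is true.
p_after_testimony = posterior(prior, 0.9, 0.3)        # 0.75 > prior

# After we learn she is paid to assert C either way, her assertion is
# equally likely under C and ~C, so it carries no information:
p_after_paid_testimony = posterior(prior, 0.9, 0.9)   # 0.5 == prior

# An argument we can check for ourselves still discriminates:
p_after_argument = posterior(prior, 0.8, 0.1)         # ~0.89 > prior

print(p_after_testimony, p_after_paid_testimony, p_after_argument)
```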
I think this is an interesting question. If the arguer is cherry-picking evidence, we should largely discount the argument; we are often even justified in updating in the opposite direction of a motivated argument. In the pure mathematical case the motivation no longer matters, so long as we are prepared to check the proof thoroughly. It seems to break down very quickly for any other situation, though.
In principle, the Bayesian answer is that we need to account for the filtering process when updating on filtered evidence. This collides with logical uncertainty when “evidence” includes logical/mathematical arguments. But there is a largely separate question of what we should do in practice when we encounter motivated arguments. It would be nice to have more tools for dealing with this!
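As a sketch of what “accounting for the filtering process” can look like, here is a toy model (my own illustrative numbers, not from the thread): an advocate runs 20 small studies and shows us only the most favorable one. Taken at face value the shown study looks like solid evidence; the likelihood of the filtered report says otherwise.

```python
# Toy model of filtered evidence: each "study" is 10 coin flips; the
# hypothesis is that the coin is biased (p=0.8 heads) rather than fair.
# The advocate runs 20 studies and shows only the best one, which
# happens to have 7+ heads. All numbers are invented for illustration.

from math import comb

def tail(n, k, p):
    """P(at least k heads in n flips with per-flip heads probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

prior = 0.5
n, k, m = 10, 7, 20

def bayes(l_true, l_false):
    return l_true * prior / (l_true * prior + l_false * (1 - prior))

# Face value: a single study with 7+ heads is decent evidence of bias.
naive = bayes(tail(n, k, 0.8), tail(n, k, 0.5))   # ~0.84

# Accounting for the filter: the event we actually observed is "at
# least one of the m studies came out at 7+ heads", which is nearly
# certain under either hypothesis -- so the update mostly evaporates.
def best_of(p):
    return 1 - (1 - tail(n, k, p)) ** m

filtered = bayes(best_of(0.8), best_of(0.5))      # ~0.51

print(naive, filtered)
```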
Yes, this is an interesting issue. One unusual perspective (at least, I have not seen anyone advocate it seriously elsewhere) is that mentioned by Tyler Cowen here. The gist is that, in Bayesian terms, the fact that someone thought an issue was important enough to lie about is evidence that their claim is correct.
Or their position on the issue could be motivated by some other issue you don’t even know is on their agenda.
Or...pretty much anything.
Hmmm. It’s better evidence that they want you to believe the claim is correct.
For example, I might cherry-pick evidence to suggest that anyone who gives me $1 is significantly less likely to be killed by a crocodile. I don’t believe that myself, but it is to my advantage that you believe it, because then I am likely to get $1.
The Bayesian point only stands if P(ClimateGate | AGW) > P(ClimateGate | ~AGW); that is the only way you can revise your prior upward in light of ClimateGate.
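In odds form this condition is just the likelihood ratio (illustrative numbers only):

```python
def update(prior, l_if_true, l_if_false):
    # Posterior odds = prior odds * likelihood ratio (Bayes' rule).
    odds = (prior / (1 - prior)) * (l_if_true / l_if_false)
    return odds / (1 + odds)

# Observation more likely under the hypothesis: prior revised upward.
print(update(0.5, 0.2, 0.1))  # 2/3
# Observation less likely under the hypothesis: revised downward.
print(update(0.5, 0.1, 0.2))  # 1/3
```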
Now suppose you discover that Alice has been paid handsomely to make this statement, and that she would gladly have made the opposite claim had her boss wanted her to.
Are we to assume that Alice would have presented an equally convincing-sounding argument for the opposite side had that been her boss’ demand, or would she just have asserted the statement “There is a largest prime number” without an accompanying argument?
Hmm… I am not sure. Because the value of her testimony (as distinguished from her argument) is null whichever side she supports, I am not sure the answer matters. But I could be wrong. Does it matter?
Well, I agree that the value of Alice’s testimony is null. However, depending on the answer to my original question, the value of her argument may also become null. More specifically, if we assume that Alice would have made an argument of similar quality for the opposing side had it been requested of her by her boss, then her argument, like her testimony, is not dependent upon the truth condition of the statement “There is no largest prime number”, but rather upon her boss’ request. Assuming that Alice is a skilled enough arguer that you cannot easily distinguish any flaws in her argument, you would be wise to disregard her argument the moment you figure out that it was motivated by something other than truth.
Note that for a statement like “There is no largest prime number”, Alice probably would not be able to construct a convincing argument both for and against, simply because it’s a fairly easy claim to prove as far as claims go. However, for a more ambiguous claim like “The education system in America is less effective than the education system in China”, it’s very possible for Alice’s argument to sound convincing and yet be motivated by something other than truth; e.g., perhaps Alice harbors heavy anti-American sentiments. In this case, Alice’s argument can and should be ignored because it is entangled not with reality but with Alice’s own disposition.
This advice does not apply to those who happen to be logically omniscient.