The intuitive breakthrough for me was realizing that given a proposition P and an argument A that supports or opposes P, then showing that A is invalid has no effect on the truth or falsehood of P, and showing that P is true has no effect on the validity of A. This is the core of the “knowing biases can hurt you” problem, and while it’s obvious if put in formal terms, it’s counterintuitive in practice. The best way to get that to sink in, I think, is to practice demolishing bad arguments that support a conclusion you agree with.
The intuitive breakthrough for me was realizing that given a proposition P and an argument A that supports or opposes P, then showing that A is invalid has no effect on the truth or falsehood of P
That sort of makes sense if what you mean is “whatever we humans think about A has no effect on the truth or falsehood of P in a Platonic sense”, but surely showing that A is invalid ought to change how likely you think it is that P is true?
and showing that P is true has no effect on the validity of A.
Similarly, if P is actually true, a random argument that concludes with “P is true” is more likely to be valid than a random argument that concludes with “P is false”. So showing P is true ought to make you think that A is more or less likely to be valid depending on its conclusion.
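The update being described here is easy to make concrete with a toy Bayes calculation. All the numbers below are made-up illustrations, not anything from this thread; the point is only that learning “this supporting argument is invalid” shifts the posterior, just not by much if bad arguments for true claims are common:

```python
def posterior(prior, p_valid_if_true, p_valid_if_false):
    """P(P is true | a supporting argument A turned out to be invalid),
    by Bayes' rule on the event 'A is invalid'."""
    p_invalid_if_true = 1 - p_valid_if_true
    p_invalid_if_false = 1 - p_valid_if_false
    num = p_invalid_if_true * prior
    denom = num + p_invalid_if_false * (1 - prior)
    return num / denom

# Suppose a random supporting argument is valid 30% of the time when P
# is true, but only 5% of the time when P is false.
print(round(posterior(0.5, 0.30, 0.05), 3))  # 0.424
```

Starting from a 50% prior, the credence drops only to about 42%: a real update, but a modest one, because invalid supporting arguments are nearly as common for true propositions as for false ones under these assumptions.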
(Given that this comment was voted up to 3 and nobody gave a counterargument, I wonder if I’m missing something obvious.)
I wrote that two years ago, and you’re right that it’s imprecise in a way that makes it not literally true. In particular, if a skilled arguer gives you what they think is the best argument for a proposition, and the argument is invalid, then the proposition is likely false. What I was getting at, I think, is that my intuition used to vastly overestimate the correlation between the validity of arguments encountered and the truth of propositions they argue for, because people very often make bad arguments for true statements. This made me reject things I shouldn’t have, and easily get sidetracked into dealing with arguments too many layers removed from the interesting conclusions.
3 is still a small number. If it were 10+ then you should worry. I’m confused by this too.
The nearest correct idea I can think of to what Jim actually said is that if you have a proposition P with an associated credence based on the available evidence, then finding an additional but invalid argument A shouldn’t affect your credence in P. The related error is assuming that if you argue with someone and are able to demolish all their arguments, this means that you are correct, giving too little weight to the possibility that they are a bad arguer with a true opinion. Jim, is that close to what you meant?
EDIT: Whoops, didn’t see Jim’s response. But it looks like I guessed right. I’ve also made the related error in the past, and this quote from Black Belt Bayesian was helpful in improving my truth-finding ability:
To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse.
Ok, that makes a lot more sense. Thanks for the clarification.