There is a common idea in the “critical thinking”/“traditional rationality” community that (roughly) you should, when exposed to an argument, either identify a problem with it or come to believe the argument’s conclusion. From a Bayesian perspective, however, this idea seems clearly flawed. When presented with an argument for a certain conclusion, my failure to spot a flaw in the argument might be explained either by the argument’s being sound or by my inability to identify flawed arguments. So the degree to which I should update in either direction depends on my corresponding prior beliefs. In particular, if I have independent evidence that the argument’s conclusion is false and that my skills for detecting flaws in arguments are imperfect, it seems perfectly legitimate to say, “Look, your argument appears sound to me, but given what I know, both about the matter at hand and about my own cognitive abilities, it is much more likely that there’s a flaw in your argument which I cannot detect than that its conclusion is true.” Yet it is extremely rare to see LW folk or other rationalists say things like this. Why is this so?
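To make this concrete, here is a minimal worked version of the update, with every number invented purely for illustration. Write C for “the conclusion is true” and D for “I inspect the argument and detect no flaw”, and suppose a sound argument for a true conclusion never shows me a flaw, while I overlook the flaw in an argument for a false conclusion with probability m:

```latex
% Illustrative sketch only; C, D and m are assumptions, not anything asserted above.
% C = "the conclusion is true"; D = "I inspect the argument and detect no flaw".
% Assume P(D | C) ~ 1, and let m = P(D | not C) be my flaw-miss rate.
\[
  P(C \mid D)
  \;=\; \frac{P(D \mid C)\,P(C)}{P(D \mid C)\,P(C) \,+\, P(D \mid \neg C)\,P(\neg C)}
  \;\approx\; \frac{P(C)}{P(C) + m\,\bigl(1 - P(C)\bigr)}.
\]
```

With a prior of P(C) = 0.05 (strong independent evidence against the conclusion) and a miss rate of m = 0.5, the posterior comes to roughly 0.095: “your argument looks sound to me” moves the conclusion from 5% to just under 10%, which is exactly the stance quoted above.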
Because the case where you are entirely wedded to a particular conclusion and want to just ignore the contrary evidence would look awfully similar...
Awfully similar, but not identical.
In the first case, you have independent evidence that the conclusion is false, so you’re basically saying “If I considered your arguments in isolation, I would be convinced of your conclusion, but here are several pieces of external evidence which contradict your conclusion. I trust this external evidence more than I trust my ability to evaluate arguments.”
In the second case, you’re saying “I have already concluded that your conclusion is false because I have concluded that mine is true. I think it’s more likely that there is a flaw in your argument that I can’t detect than that there is a flaw in the reasoning that led to my conclusion.”
The person in the first case is far more likely to answer “I don’t know” when asked “So what do you think the real answer is, then?” In our culture (both outside LW and, to a lesser but still significant degree, inside it), there is a stigma against arguing against a hypothesis without providing an alternative hypothesis. An exception is the argument of the form “If Y is true, how do you explain X?”, which is quite common. Unfortunately, this form of argument is used extensively by people who are, as you say, entirely wedded to a particular conclusion, so using it makes you seem like one of those people and therefore less credible, especially in the eyes of LWers.
Rereading your comment, I see that there are two ways to interpret it. The first is “Rationalists do not use this form of argument because it makes them look like people who are wedded to a particular conclusion.” The second is “Rationalists do not use this form of argument because it is flawed—they see that anyone who is wedded to a particular conclusion can use it to avoid updating on evidence.” I agree with the first interpretation, but not the second—that form of argument can be valid, but reduces the credibility of the person using it in the eyes of other rationalists.
In the first case, you have independent evidence that the conclusion is false

“Independent evidence” is a tricky concept. Since we are talking Bayesianism here, at the moment you reject the argument it’s not evidence any more; it’s part of your prior. Maybe there was evidence in the past that you’ve updated on, but when you refuse to accept the argument, you’re refusing it solely on the basis of your prior.

In the second case, you’re saying “I have already concluded that your conclusion is false because I have concluded that mine is true.”

Which is pretty much equivalent to saying “I have seen evidence that your conclusion is false, so I have already updated that it is false and my position is true, and that’s why I reject your argument”.
I think both apply.
In fact that case is just a special case of the former with you having bad priors.
Not quite: your priors might be good. We’re talking here about ignoring evidence, and that’s a separate issue from whether your priors are adequate.
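A small sketch, with numbers I am making up, of why the two cases look the same at the moment of updating: Bayes’ rule only sees the prior you walk in with, not whether that prior was earned from past evidence or merely assumed.

```python
# Illustrative sketch with invented numbers; the likelihoods below are assumptions.

def posterior(prior, p_no_flaw_if_true=1.0, p_no_flaw_if_false=0.5):
    """P(conclusion is true | I found no flaw in the argument for it)."""
    numerator = p_no_flaw_if_true * prior
    denominator = numerator + p_no_flaw_if_false * (1.0 - prior)
    return numerator / denominator

# Case 1: a 5% prior earned from several independent observations.
# Case 2: the same 5% prior held only because I already believe my own view.
for label, prior in [("earned prior", 0.05), ("assumed prior", 0.05)]:
    print(label, round(posterior(prior), 3))  # both print 0.095
```

The arithmetic is identical in both cases; what differs is whether the 5% itself is adequate, which is why ignoring evidence and having bad priors are separate failures.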
This idea seems like a manifestation of epistemic learned helplessness.
I say things like this a lot in contexts where I know there are experts, but I have put no effort into learning which ones are reliable. So when someone asserts something about (a) nutritional science, (b) Biblical translation nuances, or (c) assorted other things in this category, I tend to say, “I really don’t have the relevant background to evaluate your argument, and it’s not a field I’m planning to do the legwork to understand very well.”
Yet it is extremely rare to see LW folk or other rationalists say things like this. Why is this so?

In my experience, there are LW people who would in such cases simply declare that they won’t be convinced on the topic at hand and suggest changing the subject.
I particularly remember a conversation at the LW community camp about geopolitics where a person simply declared that they aren’t able to evaluate arguments on the matter and therefore won’t be convinced.
That was probably me. I don’t think I handled the situation particularly gracefully, but I really didn’t want to continue that conversation, and I couldn’t see whether the person in question was wearing a Crocker’s rules tag.
I don’t remember my actual words, but I think I wasn’t trying to go for “nothing could possibly convince me”, so much as “nothing said in this conversation could convince me”.
It’s still more graceful than the “I think you are wrong based on my heuristics but I can’t tell you where you are wrong” that Pablo Stafforini advocates.
Because that ends the discussion. I think a lot of people around here just enjoy debating arguments (certainly I do).
I actually do say things like this pretty frequently, though I haven’t had the opportunity to do so on LW yet.
A similar situation used to happen to me frequently in real life: the argument was too long, too complex, or used information that I couldn’t verify… or could, but only with a lot of time… something like: “There is this 1000-page book containing complex philosophical arguments and information from non-mainstream but cited sources, which totally proves that my religion is correct.” And there is nothing obviously incorrect within the first five pages. But I am certainly not going to read it all. And the other person tries to use my self-image as an intelligent person against me, insisting that I should promise to read the whole book and then debate it (which is supposedly the rational thing to do in such a situation: hey, here is the evidence, you just refuse to look at it), or else I am not really intelligent.
And in such situations I just waved my hands and said—well, I guess you just have to consider me unintelligent—and went away.
I didn’t think about how to formalize this properly. It was just this: I recognize the trap, and I refuse to walk into it. If it happened to me these days, I could probably try explaining my reaction in Bayesian terms, but it would still be socially awkward. I mean, in the case of religion, the true answer would reveal that I believe my opponent is either dishonest or stupid (which is why I expect him to give me false arguments), which is not a nice thing to say to people. And yeah, it seems similar to ignoring evidence for irrational reasons.
Nothing, including rationality, requires you to look at ALL evidence that you could possibly access. Among other things, your time is both finite and valuable.
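Put crudely, and with every number invented, the decision about the 1000-page book is a value-of-information calculation rather than a question of whether the evidence is admissible in principle:

```python
# Back-of-the-envelope sketch; all quantities below are made-up assumptions.

hours_to_read = 50.0        # cost of working through the whole book
value_per_hour = 1.0        # opportunity cost of an hour, in arbitrary units
p_changes_my_mind = 0.01    # my prior that the book actually overturns my view
value_if_corrected = 100.0  # gain if it really does correct an error of mine

expected_gain = p_changes_my_mind * value_if_corrected  # 1.0
reading_cost = hours_to_read * value_per_hour           # 50.0

print("read it" if expected_gain > reading_cost else "politely decline")
# -> "politely decline": declining to examine a particular piece of evidence can
#    be a reasonable allocation of attention, even though the evidence exists.
```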
Related link: Peter van Inwagen’s article “Is it wrong everywhere, always, and for everyone, to believe anything on insufficient evidence?” Van Inwagen suggests not, on the grounds that if it were, then no philosopher could ever continue believing something firmly when there are other, smarter, equally well-informed philosophers who strongly disagree. I find this argument less compelling than van Inwagen does.
Haha. You should believe exactly what the evidence suggests, and exactly to the degree that it suggests it. The argument is also an amusing example of ‘one man’s modus ponens...’.