Eliezer Yudkowsky often emphasizes the fact that an argument can be valid or not independently of whether the conclusion holds. If I argue A⟹B⟹C and A is true but C is false, it could still be that A⟹B is a valid step.
Most people outside of LW don’t get this. If I criticize an argument about something political (but the conclusion is popular), usually the response is something about why the conclusion is true (or about how I’m a bad person for doubting the conclusion). But the really frustrating part is that they’re, in some sense, correct not to get it because the inference
x criticizes argument for y ⟹ x doesn't like y
is actually pretty reliable on… well, on reddit, anyway.
Julia Galef made a very similar point once:

And the problem… The conclusion of all of this is: even if everyone’s behaving perfectly rationally, and just making inferences justified by the correlations, you’re going to get this problem. And so in a way that’s depressing. But it was also kind of calming to me, because it made me… like, the fact that people are making these inferences about me feels sort of, “Well, it is Bayesian of them.”
Somehow, I only got annoyed about this after having heard her say it. I probably didn’t realize it was happening regularly before.
She also suggests a solution:
So maybe I can sort of grudgingly force myself to try to give them enough other evidence, in my manner and in the things that I say, so that they don’t make that inference about me.
I think that the way to not get frustrated about this is to know your audience and to know whether spending your time arguing something will have a positive outcome. You don’t need to be right or honest all the time; you just need to say things that are going to have the best outcome. If lying or omitting your opinions is the way to make people understand you or not fight you, so be it. Failure to do this isn’t superior rationality, it’s just poor social skills.
While I am not a rule utilitarian and I think that, ultimately, honesty is not a terminal value, I also consider the norm against lying to be extremely important. I would need correspondingly strong reasons to break it, and those won’t exist as far as political discussions go (because they don’t matter enough and you can usually avoid them if you want).
The “keeping your opinions to yourself” part of your post is certainly a way to do it, though I currently don’t think that my involvement in political discussions is net harmful. But I strongly object to the idea that I should ever be dishonest, both online and offline.
It comes down to selection and attention as evidence of beliefs/values. The very fact that someone expends energy on an argument (pro or con) is pretty solid evidence that they care about the topic. They may also care (or even more strongly care) about validity of arguments, but even the most Spock-like rationalists are more likely to point out flaws in arguments when they are interested in the domain.
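To put rough numbers on “pretty solid evidence” (a toy Bayes calculation; every probability here is a made-up assumption, purely for illustration):

\[
% all probabilities below are illustrative assumptions, not data
P(\text{cares} \mid \text{argues})
= \frac{P(\text{argues} \mid \text{cares})\,P(\text{cares})}
       {P(\text{argues} \mid \text{cares})\,P(\text{cares}) + P(\text{argues} \mid \text{doesn't care})\,P(\text{doesn't care})}
= \frac{0.3 \times 0.2}{0.3 \times 0.2 + 0.02 \times 0.8}
\approx 0.79.
\]

Even with a low prior, merely showing up to argue moves a reasonable observer most of the way to “x cares about y”, and the same mechanism drives the “x doesn’t like y” inference from the post.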
But I’m confused by your initial example—if the argument is A → B → C, and A is true and C is false, then EITHER A → B is false, or B → C is false. Either way, A → B → C is false.
A → B → C is false, but A → B (which is a step in the argument) could be correct—that’s all I meant. I guess that was an unnecessarily complicated example. You could just say A and B are false but A → B is true.
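Spelled out with truth values (reading ⟹ as material implication):

\[
\begin{aligned}
&A=\top,\; B=\top,\; C=\bot: && A\Rightarrow B \text{ is true},\quad B\Rightarrow C \text{ is false},\quad A\Rightarrow C \text{ is false};\\
&A=\bot,\; B=\bot: && A\Rightarrow B \text{ is (vacuously) true, even though } A \text{ and } B \text{ are both false}.
\end{aligned}
\]

The first line is the original example, where the chain fails only at its second step; the second line is the simpler version.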