He also said if I wanted to win a lot of arguments I should learn about more fallacies.
This is actually one danger of learning about fallacies: you get better at defeating arguments regardless of whether they're true, so if you already have a tendency to privilege arguments for the positions you hold, it becomes harder for you to change your mind. See the post Knowing About Biases Can Hurt People.
Thanks for the post, I’ll definitely look at it after I’m done replying to this one.
When you say “privilege arguments for the positions you already hold”, do you mean “only entertain arguments that give you a better chance of winning”?
This sounds like the wrong thing to say, but… I’ll say it anyway; I want to see your reaction: what if you don’t develop a tendency to fight the easier battle? What you say makes sense: losing less means learning less, at least until you start winning and losing at a 50/50 rate. What if you pick arguments just for the sake of arguing, or promise yourself that you will only argue for the truth? Or, as is the case for me, what if you have a tendency to fight for both sides (heck, this post)? I actually agree with you on all points, but for some reason I want to know how you would answer the opposing side, if I were on it.
Ideally, you should aim to defeat the strongest version of your opponent’s argument that you can think of, rather than a weak one: it’s a much better test of whether your position is actually correct, and it helps prevent rationalization. On LessWrong we usually call this the Least Convenient Possible World, or LCPW for short. (I’ve also seen it called a “steel man,” because instead of constructing a weaker “straw man” version of your opponent’s argument, you fix it and make it stronger.) You may be interested in the wiki entry on LCPW and the post that coined the term.
I’m not sure about the merits of arguing for positions you don’t actually believe. It can certainly be helpful when your discussion partners are also tossing ideas around and someone is collaboratively playing Devil’s Advocate, since it can help you find the weaknesses in your position, but repeatedly practicing rationalization might not be healthy in the long run.