Ideally, you should aim to defeat the strongest version of your opponent’s argument that you can think of—it’s a much better test of whether your position is actually correct, and it helps prevent rationalization. In other words, rather than attacking a weak version of your opponent’s argument, attack the strongest version you can construct. On LessWrong we usually call this the Least Convenient Possible World, or LCPW for short. (I’ve also seen it called “steel man,” because instead of constructing a weaker “straw man” version of your opponent’s argument, you fix it and make a stronger one.) You may be interested in the wiki entry on LCPW and the post that coined the term.
I’m not sure about the merits of arguing for positions you don’t actually believe. It can certainly be helpful when your discussion partners are also tossing around ideas and collaborating by playing Devil’s Advocate, since it can expose the weaknesses in your position, but repeatedly practicing rationalization might not be healthy in the long run.