On the “all arguments are soldiers” metaphorical battlefield, I often find myself refighting a particular battle. Someone I like and generally trust, and so have mentally marked as an Ally, directs me to arguments advanced by one of their Allies. Before reading the arguments, or even fully recognizing the topic, I find myself seeking any reason, any charitable interpretation of the text, to accept them. In the contrary case, in a discussion with a person whose judgment I generally do not trust, and whom I have therefore marked as an (ideological) Enemy, it often happens that they direct me to arguments advanced by their own Allies. Again, before reading the arguments or even fully recognizing the topic, I find myself seeking any reason, any flaw in the presentation of the argument or its application to my discussion, to reject them. In both cases the behavior stems from matters of trust and an unconscious assignment of people to MySide or the OtherSide.
And weirdly enough, I find that that unconscious assignment can be hacked very easily. Consciously deciding that the author is really an Ally (or an Enemy) seems to override the unconscious assignment. So the moment I notice being stuck in Ally-mode or Enemy-mode, it’s possible to switch to the other. I don’t seem to have a neutral mode. YMMV! I’d be interested in hearing whether it works the same way for other people or not.
For best understanding of a topic, I suspect it might help to read an argument twice, once in Ally-mode to find its strengths and once in Enemy-mode to find its weaknesses.
Just wondering if it would make sense to consider everyone a Stupid Ally. That is, a good person who is just really really bad at understanding arguments. So the arguments they forward to you are worth examining, but must be examined carefully.
This is generally how I read self-help books, especially the ones I have to hold my nose for while reading. (Because their logic stinks.) ;-)
Basically, I try to imagine Ignaz Semmelweis telling me his bullshit theory about handwashing, when I know there is no way there could be a poison in corpses so powerful that a tiny, invisible amount would be deadly. So I know he is totally full of bullshit, while at the same time I must take at face value the observation that something different is happening in his clinic. So I try to screen out the theory, and instead look for:
1. What are they observing (as opposed to opining about the observations),
2. What concrete actions are they recommending, and
3. What effect of these actions are they predicting.
This information can be quite useful once the insane theories are scrubbed off the top.
While it’s easy to laugh now at the doctors who ignored Semmelweis, there was no scientific theory that could account for his observations until well after his death. A similar phenomenon exists in self-help, where scientists are only now producing research that validates self-help ideas introduced in the ’70s and ’80s… usually under different names, and with different theories. Practice has a way of preceding theory, because useful practices can be found by accident, but good theories take hard work.
So, “Stupid Ally” makes sense in the same way: even an idiot can be right by accident; they’re just unlikely to be right about why they’re right!