Reversal Tests in Argument and Debate
One thing I’ve noticed recently is that simple reversal tests can be very useful for detecting bias when evaluating policy arguments or points made in a debate.
In other words, when encountering an argument it can be useful to think “Would I accept this sort of argument if it were being made for the other side?” or perhaps “If the ideological positions here were reversed, would this sort of reasoning be acceptable?”
This is an easy check for whether biased thinking is going on. Here are some situations where one might apply it:
Someone is advocating a locally unpopular belief and being attacked for it. (Ask yourself whether the same sort of advocacy and reasoning would be mocked if it were being made towards locally popular conclusions; ask yourself whether the mockery would be accepted if it were being made against someone locally popular.)
Someone advocates an easy dismissal of one of the perspectives in an argument. (Ask yourself whether this sort of dismissal would seem reasonable if made against one of your own points.)
Someone makes arguments against a locally unpopular organization or belief. (Ask yourself whether these arguments would pass muster against something that wasn’t already derided locally.)
Often one will find that, in fact, that sort of argument or reasoning would not fly. This is a good way to check your biases: people are prone to accepting weak arguments for things they already agree with, or against things they already disagree with, and stopping to check whether the reasoning would work in the “other direction” is useful.
(Other times, of course, one will find that the reasoning in question does pass the reversal test. Even so, it can be good to check such things! “Trust but verify” and all that.)
This post is pointing at a good tool for identifying bias and motivated reasoning, but I don’t think the use of “reversal test” here aligns with how the term was coined in the original Bostrom/Ord paper (https://nickbostrom.com/ethics/statusquo.pdf). That use of the term makes the point that if you oppose some upward change in a scalar value, and you have no reason to think that value is already precisely optimized, then you should want to change that value in the opposite direction.
I used this term because I think the fundamental move being pointed at is fairly similar (although I actually think the Bostrom/Ord application of this method is incorrect, which maybe means I should have come up with a different name!).