Inquisitive vs. adversarial rationality

Epistemic status: prima facie unlikely in the usual framework, which I’ll try to reframe. Corroborated by loads of empirical observations. YMMV, but if you’ve held some contrarian view in the past that you came to realize was wrong, this might resonate.

In practical (and also not-so-practical) life, we often have to make a call as to which theory or fact of the matter is probably true. In one particularly popular definition of rationality, being rational is making the right call, as often as possible. If you can make the map correspond to the territory, you should.

I believe that in many cases, the best way to do so is not to adopt what I will call inquisitive thinking, in which you – perhaps after researching a topic somewhat deeply – go on to come up with your own arguments to support one side or the other. Rather, I think you should most often adopt adversarial thinking, in which you simply judge which side of the debate is probably right on the basis of the existing arguments, without trying to come up with new arguments yourself.

You might feel the adjectives “inquisitive” and “adversarial” are being used weirdly here, but I’m taking them from the legal literature. An inquisitive (aka inquisitorial) legal system is one in which the judge acts as both judge and prosecutor, personally digging into the facts before ruling. An adversarial system, on the other hand, is one in which judges are mostly passive observers: the parties argue their case before them without much (or any) interference, and at the end the judge rules on the basis of the evidence presented, not being allowed to go dig up more evidence themselves.

There is a reason why most legal systems in use today have evolved from (mostly or entirely) inquisitive to (mostly or entirely) adversarial, and that’s because we have a gigantic body of evidence suggesting that inquisitive systems are particularly prone to rendering biased judgements. The more you allow judges to go dig, the more likely they are to lose their purported impartiality and start doing strange things.

I suggest that this phenomenon is not particular to judges, but is rather a common feature of human (and very possibly even non-human) rationality. The main point is that digging up more and more evidence yourself ultimately selects not for truth, but for your particular biases. If you have a limited amount of pre-selected evidence to analyze – evidence selected by other people – it’s unlikely to be tailored to your particular taste, and you’re thus more likely to weigh it impartially. On the other hand, once you allow yourself to go dig up evidence to your own taste, you’re much more likely to select evidence that is flawed in ways that match your own biases.

As an intuition pump, this is really much the same as an AI trained to identify pictures of cats which, when asked for the prototypical cat, generates something that looks like noise. Such an AI is not useless, mind you – it’s actually often pretty accurate at telling pre-selected images of cats and non-cats apart. But you may want to use it in a way that does not involve asking it to go dig up the best cat picture out there in the space of possible pictures. Perhaps our brains are not so different, after all.
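The discriminate-well-but-generate-badly failure mode above can be sketched in a few lines of code. The toy model below is purely illustrative (a hypothetical stand-in for a real image classifier, not anything from the text): it trains a logistic-regression “cat detector” on two well-behaved clusters, then gradient-ascends an input to maximize the cat score. The detector discriminates the pre-selected examples almost perfectly, yet its “prototypical cat” – the input that optimization digs up – lands far outside the data distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for images: "cats" cluster near (1, 1), "non-cats" near (-1, -1).
X = np.vstack([rng.normal(1, 0.5, (100, 2)), rng.normal(-1, 0.5, (100, 2))])
y = np.concatenate([np.ones(100), np.zeros(100)])

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Train a logistic-regression "cat detector" by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= X.T @ (p - y) / len(y)
    b -= np.mean(p - y)

# It discriminates the pre-selected examples almost perfectly.
accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)

# Now ask it for its "prototypical cat": gradient-ascend an input to
# maximize the logit w.x + b. The gradient with respect to x is just w,
# so the optimizer marches off indefinitely -- ever-higher "cat-ness",
# ever further from any actual data point.
x = np.zeros(2)
for _ in range(1000):
    x += 0.1 * w

data_radius = np.max(np.linalg.norm(X, axis=1))
print(f"accuracy on given examples: {accuracy:.2f}")
print(f"'prototypical cat' is {np.linalg.norm(x) / data_radius:.0f}x "
      f"further out than any real data point")
```

The point of the sketch is the asymmetry: judging evidence that was handed to you (classifying the 200 given points) works fine, while letting the same machinery go search the space of possible evidence for what it likes best produces something optimized for the machinery’s quirks rather than for truth.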