Even amongst altruists (at least human ones), excessive skepticism can be a virtue, due to the phenomenon of belief bias, in which “someone’s evaluation of the logical strength of an argument is biased by their belief in the truth or falsity of the conclusion”. In other words, someone who irrationally doubts a conclusion will be more motivated, and perhaps better able, to find genuine flaws in the arguments offered for it.
I wonder if there are ways either to teach scientists to compartmentalize, so that their irrational skepticism (or skepticism-like mind state) affects only their motivation to find flaws and no other decisions, or to set up institutions that allow scientists to be irrationally skeptical while denying that skepticism the power to affect anything other than their motivation to find flaws.
More generally, in all these cases where human psychology seems to make irrationality do better than rationality, it seems like we should be able to get further improvements by sandboxing the irrationality.
This seems like a good question that’s worth thinking about. I wonder if adversarial legal systems (where the job of deciding who is guilty of a crime is divided into the roles of prosecutor, defense attorney, and judge/jury) can be considered an example of this, and if so, why don’t scientific institutions do something similar?
Nominating adversarial legal systems as role models of rational groups, knowing how well they function in practice, seems a bit misplaced.
Adversarial legal systems were not necessarily designed to be role models of rational groups. They are better understood as giving opposing, biased adversaries an incrementally fairer way of fighting it out than existed previously.
I’m guessing scientific institutions don’t do this because the people involved feel they are less biased (and probably actually are) than participants in a legal system.
But are they better than inquisitorial legal systems?
Arguably, peer review serves a vaguely similar function: a peer reviewer is expected to turn their skepticism up a notch.