My actual best guess is that the village should be oriented around truthseeking and the mission oriented around [truthseeking and] impact.
John Tooby has suggested that whatever becomes the orienting thing of a community automatically becomes the subject of mind-killing impulses:
Coalition-mindedness makes everyone, including scientists, far stupider in coalitional collectivities than as individuals. Paradoxically, a political party united by supernatural beliefs can revise its beliefs about economics or climate without revisers being bad coalition members. But people whose coalitional membership is constituted by their shared adherence to “rational,” scientific propositions have a problem when—as is generally the case—new information arises which requires belief revision. To question or disagree with coalitional precepts, even for rational reasons, makes one a bad and immoral coalition member—at risk of losing job offers, one’s friends, and one’s cherished group identity. This freezes belief revision.
Forming coalitions around scientific or factual questions is disastrous, because it pits our urge for scientific truth-seeking against the nearly insuperable human appetite to be a good coalition member. Once scientific propositions are moralized, the scientific process is wounded, often fatally. No one is behaving either ethically or scientifically who does not make the best case possible for rival theories with which one disagrees.
I wouldn’t go so far as to say that this makes truthseeking a bad idea to orient around, since there does seem to be a way of orienting around it that avoids this failure mode, but one should at least be very cautious about exactly how it's done.
When I think of the communities I've seen that seem to have oriented around truthseeking with some success, the difference seems to be something like a process vs. content distinction. People aren’t going around explicitly swearing allegiance to rationality, but they are constantly signaling a truthseeking orientation through their behavior, such as by actively looking for other people’s cruxes in conversation and indicating their own.
people whose coalitional membership is constituted by their shared adherence to “rational,” scientific propositions have a problem when—as is generally the case—new information arises which requires belief revision.
My first reaction was that perhaps the community should be centered around updating on evidence rather than any specific science.
But of course, that can fail, too. For example, people can signal their virtue by updating on tinier and tinier pieces of evidence. Like, when the probability increases from 0.000001 to 0.0000011, people start yelling about how this changes everything, and if you say “huh, for me that is almost no change at all”, you become the unworthy one who refuses to update in the face of evidence.
(The people updating on the tiny evidence most likely won’t even be technically correct, because purposefully looking for microscopic pieces of evidence will naturally introduce selection bias and double counting.)
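To make that concrete, here is a minimal numerical sketch (mine, not from the comments above) of both halves of the example: in odds form, a likelihood ratio of 1.1 moves a prior of 0.000001 only to about 0.0000011, while treating ten restatements of that same near-worthless observation as if they were independent is exactly the double-counting error, inflating the posterior roughly 2.6x for no new information. The `update` helper and the likelihood ratio of 1.1 are illustrative assumptions, not anything from the original thread.

```python
# Illustrative sketch: how little a weak observation should move a belief,
# and how double-counting the same observation inflates it anyway.

def update(prior: float, likelihood_ratio: float) -> float:
    """Apply one Bayesian update in odds form and return the posterior probability."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

prior = 0.000001   # the probability from the comment above
lr = 1.1           # assumed likelihood ratio of one barely informative observation

# One honest update: the probability barely moves (0.000001 -> ~0.0000011).
once = update(prior, lr)

# Counting ten restatements of the *same* observation as if they were
# independent multiplies the likelihood ratios together (1.1 ** 10 ~= 2.59),
# so the posterior ends up ~2.6x larger with no genuinely new information.
double_counted = prior
for _ in range(10):
    double_counted = update(double_counted, lr)

print(f"one honest update:          {once:.9f}")
print(f"same evidence counted 10x:  {double_counted:.9f}")
```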
People aren’t going around explicitly swearing allegiance to rationality, but they are constantly signaling a truthseeking orientation through their behavior, such as by actively looking for other people’s cruxes in conversation and indicating their own.
Yeah, this is roughly what I meant.