The epistemic concerns here (i.e., warping your perception of the mission or of home, and resistance to giving up either) are definitely the strongest argument I can see for making sure there is a non-mission-centered village.
I’m not sure I’m persuaded, though, because of the aforementioned “something needs to orient and drive the community.” You could certainly pick something other than “The Mission.” But whatever you pick, you’re going to end up with something that you become overly attached to.
My actual best guess is that the village should be oriented around truthseeking and the mission oriented around [truthseeking and] impact.
I think if you are in the village and have bad epistemics, you should get at least subtle pressure to improve your epistemics (possibly less subtle pressure over time, especially if you are taking up memetic space). You should not receive pressure for being on board with the mission, but you should receive at least a little pressure to have thought a bit about the mission and have some kind of opinion about it that actually engages with it.
Another component here is to have more (healthy) competition among orgs. I’m still sorting out what this means when it comes to organizations that sort-of-want-to-be-natural-monopolies. But I think if there’s only one org doing [rationality training / village infrastructure / communication infrastructure], then you’re sort of forced to conflate “is this thing good?” with “is this org doing a good job?” along with “am I good for supporting them?”, which leads to weird bucket errors.
My actual best guess is that the village should be oriented around truthseeking and the mission oriented around [truthseeking and] impact.
John Tooby has suggested that whatever becomes the orienting thing of a community automatically becomes the subject of mind-killing impulses:
Coalition-mindedness makes everyone, including scientists, far stupider in coalitional collectivities than as individuals. Paradoxically, a political party united by supernatural beliefs can revise its beliefs about economics or climate without revisers being bad coalition members. But people whose coalitional membership is constituted by their shared adherence to “rational,” scientific propositions have a problem when—as is generally the case—new information arises which requires belief revision. To question or disagree with coalitional precepts, even for rational reasons, makes one a bad and immoral coalition member—at risk of losing job offers, one’s friends, and one’s cherished group identity. This freezes belief revision.
Forming coalitions around scientific or factual questions is disastrous, because it pits our urge for scientific truth-seeking against the nearly insuperable human appetite to be a good coalition member. Once scientific propositions are moralized, the scientific process is wounded, often fatally. No one is behaving either ethically or scientifically who does not make the best case possible for rival theories with which one disagrees.
I wouldn’t go so far as to say that this makes truthseeking a bad idea to orient around, since there does seem to be a way of orienting around it that avoids this failure mode, but one should at least be very cautious about how exactly to do it.
If I think of the communities which I’ve seen that seem to have successfully oriented around truthseeking to some extent, the difference seems to be something like a process vs. content distinction. People aren’t going around explicitly swearing allegiance to rationality, but they are constantly signaling a truthseeking orientation through their behavior, such as by actively looking for other people’s cruxes in conversation and indicating their own.
people whose coalitional membership is constituted by their shared adherence to “rational,” scientific propositions have a problem when—as is generally the case—new information arises which requires belief revision.
My first reaction was that perhaps the community should be centered around updating on evidence rather than any specific science.
But of course, that can fail, too. For example, people can signal their virtue by updating on tinier and tinier pieces of evidence. Like, when the probability increases from 0.000001 to 0.0000011, people start yelling about how this changes everything, and if you say “huh, for me that is almost no change at all”, you become the unworthy one who refuses to update in the face of evidence.
(The people updating on the tiny evidence most likely won’t even be technically correct, because purposefully looking for microscopic pieces of evidence will naturally introduce selection bias and double counting.)
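To make the double-counting worry concrete, here is a minimal sketch with made-up numbers (the prior and likelihood ratio are purely illustrative, chosen to match the 0.000001 example above): if many correlated restatements of the same tiny observation are treated as independent pieces of evidence, the posterior ends up roughly a hundred times larger than the single update actually warrants.

```python
# Illustrative sketch: how double-counting tiny, correlated evidence inflates a posterior.
# All numbers here are hypothetical.

def update_odds(prior_prob, likelihood_ratio, n_updates):
    """Apply the same likelihood ratio n_updates times, in odds form."""
    odds = prior_prob / (1 - prior_prob)
    odds *= likelihood_ratio ** n_updates
    return odds / (1 + odds)

prior = 1e-6   # the 0.000001 from the example above
lr = 1.1       # each "tiny" observation is 10% more likely under the hypothesis

# Correct: the observation is counted once.
print(update_odds(prior, lr, 1))    # ~0.0000011, i.e. almost no change

# Double counting: 50 correlated restatements treated as independent evidence.
print(update_odds(prior, lr, 50))   # ~0.00012, over 100x the warranted posterior
```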
People aren’t going around explicitly swearing allegiance to rationality, but they are constantly signaling a truthseeking orientation through their behavior, such as by actively looking for other people’s cruxes in conversation and indicating their own.
Yeah, this is roughly what I meant.