In my opinion, the Hamming problem of group rationality, and possibly the Hamming problem of rationality generally, is how to preserve epistemic rationality under the inherent political pressures that existing in a group produces.
It is the Hamming problem because if it isn’t solved, everything else, including all the progress made on individual rationality, is doomed to become utterly worthless. We are not designed to be rational, and this is most harmful in group contexts, where the elephants in our brains take the most control from the riders and we have the least idea of what goals we are actually working towards.
I do not currently have any good models on how to attack it. The one person I thought might be making some progress on it was Brent, but he’s now been justly exiled, and I have the sense that his pre-exile intellectual output is now subject to a high degree of scrutiny. This is understandable, but since I think his explicit models were superior to any that anyone else has publicly shared, it’s a significant setback.
Since that exile happened, I’ve attempted to find prior art elsewhere to build on, but the best prospect so far (C. Fred Alford, Group Psychology and Political Theory) turned out to be Freudian garbage.
What do you consider to be his core insights? Would you consider writing a post on this?
The central descriptive insight I took from him is that most things we do are status-motivated, even when we think we have a clear picture of what our motivations are and status is not included in that picture. Our picture of what the truth looks like is fundamentally warped by status in ways that are very hard to fully adjust for.
Relatedly, I think the moderation policies of new LessWrong double down on this status-warping, and so I am reluctant to put anything of significant value on this site.
Which policies in particular?
Literally everything described here: https://www.lesswrong.com/posts/adk5xv5Q4hjvpEhhh/meta-new-moderation-tools-and-moderation-guidelines
Moderation styles harsher than “Easy-going” are toxic to group rationality, and failing to ban them results in an echo chamber. Banning users from commenting on your post(s) is toxic to group rationality and results in an echo chamber. Deleting comments without a trace, likewise. It is absolutely essential that anyone silencing people they don’t want to hear from must expend significant effort to do so, and that these actions be extremely visible to everyone else; otherwise there is no way to note and shame people who abuse the power. And virtually everyone who has this power and also has social power will abuse it, whether they realize that’s what they’re doing or not.
My comment must necessarily be something of an aside, since I don’t know what the Hamming problem is. However, your statement “We are not designed to be rational” jumped out at me.
Is that to say something along the lines of rationality not being one of the characteristics that provided an evolutionary advantage for us? Or would it mean rationality was a mutation of some “design” (and possibly one that was a good survival trait)?
Or is the correct understanding there something entirely different?
Hamming Question: “What is the most important problem in your field?”
Hamming Problem: The answer to that question.
Rationality did not boost inclusive fitness in the environment of evolutionary adaptedness and still doesn’t.