I don’t think “reasonable” is the correct word here. You keep assuming away the possibility of conflict. It’s easy to find a peaceful answer by simulating other people using empathy, if there’s nothing anyone cares about more than not rocking the boat. But what about the least convenient possible world where one party has Something to Protect which the other party doesn’t think is “reasonable”?
Yes, if someone has values that are in fact incompatible with the culture of the organization, they shouldn’t be joining that organization. I thought that was clear in my previous statements, but it may in fact not have been. If every damn time their own values are at odds with what is best for the organization given its values, that’s an incompatible difference. They should either find a different organization, or try the archipelago model. There are such things as irreconcilable value differences.
I don’t think the OP is compatible with the shared values and culture established in Sequences-era Overcoming Bias and Less Wrong.
I agree. I think when that culture was established, the community was missing important concepts about motivated reasoning and truth seeking and chose values that were in fact not optimized for the ultimate goal of creating a community that could solve important problems.
I think it is in fact good to experiment with the norms you’re talking about from the original site, but I think many of those norms originally caused the site to decline and people to go elsewhere. Given my current mental models, I predict that a site using those norms will make less intellectual progress than a similar site using my norms, although I expect you to have the opposite intuition. As I stated in the introduction, the goal of this post was simply to make sure that those mental models were in the discourse.
Re your dialogue: The main thing that I got from it was that you think a lot of the arguments in the OP are motivated reasoning and will lead to bad incentives. I also got that this is a subject you care a lot about.
I think when that culture was established, the community was missing important concepts about motivated reasoning and truth seeking
Can you be more specific? Can you name three specific concepts about motivated reasoning and truthseeking that you know, but Sequences-era Overcoming Bias/Less Wrong didn’t?
I think many of those norms originally caused the site to decline and people to go elsewhere.
I mean, that’s one hypothesis. In contrast, my model has been that communities congregate around predictable sources of high-quality writing, and people who can produce high-quality content in high volume are very rare. Thus, once Eliezer Yudkowsky stopped being active, and Yvain a.k.a. the immortal Scott Alexander moved to Slate Star Codex (in part so that he could write about politics, which we’ve traditionally avoided), all the “intellectual energy” followed Scott to SSC.
Can you think of any testable predictions (or retrodictions) that would distinguish my model from your model?
I also got that this is a subject you care a lot about.
Yes. Thanks for listening.
Can you be more specific? Can you name three specific concepts about motivated reasoning and truthseeking that you know, but Sequences-era Overcoming Bias/Less Wrong didn’t?
Here are a few:
The importance of creating a culture that develops Kegan 5 leaders who can take over for the current leaders and help meaningfully change the values as the context changes, in a way that doesn’t simply cause the organization’s values to drift along with the current broader culture.
How ignoring or not attending to people’s needs creates incentives for motivated reasoning, and how to create spaces that get rid of those incentives WITHOUT being hijacked by whoever screams the loudest.
The importance of cultural tradition and ritual in embedding concepts, by augmenting the teaching and telling people which concepts are important.
Can you think of any testable predictions (or retrodictions) that would distinguish my model from your model?
No, because I think that our models are compatible. My model is about how to attract, retain, and develop people with high potential or skill who are in alignment with your community’s values, and your model says that failing to attract, retain, and develop people who matched our community’s values and had high writing skill is what caused the site to fail.
If you can give a specific model of why LW1 failed to attract, retain, and develop high quality writers, then I think there’s a better space for comparison. Perhaps you can also point out some testable predictions that each of our models would make.
In contrast, my model has been that communities congregate around predictable sources of high-quality writing, and people who can produce high-quality content in high volume are very rare. Thus, once Eliezer Yudkowsky stopped being active, and Yvain a.k.a. the immortal Scott Alexander moved to Slate Star Codex (in part so that he could write about politics, which we’ve traditionally avoided), all the “intellectual energy” followed Scott to SSC.
First, I want to state that I agree with this model. However, I also want to note that the SSC comments section tends to have fairly low-quality discussion (in comparison to the OB/LW 1.0 heyday), and I’m not sure why this is; candidate hypotheses include that Scott’s explicit politics attracted people with lower epistemic standards, or that the lack of an explicit karma system allowed low-quality discussion to persist (but I don’t think OB had an explicit karma system either?).
Overall, I’m unsure as to what kind of norms/technology maintains high-quality discussion (as opposed to just the presence of discussion in general), and it’s plausible to me that the two may actually be somewhat mutually exclusive (in the sense that norms/technology designed to promote the volume of high-quality discussion may in fact reduce the volume of discussion in general). It’s not clear to me how this tradeoff should be balanced.
in part so that he could write about politics, which we’ve traditionally avoided
I want to state that I agree with this model.
(I sometimes think that I might be well-positioned to fill the market niche that Scott occupied in 2014, but no longer can due to his being extortable (“As I became more careful in my own writings [...]”) in a way that I have been trained not to be. But I would need to learn to write faster.)
One thing is that I think early OBNYC and LW just actually had a lot of chaff comments too. I think people disproportionately remember the great parts.