I don’t see how anyone is supposed to compute that.
If your primary metaphor for thought is simple computations or mathematical functions, I can see how this would be very confusing, but I don’t think that’s actually the native architecture of our brains. Instead, our brain is noticing patterns, creating reusable heuristics, and simulating other people using empathy.
When you look at the question using that native architecture, it becomes relatively simple to find a reasonable answer. This is the same way that we regularly find solutions to complex negotiations between multiple parties, or plan complex situations with multiple constraints, even though many of those tasks are naively uncomputable. The shared values and culture serve to make sure those heuristics are calibrated similarly between people.
When you look at the question using that native architecture, it becomes relatively simple to find a reasonable answer.
I don’t think “reasonable” is the correct word here. You keep assuming away the possibility of conflict. It’s easy to find a peaceful answer by simulating other people using empathy, if there’s nothing anyone cares about more than not rocking the boat. But what about the least convenient possible world where one party has Something to Protect which the other party doesn’t think is “reasonable”?
The shared values and culture serve to make sure those heuristics are calibrated similarly between people.
Riiiight, about that. The OP is about robust organizations in general without mentioning any specific organization, but given the three mentions of “truthseeking”, I’d like to talk about the special case of this website, and set it in the context of a previous discussion we’ve had.
I don’t think the OP is compatible with the shared values and culture established in Sequences-era Overcoming Bias and Less Wrong. I was there (first comment December 22, 2007). If the Less Wrong and “rationalist” brand names are now largely being held by a different culture with different values, I and the forces I represent have an interest in fighting to take them back.
Let me reply to your dialogue with another. To set the scene, I’ve been drafting a forthcoming post (working title: “Schelling Categories, and Simple Membership Tests”) in my nascent Sequence on the cognitive function of categories, which is to refer back to my post “The Univariate Fallacy”. Let’s suppose that by the time I finally get around to publishing “Schelling Categories” (like the Great Teacher, I suffer from writer’s molasses), the Jill from your dialogue has broken out of her simulation, instantiated herself in our universe, and joined the LW2 moderation team.
Jill: Zack, I’ve had another complaint—separate from the one in May—about your tendency to steer conversations towards divisive topics, and I’m going to ask you to tone it down a bit when on Frontpage posts.
Zack: What? Why? Wait, sorry—that was a rhetorical question, which I’ve been told is a violation of cooperative discourse norms. I think I can guess what motivated the complaint. But I want to hear you explain it.
Jill: Well, you mentioned this “univariate fallacy” again, and in the context of some things you’ve Tweeted, there was some concern that you were actually trying to allude to gender differences, which might make some community members of marginalized genders feel uncomfortable.
Zack: (aside) I guess I’m glad I didn’t keep calling it Lewontin’s fallacy.
(to Jill) So … you’re asking me to tone down the statistics blogging—on less wrong dot com—because some people who read what I write elsewhere can correctly infer that my motivation for thinking about this particular statistical phenomenon was because I needed it to help me make sense of an area of science I’ve been horrifiedly fascinated with for the last fourteen years, and that scientific question might make some people feel uncomfortable?
Jill: Right. Truthseeking is very important. However, it’s clear that just choosing one value as sacred and not allowing for tradeoffs can lead to very dysfunctional belief systems. I believe you’ve pointed at a clear tension in our values as they’re currently stated: the tension between freedom of speech and truth, and the value of making a space that people actually want to have intellectual discussions at. I’m only asking you to give equal weight to your own needs, the needs of the people you’re interacting with, and the needs of the organization as a whole.
Zack: (aside) Wow. It’s like I’m actually living in Atlas Shrugged, just like Michael Vassar said. (to Jill) No.
Jill: What?
Zack: I said No. As a commenter on lesswrong.com, my duty and my only duty is to try to make—wait, scratch the “try”—to make contributions that advance the art of human rationality. I consider myself to have a moral responsibility to ignore the emotional needs of other commenters—and symmetrically, I think they have a moral responsibility to ignore mine.
Jill: I’d prefer that you be more charitable and work to steelman what I said.
Zack: If you think I’ve misunderstood what you’ve said, I’m happy to listen to you clarify whatever part you think I’m getting wrong. The point of the principle of charity is that people are motivated to strawman their interlocutors; reminding yourself to be “charitable” to others helps to correct for this bias. But to tell others to be charitable to you without giving them feedback about how, specifically, you think they’re misinterpreting what you said—that doesn’t make any sense; it’s like you’re just trying to mash an “Agree with me” button. I can’t say anything about what your conscious intent might be, but I don’t know how to model this behavior as being in good faith—and I feel the same way about this new complaint against me.
Jill: Contextualizing norms are valid rationality norms!
Zack: If by “contextualizing norms” you simply mean that what a speaker means needs to be partially understood from context, and is more than just what the sentence the speaker said means, then I agree—that’s just former Denver Broncos quarterback Brian Griese, er, philosopher of language H. P. Grice’s theory of conversational implicature. But when I apply contextualizing norms to itself and look at the context around which “contextualizing norms” was coined, it sure looks like the entire point of the concept is to shut down ideologically inconvenient areas of inquiry. It’s certainly understandable. As far as the unwashed masses are concerned, it’s probably for the best. But it’s not what this website is about—and it’s not what I’m about. Not anymore. I am an aspiring epistemic rationalist. I don’t negotiate with emotional blackmailers, I don’t double-crux with Suicide Rock, and I’ve got Something to Protect.
Jill: (baffled) What could possibly incentivize you to be so unpragmatic?
Zack: It’s not the incentives! (aside) It’s me!
(Curtain.)
I don’t think “reasonable” is the correct word here. You keep assuming away the possibility of conflict. It’s easy to find a peaceful answer by simulating other people using empathy, if there’s nothing anyone cares about more than not rocking the boat. But what about the least convenient possible world where one party has Something to Protect which the other party doesn’t think is “reasonable”?
Yes, if someone has values that are in fact incompatible with the culture of the organization, they shouldn’t be joining that organization. I thought that was clear in my previous statements, but it may in fact not have been. If, every damn time, their own values are at odds with what is best for the organization given its values, that’s an incompatible difference. They should either find a different organization, or try the archipelago model. There are such things as irreconcilable value differences.
I don’t think the OP is compatible with the shared values and culture established in Sequences-era Overcoming Bias and Less Wrong.
I agree. I think when that culture was established, the community was missing important concepts about motivated reasoning and truth seeking and chose values that were in fact not optimized for the ultimate goal of creating a community that could solve important problems.
I think it is in fact good to experiment with the norms you’re talking about from the original site, but I think many of those norms originally caused the site to decline and people to go elsewhere. Given my current mental models, I predict that a site using those norms will make less intellectual progress than a similar site using my norms, although I expect you to have the opposite intuition. As I stated in the introduction, the goal of this post was simply to make sure that those mental models were part of the discourse.
Re your dialogue: The main thing that I got from it was that you think a lot of the arguments in the OP are motivated reasoning and will lead to bad incentives. I also got that this is a subject you care a lot about.
I think when that culture was established, the community was missing important concepts about motivated reasoning and truth seeking
Can you be more specific? Can you name three specific concepts about motivated reasoning and truthseeking that you know, but Sequences-era Overcoming Bias/Less Wrong didn’t?
I think many of those norms originally caused the site to decline and people to go elsewhere.
I mean, that’s one hypothesis. In contrast, my model has been that communities congregate around predictable sources of high-quality writing, and people who can produce high-quality content in high volume are very rare. Thus, once Eliezer Yudkowsky stopped being active, and Yvain a.k.a. the immortal Scott Alexander moved to Slate Star Codex (in part so that he could write about politics, which we’ve traditionally avoided), all the “intellectual energy” followed Scott to SSC.
Can you think of any testable predictions (or retrodictions) that would distinguish my model from your model?
I also got that this is a subject you care a lot about.
Yes. Thanks for listening.
Can you be more specific? Can you name three specific concepts about motivated reasoning and truthseeking that you know, but Sequences-era Overcoming Bias/Less Wrong didn’t?
Here are a few:
The importance of creating a culture that develops Kegan 5 leaders who can take over from the current leaders and help meaningfully change the values as the context changes, in a way that doesn’t simply cause the organization to drift in values along with the current broader culture.
How ignoring or not attending to people’s needs creates incentives for motivated reasoning, and how to create spaces that get rid of those incentives WITHOUT being hijacked by whoever screams the loudest.
The importance of cultural tradition and ritual in embedding concepts, in augmenting their teaching, and in telling people which concepts are important.
Can you think of any testable predictions (or retrodictions) that would distinguish my model from your model?
No, because I think that our models are compatible. My model is about how to attract, retain, and develop people with high potential or skill who are in alignment with your community’s values, and your model says that failing to retain, attract, or develop people who matched the community’s values and had high writing skill is what caused it to fail.
If you can give a specific model of why LW1 failed to attract, retain, and develop high quality writers, then I think there’s a better space for comparison. Perhaps you can also point out some testable predictions that each of our models would make.
In contrast, my model has been that communities congregate around predictable sources of high-quality writing, and people who can produce high-quality content in high volume are very rare. Thus, once Eliezer Yudkowsky stopped being active, and Yvain a.k.a. the immortal Scott Alexander moved to Slate Star Codex (in part so that he could write about politics, which we’ve traditionally avoided), all the “intellectual energy” followed Scott to SSC.
First, I want to state that I agree with this model. However, I also want to note that the SSC comments section tends to have fairly low-quality discussion (in comparison to the OB/LW 1.0 heyday), and I’m not sure why this is; candidate hypotheses include that Scott’s explicit politics attracted people with lower epistemic standards, or that the lack of an explicit karma system allowed low-quality discussion to persist (but I don’t think OB had an explicit karma system either?).
Overall, I’m unsure as to what kind of norms/technology maintains high-quality discussion (as opposed to just the presence of discussion in general), and it’s plausible to me that the two may actually be somewhat mutually exclusive (in the sense that norms/technology designed to promote the volume of high-quality discussion may in fact reduce the volume of discussion in general). It’s not clear to me how this tradeoff should be balanced.
in part so that he could write about politics, which we’ve traditionally avoided
I want to state that I agree with this model.
(I sometimes think that I might be well-positioned to fill the market niche that Scott occupied in 2014, but no longer can due to his being extortable (“As I became more careful in my own writings [...]”) in a way that I have been trained not to be. But I would need to learn to write faster.)
One thing is that I think early OBNYC and LW just actually had a lot of chaff comments too. I think people disproportionately remember the great parts.
When you look at the question using that native architecture, it becomes relatively simple to find a reasonable answer. This is the same way that we regularly find solutions to complex negotiations between multiple parties, or plan complex situations with multiple constraints, even though many of those tasks are naively uncomputable.
I’m not confident that it does. I perhaps expect people doing this using the native architecture to feel like they’ve found a reasonable answer. But I would expect them to actually be prioritising their own feelings, in most cases. (Though some people will underweight their own feelings. And perhaps some people will get it right.)
Perhaps they will get close enough for the answer to still count as “reasonable”?
If someone attempts to give equal weight to their own needs, the needs of their interlocutor, and the needs of the forum as a whole—how do we know whether they’ve got a reasonable answer? Does that just have to be left to moderator discretion, or?
If someone attempts to give equal weight to their own needs, the needs of their interlocutor, and the needs of the forum as a whole—how do we know whether they’ve got a reasonable answer? Does that just have to be left to moderator discretion, or?
Yes, basically, but if the forum were to take on this direction, the idea would be to have enough case examples/explanations from the moderators about WHY they exercised that discretion to calibrate people’s reasonable answers. See also this response to Zack, which goes into more detail about the systems in place to calibrate people’s reasonable answers.