If not, presumably you think the benefit is outweighed by other costs—but what are those costs, specifically?
Some costs:
Such people seem much more likely to be fairly disagreeable themselves.
There are many fewer of them. I think I’ve probably gotten net-positive value out of my interactions with them to date, but I’ve definitely gotten a lot of value out of interactions with many people who wouldn’t fit the bill, and selecting against them would be a mistake.
To be clear, if I were to select people to interact with primarily on whatever qualities I expect to result in the most useful intellectual progress, I do expect that those people would both be at lower risk of being cognitively hijacked and more disagreeable than the general population. But the correlation isn’t overwhelming, and selecting primarily for “low risk of being cognitively hijacked” would not get me as much of the useful thing I actually want.
How large does something need to be in order to be a “community”?
As I mentioned in my reply to Said, I did in fact have medium-sized online communities in mind when writing that comment. I agree that stronger social bonds between individuals will usually change the calculus on communication norms. I also suspect that it’s positively tractable to change that frontier for any given individual relationship through deliberate effort, while that would be much more difficult[1] for larger communities.
There are many fewer of them [...] the correlation isn’t overwhelming [...] selecting primarily [...] would not get me as much of the useful thing I actually want
Sure, but the same arguments go through for, say, mathematical ability, right? The correlation between math-smarts and the kind of intellectual progress we’re (ostensibly) trying to achieve on this website isn’t overwhelming; selecting primarily for math prowess would get you less advanced rationality when the tails come apart.
And yet, I would not take this as a reason not to “structure communities like LessWrong in ways which optimize for participants being further along on this axis” for fear of “driving away [a …] fraction of an existing community’s membership”. In my own intellectual history, I studied a lot of math and compsci stuff because the culture of the Overcoming Bias comment section of 2008 made that seem like a noble and high-status thing to do. A website that catered to my youthful ignorance instead of challenging me to remediate it would have made me weaker rather than stronger.
LessWrong is obviously structured in ways which optimize for participants being quite far along that axis relative to the general population; the question is whether further optimization is good or bad on the margin.
I think we need an individualist conflict-theoretic rather than a collective mistake-theoretic perspective to make sense of what’s going on here.
If the community were being optimized by the God-Empress, who is responsible for the whole community and everything in it, then She would decide whether more or less math is good on the margin for Her purposes.
But actually, there’s no such thing as the God-Empress; there are individual men and women, and there are families. That’s the context in which Said’s plea to “keep your thumb off the scales, as much as possible” can even be coherent. (If there were a God-Empress determining the whole community and everything in it as definitely as an author determines the words in a novel, then you couldn’t ask Her to keep Her thumb off the scales. What would that even mean?)
In contrast to the God-Empress, mortals have been known to make use of a computational shortcut they call “not my problem”. If I make a post, and you say, “This has too many equations in it; people don’t want to read a website with too many equations; you’re driving off more value to the community than you’re creating”, it only makes sense to think of this as a disagreement if I’ve accepted the premise that my job is to optimize the whole community and everything in it, rather than to make good posts. If my position is instead, “I thought it was a good post; if it drives away people who don’t like equations, that’s not my problem,” then what we have is a conflict rather than a disagreement.
In contrast to the God-Empress, mortals have been known to make use of a computational shortcut they call “not my problem”. If I make a post, and you say, “This has too many equations in it; people don’t want to read a website with too many equations; you’re driving off more value to the community than you’re creating”, it only makes sense to think of this as a disagreement if I’ve accepted the premise that my job is to optimize the whole community and everything in it, rather than to make good posts. If my position is instead, “I thought it was a good post; if it drives away people who don’t like equations, that’s not my problem,” then what we have is a conflict rather than a disagreement.
Indeed. In fact, we can take this analysis further, as follows:
If there are people whose problem it is to optimize the whole community and everything in it (let us skip for the moment the questions of why this is those people’s problem, and who decided that it should be, and how), then those people might say to you: “Indeed it is not your problem, to begin with; it is mine; I must solve it; and my approach to solving this problem is to make it your problem, by the power vested in me.” At that point you have various options: accede and cooperate, refuse and resist, perhaps others… but what you no longer have is the option of shrugging and saying “not my problem”, because in the course of the conflict which ensued when you initially shrugged thus, the problem has now been imposed upon you by force.
Of course, there are those questions which we skipped—why is this “problem” a problem for those people in authority; who decided this, and how; why are they in authority to begin with, and why do they have the powers that they have; how does this state of affairs comport with our interests, and what shall we do about it if the answer is “not very well”; and others in this vein. And, likewise, if we take the “refuse and resist” option, we can start a more general conversation about what we, collectively, are trying to accomplish, and what states of affairs “we” (i.e., the authorities, who may or may not represent our interests, and may or may not claim to do so) should take as problems to be solved, etc.
In short, this is an inescapably political question, with all the usual implications. It can be approached mistake-theoretically only if all involved (a) agree on the goals of the whole enterprise, and (b) represent honestly, in discussion with one another, their respective individual goals in participating in said enterprise. (And, obviously, assuming that (a) and (b) hold, as a starting point for discussion, is unwise, to say the least!)
I also suspect that it’s positively tractable to change that frontier for any given individual relationship through deliberate effort, while that would be much more difficult[1] for larger communities.
[1] I think basically impossible in nearly all cases, but don’t have legible justifications for that degree of belief.
This seems diametrically wrong to me. I would say that it’s difficult (though by no means impossible) for an individual to change in this way, but very easy for a community to do so—through selective (and, to a lesser degree, structural) methods. (But I suspect you were thinking of corrective methods instead, and for that reason judged the task to be “basically impossible”—no?)
No, I meant that it’s very difficult to do so for a community without it being net-negative with respect to valuable things coming out of the community. Obviously you can create a new community by driving away an arbitrarily large fraction of an existing community’s membership; this is not a very interesting claim. And obviously having some specific composition of members does not necessarily lead to valuable output, but whether this gets better or worse is mostly an empirical question, and I’ve already asked for evidence on the subject.
Obviously you can create a new community by driving away an arbitrarily large fraction of an existing community’s membership; this is not a very interesting claim.
Is it not? Why?
In my experience, it’s entirely possible for a community to be improved by getting rid of some fraction of its members. (Of course, it is usually then desirable to add some new members, different from the departed ones—but the effect of the departures themselves may help to draw in new members, of a sort who would not have joined the community as it was. And, in any case, new members may be attracted by all the usual means.)
As for your empirical claims (“it’s very difficult to do so for a community without it being net-negative …”, etc.), I definitely don’t agree, but it’s not clear what sort of evidence I could provide (nor what you could provide to support your view of things)…