As an empirical matter of fact (per my anecdotal observations), it is very easy to derail conversations by “refusing to employ the bare minimum of social grace”. This does not require deception, though often it may require more effort to clear some threshold of “social grace” while communicating the same information.
People vary widely, but:
I think that most people (95%+) are at significant risk of being cognitively hijacked if they perceive rudeness, hostility, etc. from their interlocutor.
I don’t personally think I’d benefit from strongly selecting for conversational partners who are at low risk of being cognitively hijacked, and I think nearly all people who do believe that they’d benefit from this (compared to counterfactuals like “they operate unchanged in their current social environment” or “they put in some additional marginal effort to say true things with more social grace”) are mistaken.
Online conversations are one-to-many, not one-to-one. This multiplies the potential cost of that cognitive hijacking.
Obviously there are issues with incentives toward fragility here, but the fact that there does not, as far as I’m aware, exist any intellectually generative community which operates on the norms you’re advocating for is evidence that such a community is (currently) unsustainable.
I don’t personally think I’d benefit from strongly selecting for conversational partners who are at low risk of being cognitively hijacked, and I think nearly all people who do believe that they’d benefit from this [...] are mistaken.
I find this claim surprising and would be very interested to hear more about why you think this!!
I think the case for benefit is straightforward: if your interlocutors are selected for low risk of getting triggered, there’s a wider space of ideas you can explore without worrying about offending them. Do you disagree with that case for benefit? If so, why? If not, presumably you think the benefit is outweighed by other costs—but what are those costs, specifically? (Are non-hijackable people dumber—or more realistically, do they have systematic biases that can only be corrected by hijackable people? What might those biases be, specifically?)
there does not, as far as I’m aware, exist any intellectually generative community which operates on the norms you’re advocating for
How large does something need to be in order to be a “community”? Anecdotally, my relationships with my “fighty”/disagreeable friends seem more intellectually generative than the typical Less Wrong 2.0 interaction in a way that seems deeply related to our fightiness: specifically, I’m wrong about stuff a lot, but I think I manage to be less wrong with the corrective help of my friends who know I’ll reward rather than punish them for asking incisive probing questions and calling me out on motivated distortions.
Your one-to-many point is well taken, though. (The special magical thing I have with my disagreeable friends seems hard to scale to an entire website. Even in the one-to-one setting, different friends vary on how much full-contact criticism we manage to do without spiraling into a drama explosion and hurting each other.)
If not, presumably you think the benefit is outweighed by other costs—but what are those costs, specifically?
Some costs:
Such people seem much more likely to also themselves be fairly disagreeable.
There are many fewer of them. I think I’ve probably gotten net-positive value out of my interactions with them to date, but I’ve definitely gotten a lot of value out of interactions with many people who wouldn’t fit the bill, and selecting against them would be a mistake.
To be clear, if I were to select people to interact with primarily on whatever qualities I expect to result in the most useful intellectual progress, I do expect that those people would both be at lower risk of being cognitively hijacked and more disagreeable than the general population. But the correlation isn’t overwhelming, and selecting primarily for “low risk of being cognitively hijacked” would not get me as much of the useful thing I actually want.
How large does something need to be in order to be a “community”?
As I mentioned in my reply to Said, I did in fact have medium-sized online communities in mind when writing that comment. I agree that stronger social bonds between individuals will usually change the calculus on communication norms. I also suspect that it’s positively tractable to change that frontier for any given individual relationship through deliberate effort, while that would be much more difficult[1] for larger communities.
There are many fewer of them [...] the correlation isn’t overwhelming [...] selecting primarily [...] would not get me the as much of the useful thing I actually want
Sure, but the same arguments go through for, say, mathematical ability, right? The correlation between math-smarts and the kind of intellectual progress we’re (ostensibly) trying to achieve on this website isn’t overwhelming; selecting primarily for math prowess would get you less advanced rationality when the tails come apart.
And yet, I would not take this as a reason not to “structure communities like LessWrong in ways which optimize for participants being further along on this axis” for fear of “driving away [a …] fraction of an existing community’s membership”. In my own intellectual history, I studied a lot of math and compsci stuff because the culture of the Overcoming Bias comment section of 2008 made that seem like a noble and high-status thing to do. A website that catered to my youthful ignorance instead of challenging me to remediate it would have made me weaker rather than stronger.
LessWrong is obviously structured in ways which optimize for participants being quite far along that axis relative to the general population; the question is whether further optimization is good or bad on the margin.
I think we need an individualist conflict-theoretic rather than a collective mistake-theoretic perspective to make sense of what’s going on here.
If the community were being optimized by the God-Empress, who is responsible for the whole community and everything in it, then She would decide whether more or less math is good on the margin for Her purposes.
But actually, there’s no such thing as the God-Empress; there are individual men and women, and there are families. That’s the context in which Said’s plea to “keep your thumb off the scales, as much as possible” can even be coherent. (If there were a God-Empress determining the whole community and everything in it as definitely as an author determines the words in a novel, then you couldn’t ask Her to keep Her thumb off the scales. What would that even mean?)
In contrast to the God-Empress, mortals have been known to make use of a computational shortcut they call “not my problem”. If I make a post, and you say, “This has too many equations in it; people don’t want to read a website with too many equations; you’re driving off more value to the community than you’re creating”, it only makes sense to think of this as a disagreement if I’ve accepted the premise that my job is to optimize the whole community and everything in it, rather than to make good posts. If my position is instead, “I thought it was a good post; if it drives away people who don’t like equations, that’s not my problem,” then what we have is a conflict rather than a disagreement.
In contrast to the God-Empress, mortals have been known to make use of a computational shortcut they call “not my problem”. If I make a post, and you say, “This has too many equations in it; people don’t want to read a website with too many equations; you’re driving off more value to the community than you’re creating”, it only makes sense to think of this as a disagreement if I’ve accepted the premise that my job is to optimize the whole community and everything in it, rather than to make good posts. If my position is instead, “I thought it was a good post; if it drives away people who don’t like equations, that’s not my problem,” then what we have is a conflict rather than a disagreement.
Indeed. In fact, we can take this analysis further, as follows:
If there are people whose problem it is to optimize the whole community and everything in it (let us skip for the moment the questions of why this is those people’s problem, and who decided that it should be, and how), then those people might say to you: “Indeed it is not your problem, to begin with; it is mine; I must solve it; and my approach to solving this problem is to make it your problem, by the power vested in me.” At that point you have various options: accede and cooperate, refuse and resist, perhaps others… but what you no longer have is the option of shrugging and saying “not my problem”, because in the course of the conflict which ensued when you initially shrugged thus, the problem has now been imposed upon you by force.
Of course, there are those questions which we skipped—why is this “problem” a problem for those people in authority; who decided this, and how; why are they in authority to begin with, and why do they have the powers that they have; how does this state of affairs comport with our interests, and what shall we do about it if the answer is “not very well”; and others in this vein. And, likewise, if we take the “refuse and resist” option, we can start a more general conversation about what we, collectively, are trying to accomplish, and what states of affairs “we” (i.e., the authorities, who may or may not represent our interests, and may or may not claim to do so) should take as problems to be solved, etc.
In short, this is an inescapably political question, with all the usual implications. It can be approached mistake-theoretically only if all involved (a) agree on the goals of the whole enterprise, and (b) represent honestly, in discussion with one another, their respective individual goals in participating in said enterprise. (And, obviously, assuming that (a) and (b) hold, as a starting point for discussion, is unwise, to say the least!)
I also suspect that it’s positively tractable to change that frontier for any given individual relationship through deliberate effort, while that would be much more difficult[1] for larger communities.
[1] I think basically impossible in nearly all cases, but don’t have legible justifications for that degree of belief.
This seems diametrically wrong to me. I would say that it’s difficult (though by no means impossible) for an individual to change in this way, but very easy for a community to do so—through selective (and, to a lesser degree, structural) methods. (But I suspect you were thinking of corrective methods instead, and for that reason judged the task to be “basically impossible”—no?)
No, I meant that it’s very difficult to do so for a community without it being net-negative with respect to valuable things coming out of the community. Obviously you can create a new community by driving away an arbitrarily large fraction of an existing community’s membership; this is not a very interesting claim. And obviously having some specific composition of members does not necessarily lead to valuable output, but whether this gets better or worse is mostly an empirical question, and I’ve already asked for evidence on the subject.
Obviously you can create a new community by driving away an arbitrarily large fraction of an existing community’s membership; this is not a very interesting claim.
Is it not? Why?
In my experience, it’s entirely possible for a community to be improved by getting rid of some fraction of its members. (Of course, it is usually then desirable to add some new members, different from the departed ones—but the effect of the departures themselves may help to draw in new members, of a sort who would not have joined the community as it was. And, in any case, new members may be attracted by all the usual means.)
As for your empirical claims (“it’s very difficult to do so for a community without it being net-negative …”, etc.), I definitely don’t agree, but it’s not clear what sort of evidence I could provide (nor what you could provide to support your view of things)…
I think that most people (95%+) are at significant risk of being cognitively hijacked if they perceive rudeness, hostility, etc. from their interlocutor.
Would you include yourself in that 95%+?
there does not, as far as I’m aware, exist any intellectually generative community which operates on the norms you’re advocating for,
There certainly exist such communities. I’ve been part of multiple such, and have heard reports of numerous others.
Probably; I think I’m maybe in the 80th or 90th percentile on the axis of “can resist being hijacked”, but not 95th or higher.
There certainly exist such communities. I’ve been part of multiple such, and have heard reports of numerous others.
Can you list some? On a reread, my initial claim was too broad, in the sense that there are many things that could be called “intellectually generative communities” which could qualify, but they mostly aren’t the thing I care about (in context, not-tiny online communities where most members don’t have strong personal social ties to most other members).
Probably; I think I’m maybe in the 80th or 90th percentile on the axis of “can resist being hijacked”, but not 95th or higher.
Suppose you could move up along that axis, to the 95th percentile. Would you consider that a change for the better? For the worse? A neutral shift?
Can you list some?
I’m afraid I must decline to list any of the currently existing such communities which I have in mind, for reasons of prudence (or paranoia, if you like). (However, I will say that there is a very good chance that you’ve used websites or other software which were created in one of these places, or benefited from technological advances which were developed in one of these places.)
As for now-defunct such communities, though—well, there are many examples, although most of the ones I’m familiar with are domain-specific. A major category of such were web forums devoted to some hobby or other (D&D, World of Warcraft, other games), many of which were truly wondrous wellsprings of creativity and inventiveness in their respective domains—and which had norms basically identical to what Zack advocates.
Suppose you could move up along that axis, to the 95th percentile. Would you consider that a change for the better? For the worse? A neutral shift?
All else equal, better, of course. (In reality, all else is rarely equal; at a minimum there are opportunity costs.)
I’m afraid I must decline to list any of the currently existing such communities which I have in mind, for reasons of prudence (or paranoia, if you like). (However, I will say that there is a very good chance that you’ve used websites or other software which were created in one of these places, or benefited from technological advances which were developed in one of these places.)
See my response to Zack (and previous response to you) for clarification on the kinds of communities I had in mind; certainly I think such things are possible (& sometimes desirable) in more constrained circumstances.
ETA: and while in this case I have no particular reason to doubt your report that such communities exist, I have substantial reason to believe that if you were to share what those communities were with me, I probably wouldn’t find that most of them were meaningful counterevidence to my claim (for a variety of reasons, including that my initial claim was overbroad).
Suppose you could move up along that axis, to the 95th percentile. Would you consider that a change for the better? For the worse? A neutral shift?
All else equal, better, of course. (In reality, all else is rarely equal; at a minimum there are opportunity costs.)
Sure, opportunity costs are always a complication, but in this case they are somewhat beside the point. If indeed it’s better to be further along this axis (all else being equal), then it seems like a bad idea to encourage and incentivize being lower on this axis, and to discourage and disincentivize being further on it. But that is just what I see happening!
If indeed it’s better to be further along this axis (all else being equal), then it seems like a bad idea to encourage and incentivize being lower on this axis, and to discourage and disincentivize being further on it. But that is just what I see happening!
The consequent does not follow. It might be better for an individual to press a button, if pressing that button were free, which moved them further along that axis. It is not obviously better to structure communities like LessWrong in ways which optimize for participants being further along on this axis, both because this is not a reliable proxy for the thing we actually care about and because it’s not free.
That it’s “not free” is a trivial claim; very few things are truly free. But that it costs very little, not even to encourage moving upward along that axis, but simply to avoid encouraging the opposite (to keep your thumb off the scales, as much as possible): this seems to me hard to dispute.
because this is not a reliable proxy for the thing we actually care about
Could you elaborate? What is the thing we actually care about, and what is the unreliable proxy?
See my response to Zack (and previous response to you) for clarification on the kinds of communities I had in mind; certainly I think such things are possible (& sometimes desirable) in more constrained circumstances.
Sorry, I’m not quite sure which “previous response” you refer to. Link, please?
As I mentioned in my reply to Said, I did in fact have medium-sized online communities in mind when writing that comment. I agree that stronger social bonds between individuals will usually change the calculus on communication norms. I also suspect that it’s positively tractable to change that frontier for any given individual relationship through deliberate effort, while that would be much more difficult[1] for larger communities.
they mostly aren’t the thing I care about (in context, not-tiny online communities where most members don’t have strong personal social ties to most other members)
So, “not-tiny online communities where most members don’t have strong personal social ties to most other members”…? But of course that is exactly the sort of thing I had in mind, too. (What did you think I was talking about…?)
Anyhow, please reconsider my claims, in light of this clarification.
ETA: and while in this case I have no particular reason to doubt your report that such communities exist, I have substantial reason to believe that if you were to share what those communities were with me, I probably wouldn’t find that most of them were meaningful counterevidence to my claim (for a variety of reasons, including that my initial claim was overbroad).
This is understandable, but in that case, do you care to reformulate your claim? I certainly don’t have any idea what you had in mind, given what you say here, so a clarification is in order, I think.
https://www.lesswrong.com/posts/h2Hk2c2Gp5sY4abQh/lack-of-social-grace-is-an-epistemic-virtue?commentId=QQxjoGE24o6fz7CYm
https://www.lesswrong.com/posts/h2Hk2c2Gp5sY4abQh/lack-of-social-grace-is-an-epistemic-virtue?commentId=Dy3uyzgvd2P9RZre6