I’ll note that some of the Chicken Littles were clamoring for an off-ramp
I really dislike the dismissal of people who wanted to preserve easy exit as an abuse avoidance measure. I get that it can feel like an accusation, but being able to say “we should have this preventative measure” without implying anything negative about anyone is critical to preventing abuse, because it lets you put the measure in place before the abuse is so bad it’s obvious. I also (knowing basically nothing about Duncan or the way the suggestion was delivered, and having vague positive feelings about the project goals) think that “something intolerable happens” is a reasonable concern and “leaving” is a reasonable solution.
Conflation.

All squares are rectangles, but not all rectangles are squares.
You are responding as if I said “all people who wanted to preserve an easy exit as an abuse avoidance measure were Chicken Littles.”
I did not. I said “Some of the Chicken Littles were clamoring for an easy exit as an abuse avoidance measure.”
This is an important distinction that I expect we should have a culture of not missing here on LessWrong. You can go back to the original post and see plenty of examples of me responding positively to people who were concerned about abuse risk. You can also see me publicly committing to specific changes, and publicly admitting specific updates. I was not dismissive in the way that your first sentence (correctly!) disagrees with being; I very much strongly agree with the sentence “being able to say ‘we should have this preventative measure’ without [that being taken as] implying anything negative about anyone is critical to preventing abuse.”
(provided that being able to disagree with the call for Specific Preventative Measure X is not conflated with “doesn’t care about preventing abuse.”)
You are responding as if I said “all people who wanted to preserve an easy exit as an abuse avoidance measure were Chicken Littles.”
I did not. I said “Some of the Chicken Littles were clamoring for an easy exit as an abuse avoidance measure.”
English sentences like the latter can have something like the former as a possible meaning. (I want to say “implicature” but I’m not totally sure that’s the correct technical term.) So I think even putting aside politics and coalitions, the sentence is ambiguous as a matter of linguistics.
Consider this example:
Some nice people helped me look for my dog yesterday.
It seems clear that the meaning of this sentence is closer to “Some people helped look for my dog yesterday, and they were all nice people.” or “Some people helped look for my dog yesterday, and that makes me think they are nice.” than “The intersection of sets [nice people] and [people who helped me look for my dog yesterday] is non-empty.”
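For concreteness, the two candidate readings can be spelled out in predicate-logic notation (a sketch of my own, not part of the original comment; the predicate names are invented for illustration):

$\exists x\,(\mathrm{HelpedMe}(x) \land \mathrm{Nice}(x))$ (the bare “non-empty intersection” reading: at least one of the helpers happened to be nice)

$\forall x\,(\mathrm{HelpedMe}(x) \rightarrow \mathrm{Nice}(x))$ (closer to what the sentence actually conveys: everyone who helped was nice)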
Or this example:
Some of the undisciplined children in his class couldn’t sit still for more than a few seconds at a time.
This one is more ambiguous than the one above. I can make my brain flip between perceiving two different meanings, one where there’s a pre-identified group of undisciplined children, and the speaker observed some of them not being able to sit still, and another one where the speaker thinks that not being able to sit still is prima facie evidence for a child being insufficiently disciplined.
I linked this elsewhere in this thread too, but it seems particularly relevant here: http://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/

… suppose the atheist posts on Tumblr: “I hate religious people who are rabidly certain that the world was created in seven days or that all their enemies will burn in Hell, and try to justify it through ‘faith’. You know, the sort of people who think that the Bible has all the answers and who hate anyone who tries to think for themselves.”
Now there’s practically no implication that these people are typical. So that’s fine, right?
On the other side of the world, a religious person is writing “I hate atheists who think morality is relative, and that this gives them the right to murder however many people stand between them and a world where no one is allowed to believe in God”.
Again, not a straw man. The Soviet Union contained several million of these people. But if you’re an atheist, would you just let this pass?
How about “I hate black thugs who rob people”?
What are the chances a black guy reads that and says “Well, good thing I’m not a thug who robs people, he’ll probably love me”?
The moral of the story, as that SSC post discusses in more detail, is that lines like ‘the chicken littles made this stupid argument’ will end up rubbing off on everyone who made a similar-sounding argument, the same way that “I hate black thugs who rob people” still associates black people with thugs even though it’s explicitly stated that only a subgroup of black people is being talked about.
When people object to those kinds of lines, in a way which arguably misses the distinction being drawn, that is a natural immune reaction intended to prevent the larger group from being infected by the stigma of the smaller group. I don’t know what the consequences of having a norm against making those objections would be, but given that it would interfere with a natural immune reaction that seems to serve an important role in maintaining healthy social dynamics, it seems like the kind of thing that would be a bad idea to tamper with.
Or to put it differently, when you say
You are responding as if I said “all people who wanted to preserve an easy exit as an abuse avoidance measure were Chicken Littles.”
I did not. I said “Some of the Chicken Littles were clamoring for an easy exit as an abuse avoidance measure.”
Then while that is true, the fact that elizabeth is responding as if you had said the first thing should be a hint that the first thing is how many people’s brains will tend to interpret your statement on a System 1 level, which means that the first one is the message that this line is actually sending, regardless of what the authorial intent was.
Literally the only reason I’m on LessWrong is because of the tiniest glimmer of a hope that this can be a place where people actually respond to what was said, rather than to their own knee-jerk stereotypes and rounded off assumptions. That this can be a place where people will actually put forth the effort to get the basic everywhere everyday flawed human communication bugs out of the picture, and do deliberate and intentional communication and collaborative truth seeking on a meaningfully higher level. That that’s the actual goal—that when people stick the flag of “Less Wrong” in the ground, they mean it, and are willing to put their social capital on the line to grow it and defend it. That this can be a place where we don’t just throw up our hands, give up, cave in, and cater to the lowest common denominator.
That this can be a place where the truth is ACTUALLY a defense against criticism. That if somebody here gets mad at you for what they think you said, and the record shows that you didn’t say that, everyone will agree that it’s the person who got mad who’s in the wrong and should’ve done differently, not you.
Everything Scott said in that post rings true to me about people and populations in general. But the hope is that LessWrong is not just humans doing business as usual. The hope is that LessWrong is actually different. That the mods and individual members will make it different, on purpose, with deliberate effort and maintenance, according to solid principles consistently adhered to. That we’ll put forth the effort to coordinate on this particular stag hunt, and not just keep cynically sliding back toward the same old boring rabbit rabbit rabbit and being apologists for our own irrationality. I can indeed give up on that hope, but the result will be me leaving and taking my content down and not coming back.
And yes, it’d be right in line with how “many people’s brains” work to interpret that as a threat, or an attempt to hold the community hostage, or whatever. But it’s not that—it’s simply a statement that that’s the value I see in LessWrong, and if that value isn’t there, LessWrong isn’t worth it. If LessWrong is a place where you can be punished for other people’s failure to listen and think carefully, then it’s not meaningfully different from the entire rest of the internet.
If this place isn’t trying to be special in that way, then in what way is it trying to be special?
I agree that we should strive for discussion norms which allow for more rational discussion, and which cause people to respond to what an author actually said, rather than responding to a stereotype in their heads. And that this is pretty much the whole point of Less Wrong.
At the same time, I think that something like “pick your battles” applies. Justified or not, there’s already a relatively strong norm against political discussion on LW, arising in part from the assumption that politics is such a mind-killer that there’s no point in even trying to discuss it in rational terms. That seems like a concession to the fact that we’re still human and driven by human coalitional instincts, and that does need to be taken into account, even if we strive to overcome it.
Now, by its nature, LW already tends to attract the kinds of people who want to focus on a relatively literal interpretation of what was said, and don’t need to be explicitly told so. Most of the implicit conversational norms here don’t arise from anybody needing to be told to “interpret this more literally”, but rather out of everyone naturally preferring that kind of conversation.
To me, this suggests that if there’s something which causes even many people like us to instinctively and automatically react by reading something in a non-literal way, then that reaction is a very powerful force, and that special caution is required. If we are suggesting a norm for dealing with that reaction, then we should at least try to do some basic cost/benefit analysis for that proposed norm, keeping in mind its likely function.
I read you as proposing some kind of norm like “always read claims about subsets of specific groups as only referring to that subset, if the claim is worded in such a way as to make that the literal interpretation”.
To me, a major cost would be that this feels similar to a norm of “always stay rational when discussing politics”; leaving aside the fact that such a norm is uselessly vague, there’s also the fact that “rationality when discussing politics” isn’t something that would be under people’s conscious control. Unless they are very, very good, they are going to go tribal even if they try not to.
Similarly, I think that people’s S1 associations of groups will be affected by claims where subgroups are lumped together with the larger group, regardless of how they’re told to read different sentences. If we try, we might be successful in enforcing a norm against complaining about such claims, but we can’t force people’s intuitions not to be affected by the claims.
So, if we were successful in enforcing that norm, then that would in effect be incentivizing people to associate low-status subgroups with larger groups they don’t like, since it couldn’t be called out anymore. That seems bad.
On the other hand, it would make things somewhat easier on well-meaning authors, since they wouldn’t need to worry so much about adding all kinds of explicit disclaimers when talking about subgroups who might be associated with larger groups. A lot of people have been complaining about LW being too hostile an environment to post on, so reducing authorial stress seems good.
The alternative would be to not have such a norm, in which case authors will sometimes get challenged in the comments for subgroup claims, giving them the opportunity to clarify that yes, they only meant to refer to the subgroup, not the whole group (as you’ve done). This seems like it would cause authors some occasional frustration, but given the general climate on LW, if they just clarify that “no, I really did only mean the subgroup, the larger group is fine” to everyone who objects, then that should mostly settle it.
My current feeling is that, in large part due to the existing no-politics norm, it’s currently very rare for authors to discuss subgroups in a manner which would necessitate any objections. Thus the cost to authors of not having the proposed norm seems quite low to me; symmetrically, the extra value gained from establishing the norm would also be low. I find it difficult to guess what the actual cost of having the norm would be, so based on my heuristic of “be careful about the magnitude of costs you don’t have a good model for”, and in light of the seemingly limited value, I would feel hesitant about endorsing the norm.
I read you as proposing some kind of norm like “always read claims about subsets of specific groups as only referring to that subset, if the claim is worded in such a way as to make that the literal interpretation”.
This is not at all what I’m proposing; your post is way more fixated on the particular example than I expected. The radical norm that I am proposing is simply “read the words that people say, and process them attentively, and respond to those words.” The political subset doesn’t need to be considered separately, because if you have a community that supports and reinforces actually reading the words and processing them and responding to them, that’s sufficient.
To me, this suggests that if there’s something which causes even many people like us to instinctively and automatically react by reading something in a non-literal way, then that reaction is a very powerful force, and that special caution is required. If we are suggesting a norm for dealing with that reaction, then we should at least try to do some basic cost/benefit analysis for that proposed norm, keeping in mind its likely function.
I don’t think cost/benefit analysis is the appropriate frame, here. I think this is the sole purpose, the sole mission. You don’t walk into a martial arts academy and say, let’s do a cost/benefit analysis on whether this whole kicking and punching thing is even worthwhile in a world full of guns. The frame is set—if you don’t like martial arts, don’t show up. Similarly, we shouldn’t be evaluating whether or not to hold ourselves to a standard of rationality, even if doing so is very difficult in some subsets of cases. That question should be answered before one decides whether or not to show up on LESS WRONG. If a person thinks it’s too costly, they shouldn’t be here.
Cost/benefit analyses can help us choose which of several different strategic paths toward the goal to take, and they can help us prioritize among multiple operationalizations of that goal, but they shouldn’t be used to let ourselves off the hook in exactly the areas where rationality is most missing or difficult, and therefore improvements are most needed.
I’m not going to add any further responses to this subthread, because I’ve said all I have to say. Either LW will agree that this is something worth coordinating to all-choose-stag on, or it won’t. It looks like, given the attitudes of most of the mods, it’ll probably be “won’t,” but there’s still room for hope.
The political subset doesn’t need to be considered separately, because if you have a community that supports and reinforces actually reading the words and processing them and responding to them, that’s sufficient.
Since you expressed a desire to disengage from the conversation, I’ll just briefly note for the benefit of others that this excerpt seems like the biggest crux and point of disagreement. To me, coalitional instincts are something that are always active in every group, and whose influence needs to be actively fought back, or they will subvert the goals of the group; just deciding to ignore the political aspects of things, without considering in detail the effect that this change will have on social dynamics, is never sufficient.
… I’ll just briefly note for the benefit of others that this excerpt seems like the biggest crux and point of disagreement. …
In the interest of the general norm of “trying to identify cruxes and make them explicit”, I’d like to endorse this—except that to me, the issue goes well beyond “human coalitions” and also encompasses many other things that would generally fall under the rubric of ‘politics’ in a broad sense—or for that matter, of ‘ethics’ or ‘morality’! When people, plausibly, were ‘politically’ mindkilled by Duncan’s Dragon Army proposal, this was not necessarily due to their belonging to an “anti-Duncan”, “anti-rationality” or whatever-coalition; instead, the proposal itself may have been aversive to them in a rather deep sense, involving what they regarded as their basic values. This impacts the proposed solution as well, of course; it may not be sufficient to “actively fight back” a narrow coalitional instinct. A need may arise for addressing “the political [or for that matter, moral, ethical etc.] aspects of things” at a somewhat deeper level, one that goes beyond a conventional “arguments and evidence” structure to seek out ‘cruxes’ in our far more fundamental attitudes and address them with meaningful and creative compromises.
Yeah, agreed. It’s not just “political instincts”, it’s that humans are always operating in what’s a fundamentally social reality, of which coalitional instincts are a very large part but not the entirety.
I kinda dislike the “actively fight back” framing too, since it feels like a “treating your own fundamental humanity as an enemy” kind of thing that’s by itself something that we should be trying to get out of; but the easiest links that I had available that concisely expressed the point used that language, so I went with that.
I actually thought the “coalitional” part did deserve a mention, precisely because it is one of the few facets of the problem that we can just fight (which is not to say that coalitions don’t have a social and formal role to play in any actual political system!). Again, I think Crick would also agree with this, and ISTM that he did grapple with these issues at a pretty deep level. If we’re going to go beyond our traditional “no politics!” attitude, I really have to wonder why he’s not considered a trusted reference here, on a par w/ the Sequences and whatever the latest AI textbook is.
Yeah, you’re probably correct; I don’t feel like I’d have a good enough handle of your model to even attempt your ITT. (If this is your way of subtly pointing out that the thing I identified as a crux is likely wrong, correction accepted.)
That this can be a place where people will actually put forth the effort to get the basic everywhere everyday flawed human communication bugs out of the picture, and do deliberate and intentional communication and collaborative truth seeking on a meaningfully higher level. … Everything Scott said in that post rings true to me about people and populations in general. But the hope is that LessWrong is not just humans doing business as usual. The hope is that LessWrong is actually different.
Look, I hate to break the news to you, but just like Soylent Green, Less Wrong is people! Your goals and aspirations are extremely worthwhile and I entirely agree with them, but to whatever extent they succeed, it will NOT be because “LessWrong is not just humans doing business as usual”! Rather, it will be because the very definition of “business as usual”—and, crucially, “politics as usual”!—will have been successfully modified and perfected to make it more in line with both human values (humaneness) as they actually exist out there, in the real world, and a general norm of truth seeking and deliberation (which is, however, I claim, not a sufficient condition for achieving this goal, other ‘norms of engagement’ being just as important). This is what it actually means to “raise the sanity waterline”! Making us less human and perhaps more Clippy-like (that is, with the entirety of accepted discourse being “Hey, it looks like you might be having problem X! Would you like me to use advanced Bayesian inference techniques to help you assess this problem and provide you with a helpful, canned solution to it? [OK]/[CANCEL]”) is not a sensible or feasible goal, and it is indeed somewhat puzzling that you, as a CFAR instructor yourself, do not immediately notice and engage with this important point.