I think you’re underestimating the degree of social intelligence required. To pull that off while still keeping the rationalistic habits that such people find offensive, you’d have to:
Recognize the problem, which is nontrivial,
Find a way of figuring out who falls on which side of the line, without tipping people off,
Determine all of the rationalistic habits that are likely to offend people who are not trying to become more rational,
Find non-offensive ways of achieving those goals, or find ways of avoiding those situations entirely,
Find a way not to slip up in conversation and apply the habits anyway—again, nontrivial. Keeping this degree of focus in realtime is hard.
You’d also probably have to at least to some degree integrate the idea that it’s ‘okay’ (not correct, just acceptable) to be irrational into your general thought process, to avoid unintentional signaling that you think poorly of them. If anything, irrational people are more likely to notice such subtle signals, since so much of their communication is based on them.
Or, you could just treat the existence of irrationality as a mere fact, like the fact that water freezes or runs downhill. Facts are not a matter of correctness or acceptability, they just are.
In fact (no pun intended), assigning “should-ness” to facts or their opposites in our brains is a significant force in our own irrationality. To say that people “should” be rational is like saying that water “should” run uphill—it says more about your value system than about the thing supposedly being pointed to.
Functionally, beliefs about “should” and “should not” assign aversive consequences to current reality—if I say water “should” run uphill, then I am saying that it is bad that it does not. The practical result is to incur an aversive emotional response every time I am exposed to the fact that water runs downhill—a response which does not benefit me in any way.
A saner, E-prime-like translation of “water should run uphill” might be, “I would prefer that water ran uphill”. My preference is just as unlikely to be met in that case, but I do not experience any aversion to the fact that reality does not currently match my preference. And I can still experience a positive emotional response from, say, crafting nice fountains that pump water uphill.
It seems to me that a rationalist would experience better results in life if he or she did not experience aversive emotions from exposure to common facts… such as the fact that human beings run on hardware that’s poorly designed for rationality.
Without such aversions, it would be unnecessary to craft complex strategies to avoid signaling them to others. And, equally important, having aversive responses to impersonal facts is a strong driver of motivated reasoning that’s hard to detect in ourselves!
Good summary; the confusion of treating natural, mindless phenomena with the intentional stance was addressed in the Three Fallacies of Teleology post.
When it is possible to change the situation, emotion directed the right way acts as a reinforcement signal, and helps to learn the correct behavior (and generally to focus on figuring out a way of improving the situation). Attaching the right amount of the right emotions to the right situations is an indispensable tool, good for efficiency and comfort.
The piece you may have missed is that even if the situation can be changed, it is still sufficient to use positive reinforcement to motivate action, and in human beings it is generally most useful to motivate positive action with positive reinforcement.
This is because, on the human platform at least, positive reinforcement leads to exploratory, creative, and risk-taking behaviors, whereas negative reinforcement leads to defensive, risk-avoidance, and passive behaviors. So if the best way to change a situation is to avoid it, then by all means, use negative reinforcement.
However, if the best way to change the situation is to engage with it, then negative emotions and “shoulds” are your enemy, not your friend, as they will cause your mind and body to suggest less-useful behaviors (and signals to others).
IAWYC, modulo the use of “should”: at least with the connotations assumed on Less Wrong, it isn’t associated with compulsion or emotional load; it merely denotes preference. “Ought” would be closer.
It’s true that in technical contexts “should” has less emotional connotation; however, even in, say, standards documents (RFC 2119, for instance), one capitalizes SHOULD and MUST to highlight the technical, rather than colloquial, sense of these words. Banishing them from one’s personal vocabulary greatly reduces suffering, and is the central theme of “The Work” of Byron Katie (who teaches a simple 4-question model for turning “shoulds” into facts and felt-preferences).
Among a community of rationalists striving for better communication, it would be helpful to either taboo the words or create alternatives. As it is, a lot of “shoulds” get thrown around here without reference to what goal or preference the shoulds are supposed to serve.
“One should X” conveys no information about what positive or negative consequences are being asserted to stem from doing or not-doing X—and that’s precisely the sort of information that we would like to have if we are to understand each other.
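To make that concrete, here is a minimal sketch in Python (the class and field names are hypothetical, chosen only for illustration, not something anyone in this thread has proposed) of the difference between a bare “should” and the goal-referenced claim it hides:

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        """A 'should' with its hidden parts made explicit."""
        action: str       # what is being recommended
        goal: str         # the preference the recommendation is supposed to serve
        consequence: str  # what doing or not doing the action is expected to cause

        def as_bare_should(self) -> str:
            # The usual phrasing: the goal and the consequence are simply dropped.
            return f"One should {self.action}."

        def as_explicit_claim(self) -> str:
            # The tabooed version: goal and consequence are stated, so they can be examined.
            return f"If your goal is {self.goal}, then {self.action}, because {self.consequence}."

    # Hypothetical example content:
    r = Recommendation(
        action="state the goal your advice serves",
        goal="being understood by other rationalists",
        consequence="listeners can check whether the advice actually serves that goal",
    )
    print(r.as_bare_should())
    print(r.as_explicit_claim())

The point isn’t the code itself; it’s that the second phrasing carries exactly the two pieces of information the first one leaves out.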
Agreed. Even innocuous-looking exceptions, like phrases of the form, “if your goal is to X, then you should Y”, have to make not-necessarily-obvious assumptions about what exactly Y is optimizing.
Avoiding existing words is in many cases a counterproductive injunction; it’s normal practice for words to get stolen as terms of art. “Should” refers to the sum total of ideal preference, the top-level terminal goal, over all of the details (consequences) together.
“Should” may require a consequentialist explanation for instrumental actions, or a moral argument for preference over consequences.
Agreed. This is one of the major themes of some (most?) meditation practices and seems to be one of the most useful.
I seriously doubt we’re capable of not associating it with those things, though.
I think of “should” and “ought” as exactly synonymous, btw.
Thanks to both of you for expressing so clearly what I failed to, and with links!
That’s just what I was trying to get at. Thanks for the clarification.
The problems you cite in bullets are only nontrivial if you don’t sufficiently value social cohesion. My biggest faux pas have sufficiently conditioned me to make them less often because I put a high premium on that cohesion. So I think it’s less a question of social intelligence and more one of priorities. I don’t have to keep “constant focus”—after a few faux pas it becomes plainly apparent which subjects are controversial and which aren’t, and when we do come around to touchy ones I watch myself a little more.
I thought I would get away with that simplification. Heh.
Those skills do come naturally to some people, but not everyone. They certainly don’t come naturally to me. Even if I’m in a social group with rules that allow me to notice that a faux pas has occurred (not all do; some groups consider it normal to obscure such things to the point where I’ll find out weeks or months later, if at all), it’s still not usually obvious what I did wrong or what else I could do instead, and I have to intentionally sit down and come up with theories that I may or may not even have a chance to test.
Right, I get that people fare differently when it comes to this stuff, but I do think it’s a matter of practice and attention more than innate ability (for most people). And this is really my point, that the sort of monastic rationality frequently espoused on these boards can have politically antirational effects. It’s way easier to influence others if you first establish a decent rapport with them.
I don’t at all disagree that the skills are good to learn, especially if you’re going to be focusing on tasks that involve dealing with non-rationalists. I think it may be a bit of an overgeneralization to say that they should be a high priority for everyone, but probably not much of one.
I do have a problem with judging people for not having already mastered those skills, or for having higher priorities than tackling those skills immediately with all their energy, though, which seems to be what you’re doing. Am I inferring too much when I come to that conclusion?
Look, this whole thread started because of Annoyance’s judgment of people who have higher priorities than rationality, right? Did you have a problem with that?
All I’m saying is that this community in general gives way too short shrift to the utility of social cohesion. Sorry if that bothers you.
Quote, please?
Most of what he said condenses to “people who are not practicing rationality are irrational”, which is only an insult if you consider ‘irrational’ to be an insult, which I didn’t see any evidence of. I saw frustration at the difficulty in dealing with them without social awkwardness, but that’s not the same.
Have I missed something?
Yes, and most of what I said reduces to “Annoyance is not practicing rationality with statements like ‘social cohesion is one of the enemies of rationality.’” You said you had a “problem” with my contention, and then I pointed out that Annoyance had made a qualitatively similar claim that hadn’t bothered you. Aside from our apparent disagreement on the point, I don’t get how my claim could be a problem for you.
I think I’ve made myself clear and this is getting tiresome so I’ll invite you to have the last word.
I hope I’m not the only one who sees the irony in you refusing to answer my question about your reasoning, given where this thread started.
I guess the best option now is to sum this disagreement up in condensations. For simplicity’s sake, I’m only going to do comments on the branch that leads directly here. I’m starting with this comment.
JamesCole: Quoted a hypothetical social-norm suggestion, disagreed, offered an alternate suggestion, and offered supporting logic.
JamesCole: Restated supporting logic.
Me: Agreed, offered more support.
Annoyance: counterargument: “Most people are not interested enough in being rational for that suggestion to work; they’ll find a way around it, instead”
Me: disagreement with Annoyance—I was wrong
Annoyance: Pointed out my mistake
Me: “Oh, right”
Annoyance: “That is a common mistake, and one that I haven’t fully overcome yet, which means I still have trouble communicating with people who are not practicing rationality” (probably intended to make me feel better)
You: “I object to the above exchange; you’re just masking your prejudice against irrational people by refusing to communicate clearly with them”
Me: “Actually, it’s not a refusal, it’s just hard.”
You: “No, it’s not hard, and refusal to do it means that you don’t value social cohesion”, plus a personal example of it not being hard.
Me: “Okay, you got me. It’s only hard for some people.”
You: “Okay, it is hard for some people, but it’s still learnable, and harmful to the cause of rationality if you present yourself as a rationalist without having those skills.”
Me: “They’re good to learn, but I think you’re over-valuing them, and judging people for not sharing your values.”
You: “Why are you complaining about me being judgmental when you didn’t complain about Annoyance being judgmental?”, plus what appears to be some social-signaling stuff intended to indicate that I’m a bad person because I don’t care about social cohesion. I don’t know enough about what you mean by “social cohesion” to make sense of that part of the thread, but I suspect that your assertion that I don’t value it is correct.
Me: “Where was Annoyance judgmental? I didn’t see him being judgmental anywhere.”
This brings us to your comment directly above, which doesn’t condense well. You didn’t answer my question (and I don’t take this as proof that there is no instance of Annoyance being judgmental—I may have missed something somewhere—but I consider it pretty unlikely that you’d refuse to defend your assertion if there was a clear one, so it’s at least strong evidence that there isn’t), accused Annoyance of being irrational, and claimed that I should be accepting your claim even though you refuse to actually defend it.
I do agree with you that the skills involved in dealing with irrational people are useful to learn. But we obviously disagree in many, many ways on what kinds of support should be necessary for an argument to be taken seriously here.
Hmm, might you have been referring to this?
That’s not a judgment against less intelligent people; it’s a judgment against all of us, himself included. I recognize it as the more rational position: the situation I mentioned here is one that I’m failing at from a rationalist standpoint, and I’m not going to bother challenging his rational view on a rational forum when the best defense I can think of is “yes, but you shouldn’t say that to the muggles”.
Social cohesion is one of the enemies of rationality.
It’s not necessarily so, in that social cohesion isn’t always opposed to rationality, but it is incompatible with the mechanisms that bring rationality about and permit it to error-correct. It tends to reinforce error. When it happens to reinforce correctness, it’s not needed, and when it doesn’t, it makes it significantly harder to correct the errors.
“When it happens to reinforce correctness, it’s not needed”
Can you elaborate?
I’ll note that rationality isn’t an end. My ideal world state would involve a healthy serving of both rationality and social cohesion. There are many situations in which these forces work in tandem and many where they’re at odds.
A perfect example is this site. There are rules the community follows to maintain a certain level of social cohesion, which in turn aids us in the pursuit of rationality. Or are the rules not needed?
Why can’t it be?
How is that demonstrated?
It’s demonstrated by the fact that you can up/down vote and report anyone’s posts, and that you need a certain number of upvotes to write articles. This is a method of policing the discourse on the site so that social cohesion doesn’t break down to an extent which impairs our discussion. These mechanisms “reinforce correctness,” in your terms. So I’ll ask again, can we do away with them?
I don’t think humanity follows obviously from rationality, which is what I meant about rationality being a means rather than an end.
You’re assuming a fact not in evidence.
So you tell me what you think they’re for, then.
Those rules are rarely discussed outright, at least not comprehensively.
I’m pretty sure that if I started posting half of my comments in pig Latin or French or something, for no apparent reason, and refused to explain or stop, I’d be asked to leave fairly quickly, though. The rule that all communication will be in plain English unless there’s a reason for it not to be is one example. I’m sure there are others.
I disagree. It is rational to exploit interpersonal communication for clarity between persons and comfortable use. If the ‘language of rationality’ can’t be understood by the ‘irrational people’, it is rational to translate as best you can, and that can include utilizing societal norms. (For clarity and lubrication of the general process.)
Yes, I agree—my point was that the skill of translating is a difficult one to acquire, not that it’s irrational to acquire it.
Oh, I’m sorry I misunderstood you. Yeah, it can be tiring. I’m a fairly introverted person and need a good amount of downtime between socializing. I guess I was projecting a little—I used to think social norms were garbage and useless, until I realized that neglecting their utility was irrational and that it was primarily an emotional bias against them from never feeling like I ‘fit in’. Sometimes it feels like you never stop discovering unfortunate things about yourself...