Speaking of precommitment to being offended: would a perfectly rational B be offended at all by an incorrect guess? Granted, humans aren’t perfectly rational, nor do they exist in a vacuum.
I’m rarely genuinely offended by stereotyping—I prefer to just politely point out the mistake. Sometimes, though, I act as if I were offended, when it’s socially acceptable to take offense in the situation and I believe doing so furthers my goals.
I eventually came out with a contingent “yes” to this question, but it took me a while to get there, and I don’t entirely trust my reasoning.
As stated, I wasn’t sure how to go about answering that question.
But when A guesses about B, this reveals facts about A’s priors with respect to B. So this question seemed isomorphic to “Would B be offended by A believing certain things about B?” which seemed a little more accessible.
But I wasn’t exactly sure what “offended” means, at this level of description. The best unpacking I could come up with was that I’m offended by an expressed belief when I subconsciously or instinctively choose to signal my strong rejection of that belief.
If that’s true, then I can rephrase the question as “Would a perfectly rational B subconsciously or instinctively choose to signal strong rejection of certain beliefs about B?”
If B has saliently limited conscious processing ability (limited either by speed or capacity) then my answer is a contingent “yes.”
For example, a perfectly rational B might reason as follows: “Consider the proposition P1: ‘B is willing to cheat’. Within a community that lends weight to my signaling, there is value to my signaling a strong rejection of P1. Expressing offense at P1 signals that rejection. Expressing offense successfully depends on very rapid response; if I am seen as taking time to think about it first, my offense won’t signal as effectively. So I do better to not think about it first, but instead instinctively express offense without thinking. In other words, I do better to be offended by the suggestion of P1. OK, let me go implement that.”
In this example, B’s conscious processing speed is the salient limitation, but what matters here is the general condition: an unconscious response is worth more than a conscious one.
The specific value provided in this example is less important; there are lots of different equivalent examples.
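A minimal numeric sketch of that argument, with entirely made-up payoffs (only the ordering matters: an instinctive rejection signals more credibly than a deliberate one):

```python
# Toy model of the precommitment argument above. All numbers are invented;
# only the ordering matters: an instinctive (fast) rejection of P1 signals
# more credibly than a deliberate (slow) one.

CREDIBILITY = {"instinctive": 0.9, "deliberate": 0.4}  # how convincing each signal is
SIGNAL_VALUE = 10.0  # value of successfully signaling rejection of P1
POLICY_COST = 0.1    # small cost of precommitting (forgoing case-by-case thought)

def expected_value(policy: str) -> float:
    """Expected payoff to B of adopting the given offense policy."""
    value = CREDIBILITY[policy] * SIGNAL_VALUE
    if policy == "instinctive":
        value -= POLICY_COST
    return value

for policy in ("deliberate", "instinctive"):
    print(policy, expected_value(policy))
# deliberate 4.0, instinctive 8.9: under these assumptions, B does better
# to precommit to taking offense without thinking first.
```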
Depends on B’s preferences. If B is selfish or cares about their group disproportionately, then yeah, it may be perfectly rational to take offense.
I’m not an aggregate utilitarian—I believe it’s okay to be selfish at the expense of everyone else (within limits, e.g. I wouldn’t nuke New York to get an ice cream). But from the viewpoint of total utility, you may or may not be hurting the world overall by adding “friction”. I’m kind of on the fence about this—please don’t take my grandparent comment as claiming that taking offense is always negative-sum. I was just pointing out the possibility, but to get the right answer you have to imagine the two worlds and calculate.
Hmmm.
I wonder, then, if it’s logically coherent to talk about a world where nobody gets offended by stereotyping without changing something fundamental about humans. Are we sure we’ll be talking about a human world in that case?
Whaaaaa? Humans started getting offended by gender stereotypes at all only very recently, and not everywhere. If you’re an American, you need to realize that your country is far and away the #1 in the world when it comes to taking offense at stereotyping, and most of the world’s population considers that notion very weird even today.
I’m a Russian, actually.
Now I’m wondering if I got offended by the above post for a falling-back-to-defaults assumption… :)
Getting offended by conditionals is an idea I can’t quite get my head around, but okay. I’m Russian too, by the way. Hiyah :-)
Using that conditional pretty much signaled that your prior on “lucidfox is American” was higher than your prior on any other nationality, which reflects the stereotype of a typical LessWronger. Speech does not magically become immunized against offensiveness by including “if”. But if you are an evil idiot, you would not understand that :)
Hmm. If I don’t even need to do stuff to offend someone—if just possessing accurate priors is sufficient—then I guess I’ll go ahead and take the hit. If LW, of all places, begins to punish people for having accurate priors, then as a longtime contributor I feel a sense of duty to introduce a little friction. Throw some sand into the gears. And you should do the same, IMO.
The whole business about stereotypes is about possessing accurate priors, and behaving in a way that reveals them. Which you did.
Of course, I do not suggest that LW should punish people for having accurate priors, and I probably don’t disagree with you about offense (or offence?) in general. I actively try not to take offense at stereotypes. But a lot of users here do their best to conceal their priors, e.g. about the gender distribution among scientists, by diligently balancing the use of male and female characters in their stories. I have no strong opinion about that. I only wanted to emphasise that people do take offense when certain priors are revealed.
I had thought, perhaps idealistically, that you’d have to actually hurt someone. Like refuse to hire them because they have blue skin. If my behavior isn’t hurting anyone, then I object to your calling it a derogatory name (“stereotyping”). This also extends to the case where people choose, consciously or subconsciously, to get offended at my non-hurtful behavior just to teach the world a lesson or something. That’s about as well-founded as getting offended at gay people doing their gay thing.
I’m not sure what you think the difference between “people choose, consciously or subconsciously, to get offended” and “people get offended” is.
Regardless: some people get upset when they think I believe, based on their group membership G, that they have an attribute A. Sometimes this happens even when A is more common in G than in the general population.
Perhaps this is unreasonable when A is “is American” and G is “LessWrong”.
Perhaps it’s also unreasonable when A is “has a criminal record” and G is “American black man.”
But the fact remains that people do get upset by this sort of thing.
If we want to establish the explicit social norm on LessWrong that these sorts of assumptions are acceptable, that’s our choice, but let’s at least try not to be surprised when outsiders are upset by it.
Edit: Actually, on thinking about it, I realize I’m being a doofus. You almost undoubtedly meant, not inferring A from G when A is more common in G than in the general population, but inferring A from G when A is more common than -A in G, which is a far more unreasonable thing to be upset about. My apologies.
It’s very interesting that you made this mistake (I didn’t notice it until you pointed it out, and might well have made it myself).
It seems that the human mind doesn’t distinguish sufficiently well between the two claims: “blacks are more likely than non-blacks to have a criminal record” versus “blacks are more likely than not to have a criminal record”. Maybe by default the non-verbal part of the brain stores the simpler version (the second one), and uses it to constrain expectations and behavior.
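To see how far apart the two readings can be, here is a toy calculation with invented numbers, in which the first claim holds and the second fails:

```python
# Invented numbers: 10% of the population is in group G; attribute A is a
# minority trait everywhere, but relatively more common inside G.
p_G = 0.10
p_A_given_G = 0.30      # P(A | G)
p_A_given_notG = 0.05   # P(A | not-G)

# P(A) in the general population, by the law of total probability:
p_A = p_A_given_G * p_G + p_A_given_notG * (1 - p_G)  # = 0.075

print(p_A_given_G > p_A_given_notG)  # True:  A is more common in G than outside it
print(p_A_given_G > p_A)             # True:  ...and than in the general population
print(p_A_given_G > 0.5)             # False: a member of G is still probably not-A
```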
I don’t think it’s a question of what gets stored so much as what gets activated.
That is, suppose I have three nodes: N1 “represents” inferring A from G when A is more common in G than in the general population, N2 represents inferring A from G when A is more common than -A in G, and N3 represents the word “stereotyping.” If my N1->N3 and N2->N3 links are stronger than N1’s and N2’s links to any other word, and the N3->N1 link is much stronger than the N3->N2 link, then lexical operations are going to make this sort of mistake: I might start out thinking about N2, decide to talk about it, therefore use the word “stereotyping,” which in turn strongly activates N1, which displaces N2.
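A toy simulation of that story (the graph, weights, and decay rate are all invented; only the asymmetry between the N3->N1 and N3->N2 links matters):

```python
# Toy spreading-activation sketch of the N1/N2/N3 story above.
links = {
    ("N1", "N3"): 0.8,  # "A more common in G than in the population" -> "stereotyping"
    ("N2", "N3"): 0.8,  # "A more common than -A in G" -> "stereotyping"
    ("N3", "N1"): 0.9,  # the word strongly re-activates N1...
    ("N3", "N2"): 0.2,  # ...but only weakly re-activates N2
}
activation = {"N1": 0.0, "N2": 1.0, "N3": 0.0}  # I start out thinking about N2
DECAY = 0.5  # unrefreshed activation fades each step

def step(source: str) -> None:
    """Decay every node except `source`, then spread activation out of it."""
    for node in activation:
        if node != source:
            activation[node] *= DECAY
    for (a, b), weight in links.items():
        if a == source:
            activation[b] = max(activation[b], weight * activation[a])

step("N2")  # deciding to talk about N2 activates the word "stereotyping" (N3)
step("N3")  # using the word feeds back: N1 (0.72) now outweighs N2 (0.5),
print(activation)  # i.e., the wrong concept has displaced the one I meant
```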
This is why having distinct words for minor variations in meaning can be awfully useful, sometimes. I’m willing to bet that if we agreed to use different words for N1 and N2, and we had enough conversations about stereotyping to reinforce that agreement, we’d find this error far less tempting, easier to notice, and easier to correct.
See the sequence on A Human’s Guide to Words for more on this subject.
Cool! I’d read at least most of these, and the ideas aren’t new, but I hadn’t realized they were all linked in one place. Thanks for the pointer.
What strikes me here is the human tendency to mark one option as the default and the other as a special case.
However, it makes me wonder: if the person making the judgement belongs to the category commonly considered “a special case”, will they mentally mark either category as the default? Judging by myself (yes, yes, generalizing from one example): among the social partitionings whose intersection defines me, I tend to skip the ones where I’m in the majority category (for example, white, or specifically on LW, atheist), and in cases where I’m a minority, treat neither option as the implicit default.
Efficiency of encoding, perhaps?
As I recall, for some categories this turns out, surprisingly, not to be the case. Women are as likely as men to consider a person of unspecified gender male, for example, and blacks are as likely as whites to consider a person of unspecified color white… at least, in some contexts, for some questions, etc. (I would very much expect this to change radically depending on, for example, where the study is being performed; also I would expect it to be more true of implicit association tests than explicit ones.)
I have no citations, though, and could easily be misremembering (or remembering inconclusive studies).
Strictly speaking, you should adjust your probability estimate of the person having attribute A either way. How you then act depends on the consequences of making either error; e.g., the consequences of falsely assuming someone isn’t a violent criminal can be more serious than the reverse.
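For instance (with an invented posterior probability and invented loss values; only the asymmetry between the two errors matters), the standard expected-loss calculation looks like this:

```python
# Minimal expected-loss sketch of the point above. The probability and the
# losses are made up; only the asymmetry between the two errors matters.
p_violent = 0.02  # posterior probability that the person is a violent criminal

# loss[action][truth]: cost of taking `action` when `truth` is the case
loss = {
    "be_wary": {"violent": 0.0,   "harmless": 1.0},  # mild social cost if wrong
    "trust":   {"violent": 100.0, "harmless": 0.0},  # possibly serious harm if wrong
}

def expected_loss(action: str) -> float:
    branch = loss[action]
    return p_violent * branch["violent"] + (1 - p_violent) * branch["harmless"]

for action in loss:
    print(action, expected_loss(action))
# be_wary 0.98 vs. trust 2.0: even a small probability of A can justify
# caution when the two errors have very different costs.
```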
Yes; it would have been more precise to say “inferring an inappropriately high probability of A from G”, rather than “inferring A from G.”
And you’re right that what I do based on my derived probability of A is independent of how I derived that probability, as long as I’m deriving it correctly. (This is related to cousin_it’s original complaint that holding unflattering beliefs about people ought not be labeled “stereotyping” when I don’t actually hurt anyone based on those beliefs, so in some sense we’ve closed a loop here.)
Thank you for pointing out the distinction between the two kinds of stereotyping—I didn’t see it quite so clearly before.
What does “actually hurt” cover? Would it include not feeling comfortable around some people, and therefore being quietly non-friendly towards them?
This sounds very like you are defining your behaviour as non-hurtful such that anyone objecting to it is then axiomatically in the wrong. If that’s not what you meant, do please elaborate.
If there’s no substance to their objections beyond “I am offended at this general pattern of behavior”, then it sounds like they are in the wrong, no? When a commoner crosses a noble’s path without proper kowtowing, the noble may feel very offended indeed, and even have the commoner whipped; but in our enlightened times we know better than to agree with the noble, because the commoner hasn’t hurt the noble in any way. That’s the moral standard I’m applying here.
Also consider the analogy with gays. What is it that tells you people shouldn’t get offended by others’ homosexuality? Would you be sympathetic to someone claiming gays should change their behavior in public because he’s genuinely hurt by it, or would you consider that person “axiomatically in the wrong”? If the latter, didn’t you just apply an instance of the general standard that actually non-hurtful behavior is okay even though some people may complain—and even be sincere in their complaints?
I didn’t mean the word “stereotype” as derogatory, but you are possibly right that it bears negative connotations, so let’s use another one. “Accurate priors”, maybe?
I thought that people usually get offended by words, and less often by deeds. “Offended” associates in my mind with Muslims burning Danish flags, or with men enraged after being called cowards, rather than with unsuccessful job applicants. The applicant may be disappointed, angry, sad, maybe desperate, but I would be surprised if he said he was offended.
But let’s not make this a dispute about semantics. I suppose there is no real disagreement.
Well, the point of my including an emoticon there was that my feelings on this are confused at best. It more likely indicates that I’m upset with the underlying cause (ideally, I would prefer Less Wrong to be a truly international website where the prior for any “user X is nationality Y” is low, and that’s something I see as worth striving for) than with the specific hypothesis that I’m American, which can easily be discarded before it’s used to draw any conclusions.