Don’t Get Offended
Related to: Politics is the Mind-Killer, Keep Your Identity Small
Followed By: How to Not Get Offended
One oft-underestimated threat to epistemic rationality is getting offended. While getting offended by something sometimes feels good and can help you assert moral superiority, in most cases it doesn’t help you figure out what the world looks like. In fact, getting offended usually makes it harder to figure out what the world looks like, since it means you won’t be evaluating evidence very well. In Politics is the Mind-Killer, Eliezer writes that “people who would be level-headed about evenhandedly weighing all sides of an issue in their professional life as scientists, can suddenly turn into slogan-chanting zombies when there’s a Blue or Green position on an issue.” Don’t let yourself become one of those zombies—all of your skills, training, and useful habits can be shut down when your brain kicks into offended mode!
One might point out that getting offended is a two-way street and that it might be more appropriate to make a post called “Don’t Be Offensive.” That feels like a just thing to say—as if you are targeting the aggressor rather than the victim. And on a certain level, it’s true—you shouldn’t try to offend people, and if you do in the course of a normal conversation it’s probably your fault. But you can’t always rely on others around you being able to avoid doing this. After all, what’s offensive to one person may not be so to another, and they may end up offending you by mistake. And even in those unpleasant cases when you are interacting with people who are deliberately trying to offend you, isn’t staying calm desirable anyway?
The other problem I have with the concept of being offended as victimization is that, when you find yourself getting offended, you may be a victim, but you’re being victimized by yourself. Again, that’s not to say that offending people on purpose is acceptable—it obviously isn’t. But you’re the one who gets to decide whether or not to be offended by something. If you find yourself getting offended by things as an automatic reaction, you should seriously evaluate why that is your response.
There is nothing inherent in a set of words that makes them offensive or inoffensive—your reaction is an internal, personal process. I’ve seen some people stay cool in the face of others literally screaming racial slurs in their faces and I’ve seen other people get offended by the slightest implication or slip of the tongue. What type of reaction you have is largely up to you, and if you don’t like your current reactions you can train better ones—this is a core principle of the extremely useful philosophy known as Stoicism.
Of course, one (perhaps Robin Hanson) might also point out that getting offended can be socially useful. While true (quickly responding in an offended fashion can be a strong signal of your commitment to group identity and values[1]), that doesn’t really relate to what this post is talking about. This post is talking about the best way to acquire correct beliefs, not the best way to manipulate people. And while getting offended can be a very effective way to manipulate people—and hence a tactic that is unfortunately often reinforced—it is usually actively detrimental for acquiring correct beliefs. Besides, the signalling value of offense should be no excuse for not knowing how not to be offended. After all, if you find it socially necessary to pretend that you are offended, doing so is not exactly difficult.
Personally, I have found that the cognitive effort required to build a habit of not getting offended pays immense dividends. Getting offended tends to shut down other mental processes and constrain you in ways that are often undesirable. In many situations, misunderstandings and arguments can be diminished or avoided completely if one is unwilling to become offended and practiced in the art of avoiding offense. Further, some of those situations are ones in which thinking clearly is very important indeed! All in all, while getting offended does often feel good (in a certain crude way), it is a reaction that I have no regrets about relinquishing.
[1] In Keep Your Identity Small, Paul Graham rightly points out that one way to prevent yourself from getting offended is to let as few things into your identity as possible.
A corollary:
If someone asks you to do or say things a little differently, in order to not scare, upset, or worry them … don’t get offended.
A lot of people turn into “slogan-chanting zombies” when someone informs them that they’ve scared, upset, or worried someone, when they had no intention to do so. They react with protestations of innocence; with accusations that the unhappy person is “oversensitive”, or that their plea is a cynical ploy for a political cult; with baseless cries of censorship; or with biting, dismissive mockery.
This is unfortunate … and it’s bad rationality, too.
If someone goes to the trouble of informing you that the way you’re acting or speaking leaves them feeling unsafe, unhappy, or uncomfortable around you … they are giving you information. If you are interested in not having those effects on others, getting offended and defensive when someone informs you of them is a very bad idea.
It’s a fact of the world that some ways of expressing yourself will come across as threatening, triggering, or creepifying to others. This is not because those people want to feel threatened, triggered, or creepified, or even necessarily that they have unusual predilections to those epistemic and emotional reactions. It also doesn’t mean that you meant to do those things, or even that they believe you meant to do them. After all, if I thought you wanted to upset me, I sure wouldn’t bother telling you that you had done so; I’d just stay away from you.
We’re living in a world full of humans, not impersonal data-processors, and getting along with those humans is a vital skill. It is useful to your goals to obtain accurate information about their emotional and epistemic states (“How do they feel? What do they believe?”), especially their feelings and beliefs toward you. “Bad news is good news” — if your words or actions are having negative effects on others, don’t push people away for informing you of that fact … or else they might stop telling you, or stop talking to you at all.
(Also, for any incipient-FAI readers in the audience: Having emotional states and caring about others’ emotional states, is something that humans value. Even though they are inconvenient sometimes. Please do not rewrite us to not be offended at anything at all, nor to not care if we offend others.)
If you find yourself responding with cached thoughts or catch-phrases such as “Everything will offend someone”, or “Some people just want to be offended”, or “If we let ourselves care about offending those people, next thing they’ll be taking away our freedom of speech” … yeah, that’s the defensiveness I’m talking about. Stifle it; it’s weakening you.
Intentionally being offensive (e.g. in protest) is sometimes worthwhile; and sometimes you simply can’t do certain vital tasks (e.g. teaching biology) without offending someone (e.g. creationists). But finding out that you have been accidentally offensive is nothing worth defending or rationalizing — it’s something to say “oops, sorry!” and update about.
Hmm, why does this sound familiar? =]
Also, I just want to point out that the best way I can think of testing whether someone wants to be offended is by apologizing and not doing it again … and then seeing if they’re still following me around and pointing out how I offended them that one time.
Upvoted for proposing a useful test.
You should be happy that they are helping you create a better argument for this type of reader.
Of course there is a difference between “saying things differently” and “not saying things”. Sometimes the offensive thing is not how you present the information, but the information itself. For example, you can speak about atheism without using ad-hominem arguments about the Pope, or without mentioning child abuse in churches. Those parts are not the core of your argument, and are actively harmful if your goal is to make an “Atheism 101” presentation for religious people. On the other hand, if someone is offended by the mere fact that someone could not believe in their God, there are limits to what you can do about it. You could make the argument longer and slower to reduce the impact of the shock; use an analogy about Christians not believing in Hindu deities; perhaps quote some important religious guy saying something tolerant about nonbelievers… but in the end, you are going to say that nonbelievers exist, without immediately adding that they should be killed or converted. And someone could be offended by that, too.
Also, sometimes there are limited-resources considerations. Sometimes your argument is inoffensive for 90% of your audience, and the harm done by offending the remaining 10% may be smaller than either the cost of improving your argument for them or the cost of not presenting the argument. -- On the other hand, we should be suspicious of ourselves when we have this impression, because we are likely to overestimate the positive impact of presenting the imperfect argument, and underestimate the offence caused.
FYI, this pattern matches on “disagreeing with my complaint makes you part of the problem,” at least to me, with all the problems that implies. The first two statements in particular are quite true, although insufficient in themselves to defeat your point.
For the record, I don’t think that’s how you meant it.
This, on the other hand, I wholly agree with. Getting offended in such a case is silly. I think it may arise from the perception that it grants others power over you if you have to change your behavior to suit them. I think the cure is to realize that you don’t have to change anything—but might choose to based on the extra information they were kind enough to give you.
I for one wish people would tell me about such impressions more often. I’ve alienated a few people in my time because I was doing something irritating, lacked the social skills to realize it, and was never informed. (and therefore could not correct the problem)
Interesting. I see what you mean, but I don’t see a clearer way of pointing to a particular cached-thought reaction that I anticipate some readers having. Any ideas?
Honestly, I’m not sure. Your intent seemed to be to pre-empt certain arguments you believe to be bogus. Doing that without appearing to discredit dissent in itself, may be difficult.
“Catch-phrases such as....” implies that all similar arguments are presumed bogus, and ”...yeah, that’s the defensiveness” appears to discredit them based on the mindset of the arguer rather than the merit of the argument.
Listing specific arguments along with why each of them is wrong (edit:or insufficient to reach conclusion) probably would not have given me the same impression. I am thinking of some religious figure who, writing to argue for God’s existence, gave a series of statements along the lines of “these are the objections to my argument that are known to me; I answer each of them thusly....” I think it might have been Aquinas. I remember being impressed by the honesty of the approach even though I don’t believe in the conclusion.
(I am not sure who downvoted you or why. Responding to honest criticism with a request for suggestions seems laudable to me.)
On reflection, I think this statement specifically is my problem, and not because of what it’s saying about the argument, but about the arguer. My reaction is something like “well, damn, now if I object I’ll appear to be an unnecessarily defensive jerk, even if I’m right.”
It feels like “God will send you to hell if you question his existence”; where that one exacts penalties for the act of figuring out if there really are penalties, yours socially censures the act of questioning the justification of censure. Such double binds always strike me as intellectually dishonest.
Again, I don’t think you actually meant it that way; it just pattern matched on certain similar arguments (which I’ll leave unstated to avoid a mindkiller subthread) by people who actually do mean it that way.
The problem with caching is just that sometimes the cache falls out of sync; you want to evaluate some complex problem f(x), and if you’ve previously evaluated some similar f(y) it’s faster to evaluate “if (resembles_y(x)) then cached_f(y) else f(x)”, but if resembles_y(x) isn’t precise enough, then you’ve overgeneralized.
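(A minimal Python sketch of that approximate-match caching pattern, purely for illustration; the names “resembles”, “f”, and “cached_f” are hypothetical stand-ins rather than anything from the comment above.)

```python
# Rough sketch only: an approximate-match cache. If the similarity test is too
# loose, stale answers get reused for inputs that merely resemble earlier ones,
# which is the "overgeneralized" failure mode described above.

cache = {}  # previously evaluated inputs y mapped to their computed f(y)

def f(x):
    # Stand-in for some expensive evaluation.
    return x ** 2

def resembles(x, y, tolerance=0.1):
    # Crude similarity test; tightening the tolerance trades reuse for accuracy.
    return abs(x - y) <= tolerance

def cached_f(x):
    for y, fy in cache.items():
        if resembles(x, y):
            return fy  # reuse the cached answer for a "similar" earlier input
    result = f(x)
    cache[x] = result
    return result
```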
But the correction “Stifle it” doesn’t seem to be pinpoint-precise either, does it? It’s an overgeneralization that just generalizes to the opposite conclusion.
If you don’t want people to overgeneralize, then you have to be specific—“in cases where A, B, or C hold, then you want to avoid giving offense; if D, E, or F hold then giving offense may have higher utility”, etc—and just trying to begin nailing down this kind of precision is likely to require hundreds of comments, not just a couple sentences.
Agreed on basically all points. Did you feel this post was attempting to defend or rationalize offending people?
No. I did think it was likely to be used as a source of rationalizations by people who do offend people, though, without some caveat that, well, offending people is ceteris paribus bad; and that a lot of common responses people have (especially online) to complaints of offense are actually rather weak rationalizations.
My response was intended in the spirit of Eliezer’s “Knowing About Biases Can Hurt People” — that this sort of thing may unintentionally provide ammunition for bad behavior.
I used to feel that getting offended was useless and counterproductive, but a friend pointed out that if people are not treating you with respect, that can be a genuinely problematic situation.
Wei Dai suggests that offense is experienced when people feel they are being treated as being low status. So if you feel offended, a good first question might be “do I care that this person is treating me as low status?” If there is no one else around, and you don’t expect to see the person again, then your answer may be no. If there are others around, or you expect to see the person again, then things may be more difficult. Yes, you can politely ask people to be more considerate of you, but that’s not exactly a high-status move.
So, I don’t feel that “never act offended” passes the “rationalists should win” test as a group norm. It might actually be good that “That’s offensive” represents a high-status way to say “You’re treating me as low status. Stop.”
It might even be worthwhile to expand the concept of offense. Currently it’s only acceptable to be offended when people treat you as low status in certain narrow ways. If someone says something nasty about your nose, “That’s offensive” is not nearly as high-status a response as it would be if someone said something about your race. (Theory: “That’s racist” works as a high-status response because you’re implicitly invoking the coalition of all the people who think racist statements are bad.) But nasty statements about your nose can still be pretty nasty.
To expand the concept of offense to all nasty statements, you might have to create a widespread social norm against nasty statements in general, to give people a coalition to invoke. Though, perhaps “Gee, you sound like someone who has a lot of friends” or similar would act as an effective stand-in.
(As you point out, it’s not too hard to fake offense, so we don’t necessarily disagree on anything.)
I would generalize this and say that offense is experienced when people feel they are being treated as being lower status than they feel they are/deserve.
The reason for the generalization: some people get offended by just about everything, it seems, and one way to explain it is a blatant grab for status. It’s not that they think they’re being treated as low status in an absolute sense necessarily, they just think they should be treated as higher status relative to however they’re being treated.
I think that’s much closer (and upvoted), but you don’t need to invoke such an extreme example to demonstrate it; you just need to notice that offense thresholds are different in different contexts. Treating your boss as if she’s your drinking buddy is likely to provoke offense. So’s treating your drinking buddy as if he’s a child. Yet you’re generally safe treating boss as boss, buddy as buddy, and child as child—in other words, giving people the status they contextually expect.
I agree
Machiavelli wrote in “The Prince” about the similar dilemma of advice. If you let everyone give you advice, you seem like a pushover, but if you don’t take any advice, you’ll probably do something stupid. His recommendation was to have a circle of people who you take advice from, and to ignore everyone else.
A similar system could work well for offense. If you want to be high-status, when most people lower your status, get offended. But for a select few (probably the people who you work with when you’re seeking truth in some form or another) practice never taking offense, as the original post suggests. Ideally, these people would know they could offend you, so they wouldn’t censor potentially helpful ideas.
If someone lowers your status and you act offended, then it makes you look weak because it confirms that they’ve successfully hurt you. What you could do in those situations is to offer them advice on how not to offend you—promote their communication skills. That way you reassert your authority by determining the standard of dialogue you allow around you and potentially improve them as a resource.
That said. There’s a difference between discounting a certain noise level of offence—I’ll try not to be offended if I believe you’re honestly offering criticism and don’t mean to offend me. And discounting all offence.
I would suggest that the former is vastly preferable. If someone rocks up to you and starts tearing a strip off you, it seems worth getting offended over.
I’m not sure I buy into Machiavelli’s idea that accepting advice lowers your status though. Well, not entirely anyway.
In old Chinese courts, courtiers used to advise their Emperors by means of heavenly prophecy. In France (I forget which king’s reign it was), a king was famous for having two people on opposing sides of an issue debate it and then saying ‘we shall see’, and that was how he got his advice; he was one of the most powerful kings, at least in political terms, that France ever had IIRC.
So there are ways of mitigating it without having to shut yourself off completely from advice, as Machiavelli supposes.
And then a lot of it depends on how you react to the advice, and how it’s given—even if it’s given directly in person. If someone comes up to you and is all pro-social: “Hey we could do this too and it might work even better!” “Fantastic, why don’t you come on board and head up that part of it?” That’s potentially something that’s been good for both of you and made more people want to work with you and share their ideas—and that gives you more power, not less.
Working with others is a very complicated sort of thing. I think the most important lesson of it may well be that power tends to be lent by others, for their own purposes. If you have ‘power’ over a bunch of people but none of them want to work with you your effective power is often very close to 0.
I don’t think Machiavelli would actually disagree with you to any large extent (although he does not consider delegation here). He writes:
There’s also an OB discussion about why taking advice might lower your status.
If you take that Machiavellian idea one step further… How about shifting the view of who to take advice from. People can only make you more like themselves… so only take advice, in any particular area, from someone who has ALREADY ACHIEVED the results you are seeking… otherwise, what is the point?
Also, though, your boss should not ‘become offended’ if you treated them as a drinking buddy… that would be unprofessional on HIS part… he should have a discussion with you, in a rational manner, about the roles that you both are to play in your working environment. I do not agree that people have certain areas where they would ‘benefit’ from being offended. Being offended often simply validates the person giving offense, no matter what the offended person retorts with. If someone is rude to you in some way, and then you are rude back, it just makes the first rude person feel justified in their being rude to you in the first place.
If someone is INTENDING to be insulting to you, then rising to the occasion only proves to them that they have power over you. If they were NOT intending to be insulting to you, then all you have done is proven how much of a boob you are.
Learning from other people’s mistakes.
Feeling offended is a mental lever that causes status-restoring behavior. If you can recognize when you need to restore your status and do so without feeling offended, it’s simply better for you.
The best approach is to be conscious of what will advance your goals and act accordingly.
You may think a cop is not recognizing your status but you may be best served by letting it pass and getting out of the situation more quickly.
That’s a good point. I can easily imagine places where the nose comment would be cause for justified offense though. Saying it to anyone in a context where high status treatment is expected—in a professional context, or to an elder.
Does any of that help locate truth in the search space, other than maneuvering into a position of social power?
No, but social power + respect can be useful for achieving your goals (especially if one of your goals is social power and respect, which seem to be true for a lot of people).
Yes, but on lesswrong, at least, we’ve been exposed to enough social psychology to understand why that’s a dangerous intrinsic goal to have. It’s certainly seductive, but aren’t there better things to do with increased agency than to seek to dominate other potential agents?
Last I checked (which was admittedly a while ago), there are a decent number of Less Wrong users who act obnoxiously high status in real life (including some who are quite prominent). I’d love to have the egalitarian norms you describe, but I think first we’d have to convince them to stop.
This may not be trivial. I’ve noticed that my high status behaviors often seem pretty instinctual, and I’ve also noticed that I have a fair amount of mental resistance to giving up status even if I’d like to in theory (ex.: apologizing).
The way you worded this makes it sound as if there are a few people ruining it for everyone. If this is actually the case, then the solution is, when these people begin acting obnoxiously high status, say “You’re being obnoxious. Stop.” Bystander effect, etc. If you try this and it doesn’t work, let me know so I can update.
Without identifying the people involved, can you describe in more concrete terms the behaviours you are talking about?
Name three?
You are asking John to do something that is clearly unwise for him to do in a form typically used with the connotation that if the person does not comply it is because they can not. This is disingenuous.
Good point, though he might reply to him in a private message.
If John wants a problem to stop, it would be nice to first identify more clearly the source of the problem. Otherwise he’s just doing the LW analogue of vaguebooking.
It’s not obvious to me that “the source of the problem” and “the people most saliently exhibiting the symptoms” are the same thing. It’s also not obvious to me that “the source of the problem” necessarily refers to any particular set of individuals.
Hahahahahahaha. I’m going to get negative points for this, but it’s worth it to me. I so enjoy pointing out Karma. I love the irony.
you
I should point out that I obviously have that problem as well, given this response.
On the basis of this comment, I have recognized this user as a possible troll and may delete comments from them (downvoted or otherwise) which seem to be attention-seeking.
[I.e. our local equivalent of “User was banned for this comment.”]
Have you ever met me in real life?
The utility function is not up for grabs.
In the abstract, sure.
But we exist within a particular social context here—specifically, people supposedly come to this website, and participate in this forum, to attempt to be less wrong.
Instead, it appears that many people are engaging in (as someone else put it) obnoxious status displays, playing “look how edgy and selfish and status-motivated I am”, rather than actually attempting to aid each other in being less wrong.
And that’s fine if that’s an indicated maximum of your utility function, but I would think that other people would act to collectively punish that behavior rather than reward it, lest we turn into the kind of obnoxious circle-jerk/dickwaving contest that most of the internet tends to devolve into.
That is why status is a dangerous goal to pursue—because it tends to produce an affective death-spiral until all other goals become subordinate to gaining status.
Agreed—Less Wrong is a particularly bad place to pursue the goal of social power.
I also agree, especially if one is trying to look high-status to the average person in the general population. Science and rationality are still looked at as nerdy, unfortunately.
Oddly, I tend to feel like having high status among nerdy types is the only time it actually “counts.” I get a rush when something I say here or within other nerd and geek communities is well received, or if I’m treated as an authority on X, etc.; whereas, say, people calling me “sir” or otherwise treating me as higher-status at work makes me extremely uncomfortable. So do compliments from normals in general.
[Edit: “Status granted by a tribe I don’t identify with feels like a status hit instead” might be a good way to put it.]
Well, why would I care what status people who don’t regularly non-trivially interact with me assign to me?
Same here, but I think the main reason for that is that it makes me feel ‘old’. (Teenagers and people in their early twenties aren’t usually treated that way (no matter how cool they are in the eyes of their peers), and I don’t exactly revel in being reminded that I’m no longer one.) ETA: I do like the fact that I’m now economically independent, though.
(Edited to add scare quotes around “old”, lest thirtysomethings resent me, as they usually do when I say I feel old.)
It is totally okay to want social power and respect. You want social power and respect. If you believe that you don’t want social power and respect, then you will be motivated to lie to yourself about the actual causes of your actions.
“better” according to whom? The only one who can set a different standard for yourself is yourself, yet if you already do have that “dangerous intrinsic goal”, then, well, you already do have that goal (yay tautology). You can weigh it against other goals and duly modify it, but presumably if other goals outweighed your need to dominate, that would already have happened. Since it has not (for those who have that goal), that is reason to surmise that from the point of view of those agents there isn’t in fact anything better to do, even if they’d like to think that they think there was.
Not if it is, in fact, your intrinsic goal!
Of course there are occasions where having goals makes it less likely to actualize them, and so incentives exist roughly isomorphic to the ones which collapse CDT to TDT. The advice in How To Win Friends and Influence People is of this type—it advises you that in order to achieve social dominance and manipulate others you should become genuinely interested in them. But this is orthogonal to mindkilling.
Then maybe this is a deontological question rather than an ontological one. I would very much appreciate any help understanding why people seek to dominate other potential agents as an intrinsic goal.
If I was particularly interested in the question why people have the terminal values they have, I’d look into evolutionary psychology (start from Thou Art Godshatter) -- but if one doesn’t clearly keep in mind the evolutionary-cognitive boundary, or the is-ought distinction (committing the naturalistic fallacy), then one will risk being mind-killed by evo-psy (in a way similar to this—witness the dicks on the Internet who use evo-psy as a weapon against feminism), and if one does keep these distinctions in mind, then that question may become much less interesting.
meta: I find it interesting that your post got voted down.
I didn’t downvote but I was ambivalent. The main point was good but that was offset by the unnecessary inflammatory crap that was tacked on.
What was inflammatory? Also: I find it wryly interesting that a post with a good point and informative links would be judged inflammatory in an article about not getting offended.
I insulted anti-feminist amateur evolutionary psychologists.
;-)
(Actually, I hadn’t noticed that, but that’s a great reason (excuse?) to not edit my comment.)
I think both you and ialdabaoth have missed the point of the post. It is definitely not an invitation to be more inflammatory. It encourages not taking offense because taking offense has negative side effects. To the extent those side effects matter provoking them in others would also seem undesirable.
What do you mean “use as a weapon against” and why is it obviously a bad thing? Would you say it’s a fair complaint against EY that he uses Bayesianism as a “weapon against” religion?
I believe what army means is that some people mistakenly use evo-psy to make claims along the lines of “we have evolved to have [some characteristic], therefore it is morally right for us to have [aforementioned characteristic]”.
I’d add that many people appear to exercise motivated cognition in their use of ev-psych explanations; they want to justify a particular conclusion, so they write the bottom line and craft an argument from evolutionary psychology to work their way down to it. Although it would be hard for me to recall a precise example off the top of my head, I’ve certainly seen cases where people used evolutionary just-so stories to justify a sexual status quo, where I could easily see ways that the argument could have led to a completely different conclusion.
It’s not evolutionary psychology so much, but I’ve seen quite a volume of evolutionary just-so stories in the field of diet and nutrition: everyone from raw vegans to proponents of diets based on meat and animal fats seems eager to justify their findings by reference to the EEA. Generally, the more vegetarian a diet, the more its proponents will focus on our living hominid relatives; the more carnivorous, the more the focus is on the recent evolutionary environment.
Which aren’t exactly vegetarian.
Which serves as a reminder that those who tend to craft evolutionary arguments are not only those who can do so accurately.
Remember, it all adds up to normality. Thus we should not be surprised that the conclusions of evo-psych agree with the traditional ideas.
When people claim that, their final argument tends to be a lot less convincing and involve a lot more mental gymnastics than the original.
We should expect a perfected biology to predict our cultural data, not to agree with our cultural beliefs. ‘Normality’ doesn’t mean our expectations. ‘Normality’ doesn’t mean common sense or folk wisdom. It means our actual experiences. See Living in Many Worlds.
How strong is that tendency? Try to quantify it. Then test the explanations where possible, after writing down your prediction. Did the first get an unfair advantage from status quo bias? Did the rivals seem gerrymandered and inelegant because reality is complicated? Did any of the theories tend to turn out to be correct?
Actually, yes it does. The results of the theory should agree with our common sense and folk wisdom when dealing with situations on ordinary human scales (or whatever the appropriate analog of “ordinary human scales” is).
You’re making two claims here. First, you’re making a substantive claim about the general reliability of human intuitions and cultural institutions when it comes to the human realm. Second, you’re making a semantic claim about what ‘It all adds up to normality’ means.
The former doctrine would be extremely difficult to substantiate. What evidence do you have to back it up? And the latter claim is clearly not right in any sense this community uses the term, as the LW posts about Egan’s Law speak of the recreation of the ordinary world of perception, not of the confirmation of folk wisdom or tradition. The LessWrong Wiki explicitly speaks of normality as ‘observed reality’, not as our body of folk theory. Which is a good thing, since otherwise Egan’s Law would directly contradict the principle “Think Like Reality”:
“Quantum physics is not “weird”. You are weird. You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is your problem, not reality’s, and you are the one who needs to change.
“Human intuitions were produced by evolution and evolution is a hack.”
Indeed, I would say that this claim, that our natural intuitions and common sense and folk wisdom and traditions are wont to be systematically mistaken, is one of the most foundational LessWrong claims. It lies at the very core of the utility of the heuristics/biases literature, which is a laundry list of ways we systematically misconstrue or imperfectly construe the truth. LessWrong is about not trusting your intuitions and cultural traditions (except where they have already been independently confirmed, or where the cost of investigating them exceeds the expected benefit of bothering to confirm them—and in neither case is this concession an affirmation of any intrinsic trustworthiness on the part of ‘common sense’ or ‘intuition’ or ‘folk wisdom’ or ‘tradition’).
It is true that common sense comes from somewhere, and that the existence of intuitions and cultural assumptions is a part of ‘normality’, is part of what a theory must ultimately account for and predict. But the truth of those beliefs is not a part of ‘normality’, is not a part of the data, the explanandum. They may or may not turn out to be correct; but there is no Bayesian reason to think that they must turn out right in the end, or even that they must turn out to at all resemble the right answer.
First let me repeat part of my comment with the phrase you seem to have missed in bold:
In particular had Newton claimed that apples fall up, that would have been reason to reject his theory.
That nevertheless works, and frequently works better than what our System II (conscious reasoning)-based theories can do. And remember our conscious reasoning is itself also a product of evolution.
A program to compute the area of a circle that uses pi=3.14 will be systematically mistaken; it is also likely to be sufficiently close for all practical purposes.
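(A quick worked check of that point, assuming an arbitrary example radius of 5:)

```python
import math

r = 5.0  # arbitrary example radius
approx = 3.14 * r ** 2
exact = math.pi * r ** 2
print(approx, exact)                # 78.5 vs ~78.54
print(abs(approx - exact) / exact)  # relative error of roughly 0.05%
```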
Your intuitions and cultural traditions are evidence. As for possessing “intrinsic trustworthiness” I have no idea what you mean by that phrase.
There is a Bayesian reason to think that our intuitions will in most cases resemble the right answer, at least in the sense that GR resembles Newtonian mechanics.
But this just isn’t so. Humans get things wrong about the human realm all the time, make false generalizations and trust deeply erroneous intuitions and aphorisms every day of their lives. ‘It all adds up to normality’ places a hard constraint on all reasonable theories: They must reproduce exactly the data of ordinary life. In contrast, what you mean by ‘It all adds up to normality’ seems to be more like ‘Our naive beliefs are generally right!’ The former claim is The Law (specifically, Egan’s Law); the latter is a bit of statistical speculation, seems in tension with the historical record and the contemporary psychology literature, and even if not initially implausible would still need a lot of support before it could be treated as established Fact. So conflating these two claims is singularly dangerous and misleading.
You’re conflating three different claims.
Egan’s Law: The correct model of the world must yield the actual data/evidence we observe.
We should generally expect our traditions, intuitions, and folk theories to be correct in their human-scale claims.
Our biases are severe, but not cripplingly so, and they are quite handy given our evolutionary history and resource constraints.
‘It all adds up to normality’ means 1, not 2. And the claim I was criticizing is 2, not 3 (the one you’re now defending).
The evidence shows that in a great many cases, our intuitions and traditions aren’t just useful approximations of the truth, like Newtonian physics; they’re completely off-base. A lot of folk wisdom asserts just the opposite of the truth, not only about metaphysics but about ordinary human history, psychology, and society. So if ‘it all adds up to normality’ means ‘it all (in the human realm) confirms our folk expectations and intuitions’, then ‘it all adds up to normality’ is false. (But, as noted, this isn’t what ‘normality’ means here.)
Sure, they’re evidence; but they’re not very strong evidence, without external support. And they’re data; but the data in question is that something is intuitive, not that the intuition itself is correct. The claims made by our scientifically uncultivated intuitions and culture are just models like any other, and can be confirmed or disconfirmed like any scientific model, no matter how down-to-earth and human-scaled they are. They do not have the special status of ‘normality’ assigned to the data—data, not theory—of everyday life, that Egan’s Law draws our attention to.
When Newton(?) claimed that objects of different mass but negligible air friction fell at the same rate, that theory was rejected.
Copernicus.
Natural intuitions, common sense, and folk wisdom have consistently shown that they cannot identify a theory which explains the actual observations better than they can.
Common sense and folk wisdom say that has changed, and we will now accept a new, more correct theory without challenging it.
So the Aztec add up to normal? Because I’m not seeing how a culture that thought human sacrifice was a virtue has much folk wisdom in common with the modern era.
Why was this down-voted? This is an on-point and sane response. If ‘normality’ is defined as tradition, then history should be a sequence of confirmations of past traditions and common sense (e.g., the common sense claim that the Earth is stationary and the Sun goes around it), as opposed to disconfirming our beliefs and providing alternative explanations of our perceptual data. The epistemic disagreement between cultures, and the epistemic change within cultures, both refute this idea.
‘It all adds up to normality’ in the original sense of ‘The right theory must predict our ordinary experience’ is a correct generalization 100% of the time. (It’s The Law.) ‘It all adds up to normality’ in Eugine’s sense of ‘The right theory must agree with folk wisdom, common sense, and traditional doctrine’ is a generalization that, historically, has almost never been right strictly, and has only sometimes been right approximately.
Well, I didn’t downvote, but saying the Aztecs viewed human sacrifice as a virtue is at best an oversimplification. They sacrificed a lot of people and believed they were virtuous in doing so, but my understanding is that sacrifice within the Aztec belief system was instrumental to worship, not virtuous in itself; you wouldn’t be lauded for sacrificing your neighbor to Homer Simpson, only for sacrifices serving established religious goals.
The broader point, though, seems to be that the appeal to societal normality equally well justifies norms that call for (e.g.) sacrificing children to the rain god Tlaloc, if sufficiently entrenched. That logic seems sound to me.
I think it would be missing Tim’s point to suppose that he’s ascribing some sort of quasi-Kantian value-in-itself to Aztec meta-ethics, when all he seems to be noting is that the Aztecs got torture wrong. If you want to reserve ‘virtue’ for a more specific and historically bound idea in Western ethics, I doubt he’d mind your paraphrasing his point in your preferred idiom. It takes a pretty wild imagination to read Tim’s comment and think he’s saying that the Aztecs considered human sacrifice a summum bonum or unconditionally and in-all-contexts good. That’s just not what the conversation is about.
Yeah, you’re right.
Given the way they ran their empire, it would probably have collapsed without the intimidation factor that human sacrifice provided.
If we take as given that a nonspecific criminal act against a specific-but-not-here-identified person is required to sustain an empire, does that mean that drone strikes have virtue?
(Vague and awkward phrasing to avoid discussing a violent act against an identifiable person) (Note the hypothetical statement; in the least convenient world that statement is provably true)
I never said anything about virtue, merely about cause and effect.
How is that consistent with the Aztec adding up to ‘normal’ as used upthread?
I’m not sure I understand quite what this means.
To clarify: if at a given time common sense and folk wisdom are understood to predict a result R1 from experiment E where E involves a situation on ordinary human scales (or some appropriate analog), and at some later time E is performed and gives result R2 instead, would you consider that state of affairs consistent with the rule “the results of the theory should agree with our common sense and folk wisdom”, or in conflict with it?
If my observations are unreliable, I should not expect more rigorous study of the subject to confirm my observations.
Yes; OTOH, if you can already guess in which direction your observations will be moved by more rigorous study, you should move them already.
Not quite true. Say I have a d100 and I have two hypotheses—it is either fair or it is biased to roll ‘87’ twice as often as any other number. I can already guess that my observation from rolling the die once will move me in the direction of believing the die is fair (i.e. I can reliably guess that I will roll any number except 87). However, if I do happen to roll an 87 then I will update towards ‘biased’ to a greater degree.
It isn’t the direction of the expected evidence that is conserved. It’s the expectation (the directions of all the possibilities multiplied by their respective degrees).
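(A minimal sketch of the arithmetic behind this, using the d100 example above and assuming a 50/50 prior over the two hypotheses; the “twice as often” die is taken to give face 87 a weight of 2 against a weight of 1 for each of the other 99 faces.)

```python
# Conservation of expected evidence for the d100 example: most rolls nudge you
# slightly toward "fair", while the occasional 87 nudges you toward "biased" by
# enough that the expected posterior equals the prior.
prior_biased = 0.5
p87_fair = 1 / 100
p87_biased = 2 / 101  # assumed weighting: face 87 counts double

p87 = (1 - prior_biased) * p87_fair + prior_biased * p87_biased
posterior_if_87 = prior_biased * p87_biased / p87
posterior_if_not = prior_biased * (1 - p87_biased) / (1 - p87)

expected_posterior = p87 * posterior_if_87 + (1 - p87) * posterior_if_not
print(posterior_if_87, posterior_if_not, expected_posterior)  # ~0.66, ~0.498, 0.5
```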
Yup. There is, of course, potentially a big difference between how confident I am that my position will change, and how confident I am that my position will change in some specific direction.
The problem is that adding up to normality, while necessary, is not sufficient. It’s possible to explain the sexual status quo by appealing to patriarchy, sexism and institutionalized male privilege just as easily as by appealing to evo-psych. Any number of mutually-inconsistent theories can each individually add up to normality; adding up to normality by itself does not tell us which theory is right.
I never said it was sufficient. One common criticism of evolutionary psychology is that “it justifies the sexual status quo”, my point is that this criticism doesn’t hold water.
Not if the “traditional ideas” don’t necessarily reflect how things have been done for much of human history. Some of the gender norms people support with such arguments are genuine human universals, many others are not.
I’ve known people to make evo-psych arguments justifying a sexual status quo which were implausible or even refuted by known anthropology. I think you’re assuming a higher baseline level of credibility among people who ascribe to your own position than is actually the case.
Because anthropology is not at all full of people doing shoddy work and using it to justify pre-conceived beliefs. </sarcasm>
Edit: added link to Gene Expression.
Right. Many armchair evolutionary psychologists don’t understand the nature of the evolutionary-cognitive boundary.
What I’ve seen tends to be more like, “we have evolved to have [some characteristic], asserting a deontological duty not to have [aforementioned characteristic] is not a good idea”.
I didn’t mean that using a theory as a weapon against (i.e., in order to argue against) a different theory is always obviously a bad thing; in particular, I don’t think that using Bayesianism to argue against religion is bad (so long as you don’t outright insult religious people or similar). But in this particular case, evo-psy is a descriptive theory, feminism is a normative theory, and you cannot derive “ought” from “is” without some meta-ethics, so if someone’s using evo-psy to argue against feminism there’s likely something wrong. (The other replies you’ve got put it better than I could.)
Feminists frequently make “is” assertions, and justify their “ought” assertions on the basis of said “is” assertions.
In any case, you seem to be arguing that feminism will now be joining religion in the trying to survive by claiming to be non-refutable club.
They do, but their “is” assertions are stuff like “women have historically (i.e. in the last several millennia) been, and to a certain extent still are, oppressed by men”, which aren’t actually contradicted by evolutionary psychology, which says stuff like “humans are X because, in the last several hundred millennia, X-er apes have had more offspring on average”. (And the “ought” assertions they justify based on “is” assertions are stuff like “we’re further south than where we want to be, so we ought to move northwards”; IOW, they’re justifying instrumental values, not terminal values.)
That wasn’t my intention, but at the moment I can’t think of a good way to edit my comment to make it clearer.
Another typical feminist claim is “differences between the behavior of boys and girls are due to socialization”. This is, as you’d imagine, the kind of claim that is easily subject to falsification by evolutionary psychology. The related normative claim that “we ought to socialize boys and girls as androgynously as possible”, becomes challenged by the evolutionary psychology claim that “we ought to socialize boys and girls in ways that take into account their inherent differences.”
I expect claim C1: “for all differences D between the behavior of boys and girls, D is due solely to socialization” is false, and I expect claim C2: “there exist differences D between the behavior of boys and girls such that D is due solely to socialization” is true.
I expect claim C3: “differences between the behavior of boys and girls are due to socialization” to generate more heat than light, by virtue of being ambiguous between C1 and C2.
If I assume by C3 you mean C1… I expect the claim C4: “there are people who would assert C1, and that the vast majority of such people self-label as feminist” is true, and I expect the claim C5: “the majority of people who self-label as feminist would assert C1″ is false.
I expect the claim C6: “‘differences between the behavior of boys and girls are due to socialization’ is a typical feminist claim” to shed more heat than light, by virtue of being ambiguous between C4 and C5.
I suspect that many of the feminists who are willing to admit C1 is technically false will insist it applies to the particular D under discussion. In any case, claims of the form C1(D) “the difference D between boys and girls is due solely to socialization” work just as well for my point.
I suppose now you’ll claim that most feminists never really believed that the differences in question were solely due to socialization, and this discussion will develop a tone similar to that of debating a theist who gradually dials down what his religion actually claims.
Out of curiosity, Eugine, what sort of background do you have with feminism, feminists, feminist texts, etc.? Many feminists define feminism as ‘gender egalitarianism’, ‘activism for gender equality’, or ‘activism for gender equality plus belief that women are disproportionately disadvantaged by current sociocultural norms relative to men’. How would you define ‘feminism’? What is your view of the importance of specifically anti-sexist intervention and memecraft, and/or on the prevalence of harmful or overapplied gender schemas? I want to get a clearer idea on the background and aims you’re bringing to this conversation, rather than skirting around the heart of the matter.
Feminism has two common definitions:
1) Someone who believes in equality of opportunity for women.
2) Someone who accepts the results of feminist critical theory.
A lot of feminists tend to play bait-and-switch games with the above two definitions. In this context I mean something closer to (2).
… Not quite. I’d say that the first definition is somewhat uncontroversial (people opposing it usually deny the continued existence of the problem rather than denying the feminists’ desire, reactionaries excluded) and the second may be mis-named and is extremely fragmentary, with a whole bunch of different schools of thought, a few of which have thin coatings of anti-epistemology.
What would count as evidence that a particular behavior was caused solely by socialization? I’ll admit that evidence of sex-linked behavior among non-human primates is evidence that the similar behavior in humans is sex-linked. But before we start talking about proof, we need to agree what sorts of things count as evidence.
There are many behavior differences that cannot be explained solely on the basis of socialization. The most obvious is that women generally sit to pee, while men generally do not. Or we could look to some pregnancy related behavior that is not performed by men since they generally don’t get pregnant.
Likewise, there are some behaviors that we have strong reason to believe are pure socialization. For example, male preference for blue and female preference for pink is less than a century old.
I think you just answered your own question.
But if I pick a more controversial example from history, shouldn’t I predict that you will blow off that evidence by saying something sarcastic like: Because anthropology is not at all full of people doing shoddy work and using it to justify pre-conceived beliefs. </sarcasm>.
In short, a reasoned discussion needs a more concrete rule for what counts as evidence than “I know it when I see it.”
Something that is more likely to occur if the theory is true than if it is false. (Given the current state of cultural anthropology, this doesn’t include the writing of modern cultural anthropologists.) As for what an appropriate filter to use in this context is, analogous to the filter of scientific evidence used in the hard sciences, I’m not sure. This is itself a hard problem, which probably deserves to be discussed somewhere more prominent than below the fold on a week old thread.
I read that second link, and I am confused. He criticizes cultural anthropology for using concepts he believes are politically infected. One of his examples is “heteronormativity.” As I understand that word, it means something like:
I understand if you don’t think that type of social pressure is bad, but do you deny it exists? What should we call it?
Khan’s complaint is that “heteronormativity” is used as a boo light, just like the other words on his list: “Privilege. Oppression. Colonialism. Patriarchy.”
Yes, he thinks heteronormativity is fine. I don’t.
I asked you if it occurs, not if you disapprove of the phenomenon of heteronormativity.
Because if it occurs, then your argument that anthropology is unconnected to reality needs a better justification. Biased != unconnected with reality. Even biased evidence should have the potential to change a Bayesian reasoner’s probability estimate in the direction that the producer of the evidence suggests.
I mean boo light in the technical LW sense. I don’t know whether Khan approves of heteronormativity; I’m more sure he doesn’t approve of oppression, which is in the same list.
I’d add that ‘heteronormativity’ and the other words on the list are also sometimes used as spammable boo lights, sometimes leading to rather word-salad-like philosophies of condemnation, and that ‘privilege’ sometimes (but usually does not) forms part of an anti-epistemology.
I’d also add that specifically ‘colonialism’ and ‘patriarchy’ (also ‘capitalism’ when used as a quasi-boo-light) are occasionally treated as almost Platonic properties attached to society at large that automatically color every interaction, even among people who should be able to resist their influence or do not have a concrete reason to care.
That said, I think that all of these things have a very real meaningful existence and all of them are, usually, actually bad.
I think it’s pretty dangerous to describe terms from other fields of study as merely being applause or boo lights. Consider how frequently LW-newbies use “rational” as an applause light until corrected by others. Instead of taking terms from other fields as merely being applause or boo lights, we should consider that the terms might be frequently misused by novices or in popular culture, in just the same way that terms like “rational” or “rationalist” are. (TVTropes link.)
Take “privilege”, for instance.
It seems to be widely assumed that when social critics or activists attribute “privilege” to someone, that they are calling that person evil, arrogant, irresponsible, or something of the like. Because “privilege” is used in sentences that are spoken angrily, it is taken to be not merely a boo light but something akin to a swear word. And indeed it is sometimes used that way, because, well, people get angry sometimes when discussing starvation, rape, police brutality, and other things that activists talk about.
“Privilege” has a pretty specific meaning though. It means “social advantages that are not perceived as advantages but as the normal condition”. In other words: Some people have X, while others don’t; and those who do have X think that having X is unremarkable and normal.
To make up an artificial example:
Suppose that there are blue weasels and red weasels working in an office. For whatever reason, blue weasels are comfortable in a temperature range of 18–28C, while red weasels are comfortable in 22–32C. The office thermostat is set to “room temperature”, the normal temperature, of 20C. So the red weasels are always cold and have to buy expensive sweaters (at their own expense) to avoid shivering, while blue weasels frolic about in the nude.
When anyweasel proposes turning up the heat, they are reminded that running the heater is expensive and that 20C is the established normal room temperature — it even says so on the Wikipedia article “room temperature”! That some weasels complain about the cold and how expensive sweaters are is their own problem — maybe if they frolicked about in the nude more, they would feel better? Besides, if we started turning up the heat, before too long it would be much too hot for anyweasel! 20C is normal, and if some weasels are unhappy with that, well, that’s actually fortunate for the sweater-knitters, isn’t it?
Note that noweasel is doing any cost-benefit analysis here — and they also aren’t really treating all weasels’ interests as equally worthwhile. They’re just assuming that being cold is a fact about red weasels’ deviation from the temperature sense that they should possess (a normative claim targeted at the underprivileged), as opposed to being about the differences between red and blue weasels and the historical control of the thermostat by certain blue weasels (a descriptive claim about the history and structure of weasel society).
Not only does the temperature setting of 20C advantage some weasels over others, but the way that weasels talk about temperature — the discourse — contains assumptions about what is normal that advantage some weasels over others.
It is perhaps worth noting that the word “status” often gets used on LW to describe position in a social structure, with the understanding that individuals with higher status get various benefits (not always obvious ones), and that a lot of human behavior is designed to obtain/challenge/protect status even if the individual performing the behavior doesn’t consciously have that goal. I suspect that talking about the blue weasels as a high-status subgroup would not raise any eyebrows here, and would imply all the patterns you discuss here, even if talking about the “privilege” possessed by blue weasels raised hackles.
Somewhat to my amusement, I’ve gotten chastised for talking about status this way in communities of social activists, who “explained” to me that I was actually talking about privilege, and referring to it as status trivialized it.
When in Rome, I endorse speaking Italian.
That’s an interesting complaint. It suggests that we might understand and talk about social organization in ways that’re denotationally familiar to these communities of social activists, whoever they are, but that certain connotations are customary in that space that aren’t customary here.
As others have noted I don’t think status is all that good a term for what’s going on in the weasel example, but insofar as our understanding of status does overlap with the activist scene’s understanding of privilege, I think this is a good argument for preferring our own framing. At least unless and until we can decide that we actually want those connotations.
I might object that this is an abuse of the concept of status. Status is about how a person is thought of by other people. It is not about who happens to benefit from an established Schelling point, especially if the group benefiting had nothing to do with establishing it.
I’m assuming it’s acceptable to treat weasels as “people” in this example. Can you clarify how, on your account, the way red weasels in this example are thought of by other weasels differs (or doesn’t differ) from the way blue weasels are thought of?
I don’t know, i.e., it’s not at all clear from the example, and that’s my point. Analyzing the example in terms of status doesn’t work.
OK; thanks for the clarification.
It seems clear to me, for example, that red weasels are thought of in the example as possessing an abnormal temperature sense, and blue weasels are thought of as possessing a normal temperature sense. Would you disagree with this?
Well, fubarobfusco stipulated they are, and it’s his hypothetical situation. Aside from that, I’m not sure what you’re asking.
As I mentioned here, I’d analyze the weasel example in terms of Schelling points. Since fubarobfusco referred to the weasels using standard room temperature and citing Wikipedia, I assume the weasels chose their Schelling point based on human norms; most likely they’ve adopted human norms wholesale in an attempt to emulate the successful human civilization. I realize I’ve just massively expanded fubarobfusco’s hypothetical, but that’s the thing about Schelling points: they can make irrelevant aspects of the scenario relevant.
Yes, I agree that in fubarobfusco’s presentation of his hypothetical situation, the red weasels are thought of as possessing an abnormal temperature sense, and blue weasels are thought of as possessing a normal temperature sense. Which is why fubarobfusco’s presentation of his hypothetical situation seems to me to clearly provide enough information to determine at least some ways in which red weasels are thought of by other weasels differently from the way blue weasels are thought of. Which is why I was puzzled when you claimed fubarobfusco’s presentation didn’t provide enough information to determine that.
Hope that clarifies what I was asking. No further answer is required, though; I think I’ve gotten enough of a response.
WRT Schelling points, if properly understanding your analysis depends on reading Strategy of Conflict, I’ll defer further discussion until I’ve done so. Thanks for the pointer.
I assumed the hypothetical took place in a world only populated by weasels, where Wikipedia was written by weasels, and the room temperature article ultimately reflected the historical thermostat setting standard set by blue weasels.
“Privilege” has two disadvantages vis-à-vis “status”, though. First, it suggests a binary distinction (privilege or no privilege), as opposed to degrees of status. Second, status can be acquired, while the way I hear “privilege” used seems to exclude that.
Folks who use “privilege” as part of their usual vocabulary often note that people can have (and lack) different sorts of privilege, actually — for instance, racial, religious, or heterosexual privilege — and that these are not projected onto a single dimension.
Yes, the usual term for this is “intersectionality,” but I have yet to see a good theory of how intersectionality actually works, that does not consist mainly of just repeating the word (as a teacher’s password).
Status does seem like a superior conceptual tool, for the reasons Creutzer cited. (It helps explain why, say, football players may have obvious ‘privilege’ only in certain situations and with certain people, and may actually be underprivileged in other situations.)
The only thing it is lacking is a widespread understanding & terminology to reflect the fact that people (especially high-status people) are often oblivious to status differentials & treat them as basic features of the universe.
I’m not sure about that, in my experience it’s low status people who are more likely to treat status differentials as basic features of the universe. High status people seem to be more aware of status, as indicated by how much effort they put into fighting for it (mostly against other high status people).
Not sure. That may start to tie into Moldbug’s Cathedral a bit?
Alternatively, you may be ignoring the full scale. I’d say that the people most likely to ignore status or just consider it basic are those who are secure in their own status.
I didn’t say low status people ignore status; rather, they tend to treat it as a basic feature of the universe. Specifically, I was referring to existentialism in this sense.
As far as I can tell, intersectionality is just the observation that if A is worse than Ā and E is worse than Ē, then AE is usually worse than ĀE and AĒ. Which of course isn’t always the case.
EDIT: IOW, intersectionality is the idea that this remotely makes sense.
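To make the shape of that claim concrete, here is a toy numeric sketch; all of the scores are invented purely for illustration and come from nowhere in the thread. An additive model already yields the ordering described, and the interaction term is where the “usually” and the exceptions live.

```python
# Toy illustration of the claimed ordering (all numbers are made up).
# Higher outcome = better off. A and E are two disadvantaged categories.

baseline = 10
penalty = {"A": 3, "E": 2}  # being A is worse than not-A; being E is worse than not-E
interaction = 1             # extra penalty for being both A and E
                            # (try -3 to produce an exception where AE is not worse than A & not-E)

def outcome(is_a: bool, is_e: bool) -> float:
    score = baseline
    if is_a:
        score -= penalty["A"]
    if is_e:
        score -= penalty["E"]
    if is_a and is_e:
        score -= interaction  # the "intersectional" effect beyond the sum of the parts
    return score

for is_a in (True, False):
    for is_e in (True, False):
        label = ("A" if is_a else "not-A") + " & " + ("E" if is_e else "not-E")
        print(label, "->", outcome(is_a, is_e))
```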
Thinking about it, I think the biggest problem with “intersectionality” and the concept of “privilege” in general is that it groups together many differences that actually have very little else in common and encourages people to apply ideas appropriate to one of these categories to others where they are frequently wildly inappropriate. To take three examples from the chart that are in some sense maximally different, consider race, religion, and disability.
Race is an innate property that controversially correlates with certain abilities and behaviors. Disability is an innate property (for practical purposes at least) that perfectly and inherently correlates with ability. One way to see the difference between the two is to notice that a procedure that makes a blind person sighted is an unalloyed good that would more-or-less completely solve the problem, whereas a procedure that turns a black person’s skin white doesn’t solve any of the relevant problems.
Religion is a choice (subject to the person’s tradition). While there are in fact reasons to avoid discriminating by religion except when it’s directly relevant, they are somewhat different from the reasons for the other traits.
I agree with almost everything, but:
For some value of “choice” and “subject”, it is… OTOH, I think (though I’m extrapolating like hell, so I’m not very confident) that many fewer people convert to a different religion than dye their hair (at least among females), and still saying “hair colour is a choice (subject to the person’s genome)” would sound kind-of weird to me.
I frequently hear “privilege” used in ways that allow for it to be acquired or lost, and I frequently hear it used in ways that allow for different groups to have more or less privilege relative to one another (that is, degrees of privilege). But I’m willing to believe that other linguistic communities exist that behave as you describe.
Remember, just because people are using a term incorrectly does not mean that the term does not represent something empirically useful. In this case, the nuance between “Status” and “Privilege” is that “Privilege” is a special kind of status; it is status acquired based on group identity. There’s an entire branch of study called “Intersectionality” that touches precisely on the ideas that ‘privilege’ exists in degrees, can be gained and lost, and is often situational. Even there, though, there’s a lot of BS and politicking.
But remember that that doesn’t invalidate the usefulness of the concept, any more than “just-so stories” and BS justifications invalidate evolutionary psychology as a discipline. Intersectionality is clearly a fruitful area for cultural research that is in desperate need of a rationalist approach.
Mostly agreed.
Intersectionality does need better rationalism. I’d add that some intersectionality has the drawback of fighting a War On Keeping Your Identity Small, and in many cases, when activist groups dedicated to a single purpose absorb the idea of intersectionality, they rapidly assimilate into the greater Social Justice Bloc, with all the positives and all the negatives that entails. Furthermore, intersectionality sometimes appears to stand against utilitarian strict optimization.
Yes, one thing that bothers me about social justice folks is that they sometimes sound very essentialist (“they assume a homeless white man is more privileged than Oprah Winfrey”, as I’ve seen someone put it).
They do have their explanation there. The essentialism I have noticed usually comes from radical feminism (which is often taken to mean ‘extremist feminism’, but while nearly all radical feminists are extremist, the term when used by radical feminists actually refers to a specific and rather essentialist, one-sided view of gender relations).
They have a tendency to conceptualize patriarchy as a diffuse property of society that colors everything that even slightly involves gender, and tend to be unwilling to slice it up into its component parts. They also tend to ignore how immense the possible gender-relations-space is outside patriarchy/!patriarchy.
The thing I find most frustrating is how learning about intersectionality leads to groups being assimilated by the Equality Borg. It’s almost like an infohazard for progressives.
I would like to point out that you’ve just swapped the definition of “privilege” from the one fubarobfusco gave to the one I mentioned in this comment.
Yup, agreed with all of this. (Not sure if you thought otherwise.)
For a while now, this has been my number-one example of a word that’s useful to taboo. Since the definition is that succinct, and the word “privilege” has a tendency to derail what could have been a good conversation, we’re probably better off simply not using the word.
That makes sense to me — but only for reasons analogous to why one might want to taboo “rationality”, namely that it’s easy to be misunderstood since the listener has heard lots of low-information uses of the word.
Still, if I were to tell someone else that they have to taboo their field’s terminology — and start speaking in novel (albeit succinct) synonyms — in order to convince me that they’re not simply emitting applause or boo lights, that would seem like a hostile move on my part. I’d be telling them that they have to take on the cognitive load of translating from their usual language (with words like “privilege” and “heteronormativity”) into a language that I’ve deigned to accept (with words like “misnormalized advantages” or “opposite-sex assumptions”).
Yeah, pretty much this.
When communication seems to be failing, my instinct is to taboo my own communication-disrupting language and adopt my interlocutor’s language instead. When I don’t understand my interlocutor’s language well enough to adopt it, my instinct is to ask questions about it. When communication has failed so thoroughly that I can’t ask such questions and understand the answers, I pretty much give up.
It seems like a lot of people don’t do this. I wonder why? It seems common for people to resist adopting any of the other party’s terminology. Some possible explanations:
1. I’m mistaken. It’s actually rare for people to resist adopting the other party’s terminology — I’m just exercising the availability heuristic.
2. This is like speciation. The cases where people don’t adopt the other’s terminology are those where it’s advantageous (to whom?) to pick one side of a language barrier (and defend it) instead of mixing. When that isn’t the case, people already have mixed their terminology — invisibly.
3. It’s a way of pushing cognitive costs around as a primate status fight — “I’m too important to bother to understand you; you have to understand me.” (In the extreme case this would lead to the creation of low-status roles that specialize in understanding without expecting to be understood. Do those exist? Some wag would probably say “wife” or “husband”, ha ha.)
4. It’s a way of defining and defending territories — “Yes, you get to insist on your language in that magisterium, and I get to insist on mine in this one.” (This seems to be more what goes on in academia than #3.)
5. People are ignorant of the benefits of understanding one another. Only awesome people like you and I have figured out that understanding other people is awesome. (This seems really unlikely, but it seems to be the premise of some schools of communication improvement.)
6. People are afraid that if they started using the other party’s language they would mess up their command of their current language. (Economists can’t afford to learn to talk like cultural anthropologists because they might slip up and use anthro-jargon in front of their economist buddies and seem ignorant of economics jargon.)
7. People are afraid that if they started using the other party’s language they would become disloyal to their current affiliations. (Economists can’t afford to learn to talk like cultural anthropologists because they might become convinced cultural anthropology is right and would lose all their economist buddies.)
… ?
In my experience, they’re called “employees”.
9. Language carries with it many framings, assumptions, associations, and implicit value judgments (see: The Noncentral Fallacy). Letting the other side set the language lets them shape the playing field, which gives them a large home field advantage.
10. People are cognitive misers who mostly rely on cached thoughts. For example, the arguments that they make are arguments that they’ve thought about before, not ones that they’re thinking up on the spot. And their thoughts are cached in their own language, not the other party’s language. (Related to #3, but it’s not about status.)
#1 might certainly be true, but if so it’s true of both of us; it seems common to me as well.
I agree with you about #3 and #4, though I mostly think of #4 as a special case of #3.
I find #5 unlikely, but if we’re going to list it, we should also note the symmetrical possibility that you and I overestimate the benefits of understanding one another.
A variant of #7 is that using the other party’s language is seen as a signal of alliance with the other party’s tribe, which might cost them alliances with their own tribe… e.g., even if the economist isn’t convinced of cultural anthropology, their economist buddies might think they are. (Which arguably is just another special case of #3.)
As long as we’re listing lots of possibilities, I would add #8: They believe their language is superior to their interlocutor’s language, and that the benefits of using superior language exceed the benefits of using shared language.
And, relatedly, #8b: Behaving as though they believe #8 signals the superiority of their language (and more generally of their thinking). Which arguably is just another special case of #3.
I see the difference between #8 and #3 being that using superior language might be positive-sum in the long run, whereas pushing cognitive costs onto someone else is zero- or negative-sum.
I suppose. That said, if your thinking is superior to mine then you gaining status relative to me (whether through cognitive-cost-pushing as in #3, or through some other status-claiming move) might be positive-sum in the long run as well. Regardless, I agree that #8 is distinct from #3.
I should also note that the strategy I describe often fails when people interpret my questions about their language as veiled counterarguments, which they then attempt to decipher and respond to. Since my questions aren’t actually counterarguments, this frequently causes the discussion to fall apart into incoherence, since whatever counterargument they infer and respond to often seems utterly arbitrary to me.
It’s not surprising that people do this, since people do often use questions as a form of veiled counterargument.
I don’t think “understanding without expecting to be understood” is quite it, but there are a number of relatively low-status roles, in domains with specialized vocabularies, whose job is basically to act as a translation layer between specialist output and the general public. Tech support is the obvious example. In medicine, family practice seems to have shades of this, and it’s low-status compared to the specialties. Grad students sometimes pick this up in their TA role. I’m not sure if anything similar happens in law.
(Continuing Unnamed’s list)
11. They suspect (perhaps correctly) that the other side’s terminology has anti-epistemology embedded within it.
11*. Each party suspects this (perhaps correctly) of the other.
(This is the same but with the usual ethical symmetry assumption that the other is at least in theory capable of occupying the same position towards us that we occupy towards them.)
Is that not a special case of #8?
Would you be willing to do this in a discussion with say a theologian or a creationist?
Sure. I’ve done the former many times, the latter a few times.
I’ve occasionally been given definitions of “privilege” by activists, and each time the definition is different. A more common one is “an unfair advantage that people have by virtue of being in certain groups”.
You’re right, and both definitions tend to be used interchangeably. I’ll work on correcting that in my own speech, but I think in the meantime here’s the essence of it:
Privilege is a phenomenon that occurs as the result of a special kind of status, but the term also gets used to describe the form of status that generates the phenomenon.
When a status based on group identity is pervasive enough to be invisible to members of that group, the resulting assumptions lead to a set of behavior called “privilege”. It’s probably even clearer to use the term “privileged status” than mere “privilege”, when talking about the status itself rather than the resulting behaviors; I’m going to try using that myself for the next few weeks and see if I can anchor some critical self-analysis to the process.
That still doesn’t work since the weasel example involves “privilege” in fubarobfusco’s sense but as I pointed out here doesn’t actually involve status.
Upvoted for pretty good description, and I agree that all of these are actually usually used in a meaningful (if not optimal) way.
It might be added that in some privilege cases, the experience of not having the privilege is totally alien and leads to things like lonely men envying sexual harassment.
By the way, if you’re interested in what I consider a better analysis of your weasel example, I recommend looking at Thomas C. Schelling’s Strategy of Conflict, particularly chapter 3. (I don’t think I can do justice to his analysis in this comment.)
Is that about something similar to these posts?
For small values of similar I suppose.
What reaction do you expect to get by pre-emptively putting words in my mouth with this kind of a sneering tone? Because my reaction is to immediately lose all interest in further discussion with you.
If that was the reaction you expected, then you’re successfully predicting the results of your behavior, which is great.
If that wasn’t the reaction you expected, then I hope this helps you calibrate your behavior better in the future.
Tapping out; downvoting.
I don’t mean to criticize your choice here, because you certainly are entitled to set your own boundaries.
But I want to note for any readers of this thread that this is what evaporative cooling of group beliefs can look like on a particular topic.
What group belief does my comment illustrate the evaporative cooling of?
There is dispute in this community (and society as a whole) about whether anything is wrong with gender dynamics, and how to talk about making changes.
Eugine has a fairly hostile position to the current methods of talking about what needs changing. You have a less hostile position to those methods. If he’s the only person who talks about this topic in this venue, he gets to control this venue’s position on reflexive examination of social norms, by moving the position towards more extreme hostility.
I’m not opposed to reflexive examination of social norms, although I do believe it should be done carefully. My objection is that the methods you seem to prefer for examining social norms don’t correspond to reality.
Thanks for the clarification; this is not at all what I’d initially understood you to be saying.
In general it’s worth staying aware of the differences between “nobody talks to X about gender dynamics” and “only X talks about gender dynamics,” as it’s the latter (or approximations thereof) that cause the problem you describe… but I agree that if X is consistent about involving themself in all discussions of gender dynamics, the former starts to approximate the latter.
So yeah, I’d say you’re right, this is one of the ways evaporative cooling works. (And I understand that that’s not meant as a personal criticism, except perhaps in the most technical of senses, and I’m not taking it as one.)
Edit: Hm. Annoyingly, actually, I do seem to be taking it as one. So let me say, rather, that I don’t endorse taking it as one, and will work on getting over it. :-)
To be fair (I’m not sure on who—maybe Dave, maybe everyone here) nothing that has gone on in this backwater of a subthread can be considered at all representative of a group position on anything. From the beginning this has been about slinging mud and taking offense at positions allegedly possessed by various groups of people that presumably exist somewhere on the internet. Most people just wouldn’t touch this with an 11 foot pole.
I’m not sure I agree. This discussion is one example of what seems to me to be a representative pattern of behavior. Obviously, I am at substantial risk of mind-killed biased perception, but it seems to me that the local consensus is basically:
That has the effect of cutting out the extremists on both ends, but also cuts moderate-extremist social change activists out without addressing their counterparts on the other end of the continuum.
Behaviors that punish +5, +4, and −5 (on the continuum of positions) will skew what is said aloud so that it appears to outsiders that the local consensus is different from what it actually is. Much like the complaint about political correctness, that punishing +5, −4, and −5 will change what newcomers see as acceptable.
My position is that the quality of discussion on that particular subject is a disgrace that I don’t want to be associated with and would prefer not to have to put up with here. Years of experience suggest improvement is unlikely and that suppressing the conversation is the least harmful outcome. I don’t think I’m alone in that position (and so challenge your proposed ‘consensus’).
If newcomers were to see no conversation about moralizing sexual dynamics at all then they may be given the impression that this isn’t a good place to moralize about sexual dynamics. That would seem to be the best outcome that is realistically attainable.
You’d like a venue that talks about how to figure out what object-level moral injunctions to put onto a super-intelligent artificial entity, but doesn’t talk about how to talk about how one large group of humans treats another large group of humans? I’m sympathetic to your disgust with the quality of discourse, but I think you are asking for the impossible.
Separately, it isn’t that hard to find examples of disparate treatment of various positions on the continuum, independent of how extreme they are. In other words, there are lots of −4 discussion posts and comments that are well received, while there are fewer +4 discussion posts and comments equally well received. So even if the consensus you wanted were possible, I don’t think it is actually being implemented.
I’d expect people’s ideas of where the zero point is to vary considerably, mainly thanks to selection effects: on average, people tend to be exposed mainly to political ideas similar to their own, partly due to political tribalism and partly because of geographical, age, and social class differences. That gives us a skewed local mean, and selection bias research tells us that people are not very good at compensating for that kind of thing even when they know it exists.
On average, therefore, we’d expect people with strong opinions on both sides of the aisle to feel that their side is meeting with a slightly harsher reception on the margins. That seems to explain most perceived political bias in this forum pretty well; taking the last poll results into account, if any mainstream position has an unusually hard time on the margins I’d expect it to be traditionalist conservatism. (Disclaimer: I am not a traditionalist.)
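The selection effect being described is easy to illustrate with a rough simulation; everything below (the opinion distribution, the acquaintance-weighting function, the sample size) is made up for illustration and is not a model of any actual forum.

```python
# Rough sketch: everyone samples acquaintances mostly near their own political
# position, so the "local mean" each person perceives is pulled toward themselves.

import random

random.seed(0)
population = [random.gauss(0, 2) for _ in range(10_000)]  # opinions on a rough -5..+5 axis

def perceived_mean(own_position: float, k: int = 200, closeness: float = 1.5) -> float:
    # Weight people near your own position more heavily, then sample k acquaintances.
    weights = [1.0 / (1.0 + abs(p - own_position) / closeness) for p in population]
    sample = random.choices(population, weights=weights, k=k)
    return sum(sample) / k

for own in (-4, -2, 0, 2, 4):
    print(f"own position {own:+}: perceived local mean = {perceived_mean(own):+.2f}")
# Each perceived mean sits between the true mean (0) and the observer's own position,
# so partisans on both sides can feel their side gets the harsher reception at the margins.
```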
Yes. “Who—whom?” is not the sort of moral question I would like to discuss here.
Ideally I would like a venue where I just prevent people from slinging bullshit. That isn’t an option available to me. An option that is available is to make use of my trivial “downvote” and “make comments” powers to very slightly influence reality in the direction of less bullshit slinging contests.
I was trying to preempt a way the discussion could go. As for how I expected you to react, I’m generally not in the habit of psychoanalyzing my interlocutors. Although here is an example of how I respond to words being put in my mouth without flipping out.
And both claims are wrong. The only correct way of phrasing the normative claim is “We ought to socialize boys and girls in the way that maximizes instrumental value.”
It might have instrumental value to socialize boys and girls differently, even if there is no biological basis for the difference. It might be more valuable to socialize them the same, even if there is a biological reason why they are different.
Citation needed. A more typical claim might be “socialization is the cause of the vast majority (but not the entirety) of the observed difference between boys’ and girls’ behaviors and skills,” and this easily falsifiable claim is borne out by the available data, never mind evo psych just-so stories about what worked in the EEA.
A lot of nitpicky LW discussion could be avoided if we implicitly qualified absolute-sounding claims about relations in real life with “in most cases”. It would be rare that someone would object to e.g. a claim such as “differences between the behavior of boys and girls are due to socialization” being amended by “in the vast majority of cases”, or by ”… but there are exceptions.”
We can default to treating claims as absolute when they refer to theoretical frameworks, where absolute claims more often hold and are more often intended.
I’ve danced this dance before, with Robin Hanson no less.
Let me side with your youthful incarnation from five years ago:
Beyond just clarifying, you did seem to have taken the initial comment at face value, even though you probably suspected the intended meaning.
I agree with you regarding making the intended meaning as plain as possible as best practice; however, sidetracking the discussion in such a way often leads to “gotcha” continuations of minor details (minor because most people will side with you interpreting claims about human behavior as non-absolute by default, and follow the discussion correctly without such clarifications/rebuttals), which tend to replace other, more substantive discussions.
Sure. But it gets a little more sticky when one is attributing a false absolute claim to some other party, as Eugine did.
Or, you know, a google search. From memory even a google site search would be adequate.
(Which is not to say that such claim is inherent to feminism itself. Merely that the specific observation by Eugine that it is often made by feminists is not worthy of ‘citation needed’ stigma.)
Since the claim that is actually often made by feminists is both weaker and, according to current research, true, Eugine’s “observation” is a strawman. And I snort at the notion that my reply imparts a “stigma”.
That’s a far more complicated claim than it appears, with much of the complexity hiding inside the word “oppressed”.
When I find other people’s motivations mysterious, I find it helps to see if I have anything like that motivation (for dominance, it might be a desire to be in charge of anything at all) and imagine it as much more important in my life.
Unless there’s a friendly AI which has been built in secret somewhere, we’re still all human, with all the weaknesses and foibles of human nature. Though we might try to mitigate those weaknesses, one of the biggest weaknesses in human nature is the belief that we have already mitigated them, leading us to stop trying.
Status interactions are a big part of the human psyche. We signal in many ways—posture, facial expression, selection of clothing, word choice—and we respond to such signals automatically. If a man steps up to one and asks for directions to the local primary school, one would look at his signals before replying. Is he carrying a container of petrol and a box of matches, does he have a crazed look in his eye? Perhaps better to direct him to the local police station. Is he in a nice suit, smartly dressed, with well-shined shoes, accompanied by a small child in a brand-new school uniform? He probably has legitimate business at the school. And in between the two, there’s a whole range of potential sets of signals; and where there are signals, there are those who subvert the signals. Social hackers, I guess one could call them. And where such people exist—well, is it a good thing to pay attention to the signals or not? How much importance should one place on these signals, when the signals themselves could be subverted? How should one signal oneself—for any behaviour is a signal of some sort?
Except that what’s being discussed here is the exploitation of those weaknesses, not their mitigation. And seeking to exploit those weaknesses as an end in and of itself leads to a particular kind of affective death spiral that rationalists claim to want to avoid, so I’m trying to raise a “what’s up with that?” signal before a particular set of adverse cultural values lock in.
Ah, I see; so while I’m saying that I expect that some exploitation will happen with high probability in any sufficiently large social group, you are trying to point out the negative side of said exploitation and thus cut it off, or at least reduce it, at an early stage.
We are humans, and even our truth-seeking activities are influenced by social aspects.
Imagine a situation where someone says that you are wrong, without explaining why. You would like to know why they think so. (They may be right or wrong, but if you don’t know their arguments, you are less likely to find it out.) If they consider you too low status, they may refuse to waste their time explaining something to you. If they consider you high status, they will take their time to explain, because they will feel a chance to get a useful ally or at least neutralize a potential enemy.
Generally, your social power may determine your access to information sources.
That’s true, but at the same time it should be mentioned that we do live in the era of the Internet (ridiculously accessible information, no matter how low status and not worth their time anyone thinks you are).
With each passing day, we’re moving closer and closer to a world where trying to build accurate models of the world is a different activity than socializing. For example, it seems plausible to say that emotions are The Enemy in epistemic discussion, but one of the main things to be engaged in and optimized for in a social setting.
I knew and tried to mention that social power has instrumental value; are you saying that signalling offense can lead to someone explaining the reasons why you are wrong often enough to be worth introducing the noise to the technical discussion?
Or, more often, who else thinks so and how much power they have...
Maneuvering into a position of social power is an intrinsic, biologically-mediated goal for many humans (cf. the “Machiavellian Intelligence Hypothesis”). Thus, treating ‘status’ as an intrinsic rather than instrumental goal is a very common trap for humans to fall into, especially particularly clever humans (since ‘cleverness’ probably evolved primarily to serve precisely these purposes, and so the stimulus will activate those modules preferentially).
If locating truth in the search space is a preferable goal to gaining social status, then it might be worthwhile to taboo the word ‘status’ for a while, especially on lesswrong—it seems to be collecting a lot of unfortunate cached subtext.
Also: be very careful asking questions like that, because they tend to signal low status if you ask them wrong. ;)
Sometimes I find that signaling low status is useful. Sometimes I don’t intrinsically care about status and signaling low status is more instrumentally valuable. Sometimes I am low status and signal honestly.
And sometimes status is efficiently distributed and what is instrumentally useful to a tribe is also high status.
It’s not clear that everyone can learn not to be offended, and being offended imposes costs on the group in terms of the things they’re going to consider and share.
But assuming that everyone could learn not to be offended:
Everyone can learn not to be offended by the few offensive people, or a few offensive people can learn to be less offensive. The group where the latter holds has made much more efficient use of its time and can work with a wider range of other groups.
So, even granting that you need people who can discount a certain level of offence in order to share differing ideas, I don’t know whether “don’t be offended” is the most efficient group norm to put in place for dealing with cases of disrespect and/or wilful offence.
A more useful policy would be to exclude people who give willful offense or are willfully offended, and to apply effort equally to preventing accidental offensiveness and to disregarding accidentally offensive events.
I second this! The mental state of being offended is not useful.
However, I want to point out that I believe there’s some typical mind fallacy popping up in this post. I think it’s geared at the particular group of people whose knee-jerk impulse is to perform offendedness once they are in the offended mental state, because the post doesn’t clearly distinguish between the two. But that’s not the knee-jerk reaction of everyone. For example, I am very conflict-avoidant (to the point of doormat-ness), so I actually had to teach myself to perform offendedness for the social benefit of enforcing boundaries, which is pretty important but is only briefly touched upon in the post. My natural impulse was to tolerate (sometimes deliberately) offensive behavior and do nothing, so because I didn’t get defensive or angry, I would just get hurt and sad and … take it. Until I eventually realized this was a bad strategy. Therefore! I think it would be useful to clean up that distinction and make sure that it’s the offendedness mental state that is the bad habit.
Were you commenting on an earlier version of the post, or something? ISTM that the “not everyone is the same” point is addressed by the fourth paragraph and the “behaving as though you were offended is not always useless” is covered by the fifth paragraph.
I made there be an earlier version. 8)
(Because I asked for edits, so that increased the chances of there being a later version because that’s what edits are. Which then implies that there was an earlier version because you can’t have a later version without an earlier version … yeah.)
katydee has been editing in response to suggestions.
Thanks for the feedback! I have a followup post on the way that I think will clarify some of the issues that you are referring to. I definitely agree that the mental state is what’s really important here. Overall one thing that I think is not discussed enough on LessWrong is how all the thought processes trained by the typical LW canon can be derailed under certain circumstances. You might be the most rational person in the world, but if you’re too angry/sad/joyful to think straight, you may not be as effective as you would hope.
Thank you for your reply! I’m really glad you’re planning to cover this topic more, and I definitely agree that extremely emotional mental states are derailing for rationality.
Unfortunately, I don’t think your reply quite addressed my concern, and I’m starting to see a lot of comments from other people who are also reading this post as “don’t get angry” rather than “detect your offended/victimized mental state, don’t make any decisions and speedily think yourself out of it” because of the conflated language throughout the post. I would really super-appreciate it if you could edit it to be precise, because it seems to use “getting offended”, “acting angry”, and “acting defensive” interchangeably in a lot of places.
Not all offended people act angry. Some people have really peaceful-looking offended states where they’re secretly making mental notes to hold a grudge forever! They might read this post and think it doesn’t apply to them. Some of us don’t get offended at all and need to teach ourselves to socially demonstrate that something is wrong. Some people unfortunately don’t even think they’re allowed to get offended because they think they deserve every bad thing :(, but aren’t in the scope of this post. They might read this and think there isn’t anything to learn here.
Done.
Awesomeness! Thank you.
My concern with your assertions is that there is a serious risk that they will be used as a complete counter-argument to certain disliked social movements (proto-example here).
I’m not saying that all of my putative allies are rational, or have terminal values that can be reasonably implemented. Clearly, that is not the case. But a great deal of that problem is caused by the general low sanity line across the political spectrum.
In short, I’m concerned that your message will be interpreted as narrowly focusing that criticism to only one part of the spectrum.
EDIT: Fubarobfusco said it better
As a special case of a special case, I’ve been taking note whenever someone makes a comment about me that I find offensive. More often than not, it’s because I’ve just been called out for a negative characteristic that I do, in fact, possess. Litany of Gendlin applies, and it’s almost certainly more productive to deal with the issue at its core than to waste time actually being offended.
Getting offended is a way of discouraging antisocial behavior, perhaps even the primary way. Because this is a public good, it is probably underprovided. (And yet you go on to recommend against it! Frankly, I’m shocked.)
Getting offended for one’s own sake, alternatively, is probably a Pavlovian learned behavior because criticism feels bad. Being able to distinguish between different causes of offense seems like a useful skill, due to the costs of being offended that you point out.
More generally, one can better calibrate one’s offense-giving by training to be offended at antisocial actions iff your offense actually has the deterrent effect. There is little utility in being offended by someone who is not in front of your face. There is also little utility in disapproving of people who do not care for your approval. Inasmuch as people care about being disapproved of even when they are not looking, however, you may wish to cultivate offense even then.
… I think this may lead to a theory of acausal insult.
Personally, it’s my strategy to insult anyone who could have contributed to my being born, but didn’t.
That’s kind of the opposite approach to the one most people take vis-a-vis the set of people who may or may not have copulated with their mother.
If someone other than my father had copulated with my mother sometime in late 1986, a person other than me would have been born.
I believe that it is both possible and desirable to discourage antisocial behavior without becoming (or even acting) offended. Further, in many cases “calling people out” serves to derail conversations into a nonproductive or semiproductive state where the offense (or lack thereof) becomes the focus of the conversation. This seems necessary only in the most extreme cases.
Personally, I find that allowing such things to pass and then talking them over with the offender after the fact seems a better method of handling things. “Praise in public, criticize in private.”
I realize this is possible, but is it actually effective ? Entire social movements have been built on the basis of acting offended; and some of them, f.ex. the Civil Rights movement, have been spectacularly successful (comparatively speaking). Of course, one could argue that their success wasn’t worth the cost...
This seems like a pretty big oversimplification.
(Counterexample: Any act of civil disobedience under risk of violence seems to be ill-characterized as “acting offended”.)
In addition to calling for edits, I’m going to be a proactive human and type out my procedure for dealing with an offended mental state. Maybe it’ll be helpful to people?
Notice that you are in an offended mental state, which generally feels like being hurt, angry and the victim of an attack. It feels like the person was trying to hurt you, or should have known what they did was going to hurt you.
Make a mental note not to do anything important or make any decisions until you get out of this state, and then start working on getting out of it. Personally, I find it helpful to go over the facts of what happened, but if this makes the feeling worse then you may instead want to distract yourself, do breathing exercises to calm down, cry*, etc. Whatever works for you.
Go over the facts of what happened precisely.
Do you really have evidence that the person was trying to hurt you? Would the thing be hurtful to someone else? If the answer is yes, then you should probably speak up, firmly but respectfully, that they have done a hurtful thing. Generate social pressure that doing hurtful things isn’t cool and you’re not going to let them slide. If they get defensive or refuse, disengage.
Do you have evidence that they should have known that what they did was going to hurt you? It may turn out that they had no way of knowing that was hurtful to you and you should tell them! Otherwise, a reminder or reiteration is probably sufficient.
Otherwise, think about precisely why the thing that happened is hurtful to you. Would you want the freedom to do the same thing, even if you knew that it was possibly going to be hurtful to someone?
If you find asymmetrical answers, then either you need to stop doing the thing, or you need to acknowledge that the hurt you’re feeling isn’t something that someone did to you, but something that occurred indirectly, which means the hurt feeling is yours to work through yourself. The good thing about this is that it means the person doesn’t hate you or anything!
It might turn out that the person did X and you’ve determined why it’s hurtful to you, but you have no idea why they might have done it because X is something you never do; in that case, skip to the next step.
After you’ve figured out why something is hurtful, it helps to think of the situation in terms of requests. What can the other parties involved do to make you feel better? I generally find that the things that come out of an offended state are attempts to make the offender feel bad, which is not productive at all—it’s just going to put them into the state where they want to make you feel even worse! Therefore, if you aren’t in a mental state in which you can generate productive requests, then you have more calming down/processing to do.
Consider how the other parties involved are likely to respond to your requests and try to find a method/situation of conveying them to the other parties in a way that maximizes the chances of the other parties being able/willing to fulfill them. Sometimes none of the expectations are high enough, so maybe don’t bother actually requesting the thing? It is still helpful to know what you would need in a situation.
* Note: Some people react weirdly to the crying (and I don’t know why).
I may be one of the people you’d describe as reacting weirdly to the crying, and my reason for it is this.
In order to not be seen as an Insensitive Person, when someone you know starts crying in your presence, especially if it’s because of you, you’re obligated to Do Something about it.
I do not have a cache of appropriate procedures for Doing Something.
If you’ve ever been in a situation where you say exactly the wrong thing, and find yourself scrambling for a way to rectify the social faux pas (tvtropes link), that’s more or less what it feels like.
I’d like to propose another procedure!
“Are you okay?” This covers the Sensitivity angle by Showing Concern.
“Is there anything I can do to make you feel better?” This allows you to obtain a procedure for Doing Something, but not only are you Doing Something, you are also doing the exact Thing that the crying person wants you to do. Customization! Sometimes, crying people don’t want you to do anything, but also if they tell you something random at step 2 and see you actually do it, they might be more inclined to trust you with the actual Thing they want you to do.
I think this is pretty versatile!
I do tend to give responses like this, but they feel awfully fake to me. I may appear more authentic than I feel when giving them. One time I asked my mother if she would describe me as a warm person (I wouldn’t, but I wanted to know what other people thought), and she said that she generally wouldn’t, but sometimes I am, and gave an example of a time when she was distressed over a cancer scare, and when she started crying, I immediately walked up and hugged her.
But I also remembered that event very well, and to me, hugging her didn’t feel like a natural way of consoling someone in distress; it felt like “Crap, I am required to Do Something, what do I do?” and desperately searching for a socially appropriate response.
This probably makes me sound a lot more uncaring than I actually am. It’s certainly not that I don’t empathize with others’ distress, but I’m not nearly as emotive as I am emotional, and I become distressed when I feel like I suddenly have to signal compassion in a way that’s different from my response to actually feeling compassionate.
Not at all. It makes you sound exactly like I feel a lot of the time–as someone who didn’t naturally pick up a lot of social scripts, it just feels frustrating that people have these scripts, and expect you to know when and how to follow them even though they’re completely counterintuitive, and that people care about how you appear, not your intentions (or what you actually get accomplished).
Fake it till you make it! And take this as consolation: plenty of people’s natural, instinctive responses to people in distress aren’t helpful. The fact that you’re actually thinking consciously about your response means you can notice over time what works and what doesn’t and adjust accordingly.
I definitely know what that feels like; whenever people come to me with a problem, I immediately start trying to solve it, which probably comes off as awful pushiness if they just wanted someone to signal compassion at them. But it comes from compassion! People with problems need to stop suffering from them as soon as possible! I only feel compelled to give people hugs when the problem is unsolvable, like someone dying. (As a result, I’m really bad at greeting-hugs.)
Here are some questions: What would you want others to do for you if you were crying or upset? How often do people actually do the thing you want? Because if it’s not that often, you may want to let them know. I think actually most people appreciate feeling helpful in situations like that. Like if someone is giving you a hug and you don’t want it, ask them to do something else instead? Eventually, they should condition to always do the other thing when you’re upset. Hopefully.
Personally, it really bothers me when people get distressed if I’m upset or crying, because it feels like they care more about making me stop than actually resolving the problem that caused it. Like the more they let me cry, the less I will like them later or something. Or as if my crying bothers them so much that they just want to shut it off. Whereas I prefer to sit there and cry until I figure out what I need from them. Therefore, I would argue that being distressed at upset people isn’t instrumental, because it sends this weirdly selfish message sometimes. I also think that, in general, non-manipulative upset people appreciate a stable not-upset person around them? I hope. (Does anyone have a non-manipulative case where they’re upset and want to upset everyone around them?)
Also! I think as an addendum to step 2, I would say find the 10 most common Things To Do that people appreciate, and start listing them if the person’s not articulate enough to give an answer. “Would you like a hug? Would you like a glass of water? Would you like to be left alone?, etc.” Hopefully that will cover most people and you won’t have to worry too much that you’re not Doing the Correct Thing because they will have said yes when you asked!
If I were crying? Not be there. Even if I never got it from my own family, the socialization for men not to cry in front of others is pretty strong. It might seem like a socially unenlightened perspective, but honestly, the embarrassment of having someone see me cry would probably be more acute than whatever comfort they would offer. I think that men are often at a loss dealing with crying people for this reason.
If I were upset, but not crying, then the situation could go two ways. They could ask if I want to talk about what’s bothering me, and I say yes, and explain what I’m upset about. Realistically, I’ve already thought about ways to solve the issue, so I’ll be bothered if they try to contribute ways to solve the problem before I relate my thoughts on the matter. After having shared my distress, I’ll tend to feel somewhat better.
The other way it could go is that they ask if I want to talk about it, and I say no. I won’t do this out of a desire to seem tough or bottle things up, but because I honestly don’t trust or feel comfortable enough with the person to want to relate my concerns to them. In this case, I’ll feel worse than if they hadn’t asked at all, because by asking them to leave, I’ve been forced to signal my lack of solidarity with them. In this case, the best thing the person can do is leave without asking me anything, so I can deal with the issue myself without having to tell them that their presence will only make matters worse.
This contributes to my distress in dealing with crying people, because I know that if I were in their place, the same actions could make my mood better or worse depending on something the other person couldn’t be expected to know about.
Yes! I definitely know that feeling. There are some times where people offering hugs is exactly what I need and there are times where hugs are exactly the opposite of what I need. This is kind of why I kind of think asking people stuff and requesting stuff are really the best policies, even if they don’t feel socially sensitive-looking enough sometimes and can be subverted by manipulative people.
I think I understand? It’s like this unthinkable thing you can’t imagine happening to you, so you don’t know what to do when it’s happening to someone else. But thinking about unthinkable things is useful and good for your brain! (One day, you might be around some onions or something.) I still think that specific “distress” reaction is not useful, and maybe it can be helped by working out a specific procedure and sticking to it like a robot even when you feel weird.
I’ve been in these absurd situations where a guy gets so upset that I’m crying, that I have to comfort him even though he did the thing that made me cry in the first place. I’ve also had people assume not crying about something means it’s not important. I think it would be nice to demystify crying as an imperfect physical process that doesn’t always correlate with importance, clarity, sensitivity, etc.
In my own entirely anecdotal experience, some crying people react very negatively to #2; a fewer number react negatively to #1. This procedure is far from universal.
I wish there was some way to prove that my procedure is optimal under uncertainty and we should just train everyone to use it, but I might be drastically overestimating the number of articulate-while-crying people or knowing-what-they-need-while-crying people or expect-you-to-read-mind-while-crying people. =P
Maybe someone could build a model and then we can take a huge poll to fill in the model numbers.
Upvoted for procedure, but there’s something this doesn’t cover: How to deal with an offended mental state when the offender is malevolent and disengagement under 3.a is impossible. That would be useful to know. For bonus points, answer from an epistemic rather than instrumental rationality perspective.
I’ve dealt with sociopaths recently—or, if not sociopaths, at least Babyeaters. My usual offense strategy is very similar to yours and had no decision path for the situation. The gap is kind of on my mind.
So I think I originally had “leave” and I changed it to “disengage” to try to encompass situations where you can’t physically leave; I was referring to checking out mentally from the conversation. I’m not sure how many situations this covers (work?), but you could keep repeating “I’m not going to talk about this anymore,” and force your brain to space out. A handy epistemic perspective here says to think of what they’re saying as “meaningless chunks of wordmeat,” which I’ve personally found to be helpful.
As someone that doesn’t get offended in the typical way, I had a huge breakthrough when I realized that there exist people that don’t mean what they say. (Whaaa?!) From what it looks like to me, some people get mad and then grasp at the closest words in their cache to verbalize their madness feelings and then spout them out at other people. But once they’ve done that, their madness feelings go away! So they can’t really model them anymore and so they don’t even remember what they said, just that they yelled at you. So what would happen to me is that they’d apologize for yelling at me, and I’d say “We have to talk about that thing you accused me of because I have this detailed argument about why it’s wrong,” and they’d be like “I said that? Whatever, I’m sorry.” Even when there are chat logs! (Whaa?!)
So now I’ve gotten better at recognizing that in people, and once I can sorta tell that they’ve checked out, then I can expect to get no closure on the event by continuing to listen to more of what they’re saying because they don’t mean it and won’t remember most of it anyway.
Otherwise, if they’re malevolently trying to hurt you in a systematic way, I think it helps to distinguish between what hurts more: is it the fact that they’re trying to hurt you at all or is it the method by which they go about it? Because if it’s the latter, then you could feed them ammunition that makes them model you wrong. Tell them you really care about stuff that you actually don’t care about?
I know exactly what you mean, and can add a point: The people who don’t mean what they say assume that you don’t mean it, either. It is a personal policy of mine to say exactly what I mean, and only what I mean, whenever possible. Yet I routinely run into people who will take something I said, extrapolate or delete from it until it resembles what they “thought I meant,” and then answer that. Then judge me on it.
Needless to say, the mismatch is both harmful to communication and incredibly frustrating. I have a suspicion they’re making heuristic guesses that are acceptably correct when dealing with other verbally-inaccurate people, but fail when dealing with someone going out of their way to be precise.
I admit I don’t really have evidence for that hypothesis.
When I’m feeling snarky, I will sometimes respond to this sort of thing with some variant of “That response only makes sense to me if I assume what I actually said was something more like X. Is that what you heard?” The sorts of people who skew my output on input frequently respond to that in entertaining ways.
I’ve occasionally done that in text, now that you mention it. My in-person verbal comprehension has such a high latency that I can’t really do it there. (by the time I’ve worked it out the conversation has moved on) Pre-caching expected misinterpretations may help, if I can anticipate them accurately enough.
When it isn’t incredibly frustrating like you described, this works to my advantage because I generally mean it whenever I say something awful. And they assume I was just venting or whatever. =P
I am reminded of the following exchange between two housemates in my youth:
X (to Y): “Don’t take this the wrong way, but: fuck you.”
Y: (laughs)
X: “No, y’see, you’re taking it the wrong way.”
My strategy in situations like that is to try to get rid of all respect for the person. If to be offended you have to care, at least on some level, about what the person thinks then demoting them from “agent” to “complicated part of the environment” should reduce your reaction to them. You don’t get offended when your computer gives you weird error messages.
Now this itself would probably be offensive to the person (just about the ultimate in thinking of them as low status), so it might not work as well when you have to interact with them often enough for them to notice. But especially for infrequent interactions and one-time interactions, I find this to be a good way to get through potentially offensive situations.
Oddly enough, I get much angrier at my computer for not working than I ever do at other humans. Though I wouldn’t say I often get “offended” by either. I wonder how common this is?
My ambivalent reaction to this post motivates me to make a distinction between two kinds of advice; I will call the first “community-normative” advice and the second “agent-pragmatic” advice.
On one reading of your post (as community-normative advice), you’re basically telling people in general to do what the title says: “Don’t get offended!” My gut reaction to that is along the lines of handoflixue’s comment, only with less profanity. Everything anybody ever says is a speech act, and some speech acts are harmful, and some are intentionally harmful. So telling someone not to get offended is kind of like telling them to stop getting in the way of moving fists. Potentially a sign of moral myopia.
On another reading of your post (as agent-pragmatic), I see sensible advice for any individual thinker in the abstract. Yes, if it’s possible to cultivate a general disposition not to be offended, that might be a good idea, in the same way as cultivating an immunity to arsenic might be a good idea if you live in an Agatha Christie novel.
I think the difference between the two is that if you say “Don’t get offended!” without disclaiming the community-normative implications, you’re imputing blameworthiness to those who are (perhaps maliciously) offended.
To be fair, you did actually disavow those implications.
Yes, telling people not to get offended is like telling them to stop getting in the way of moving fists. And on a case by case basis, it generally is bad to blame people for what other people are doing to them. But on a long term basis, if you find yourself constantly on the receiving end of moving fists, you might want to seriously consider learning to dodge better. Similarly, if you find yourself constantly getting offended to the point that your epistemic rationality becomes impaired, you should seriously consider practicing ways to better manage your emotions.
That really, really depends, though. Two different people may find themselves in that situation for completely different reasons. Some folks really just can’t catch a break; others really are ready to see a slight in anything that remotely discomfits them. Some folks need to learn to dodge better, some folks probably won’t get far with any advice that tells them to do something different since all these moving fists are not their idea and they’re taking pains to avoid them as it is, and I daresay many folks will encounter both types of situations, because moving fists are not a single class of thing...
At the low end of the mind, you’re absolutely right. The options are: take the hit, dodge, hit back, redirect the punch away, or don’t even get near people in the first place. The best of those options is to redirect the punch away, which is very difficult to do.
At the high end of the mind, where there exist extreme layers of subtlety that most people never become aware of at any point in their lives, there is another way: realize that the punch is not directed at you. At that level of depth into the mind, the offendee actually entices people to say offensive things in order to get offended.
One layer deeper than that, the offendee’s subtle body language, and overall “air around them” or “feeling they give off”, is what entices people to say things that person will find offensive. At this level, the method is to realize that the punch is not only not directed at you, but is actually directed at the puncher.
As offensively blunt as it is to say: the reality is, it’s always the fault of the person who gets offended. Of course, most of the time, all people involved are offended, and so it’s everyone’s fault. In the end, what I’m trying to say is not “don’t be offended”, but instead: listen to your feeling of being offended. It knows better than you. It’s not telling you what’s wrong with other people, it’s telling you what’s wrong with yourself. It’s right.
Don’t get hurt. Pain is natural and very easy to experience, but it interferes with your capacity for rational thought, and that’s clearly suboptimal!
Ironically, the text of your post seems unambiguously correct to me.
Yes. That’s because what I’m riffing on is the superficially-reasonable nature of your statements here. That’s kind of the idea behind sarcasm—tone and context alone make the difference between two very different readings of the same utterance.
That being said, I agree with some other commenters that a generalized disposition to not take offense strikes me as problematic and a little Spocklike. I am put in mind of Aristotelian ethics, wherein one is recommended to pursue the virtue of righteous indignation (that term had less baggage in Aristotle’s time) in contrast to the opposite vices of irascibility and complacency.
In certain very specific cases, yelling at the top of your lungs and banging on the table might be the entirely correct thing to do in response to a person’s actions or words, and the sense of offense you feel is useful, because it is what provides you the necessary motivation to do so.
I’m also reminded of, IIRC, Maimonides’ Guide to the Perplexed, which directs one not to allow oneself to become angry, because anger distorts clear thinking, but also observes that sometimes it is necessary to display anger so as to effect desirable change in the world.
Not really. There is a qualitative difference between being harmed and being offended. And as usual the word “offended” can range in meaning from a perceptual experience of distaste or dissatisfaction to a surrender to anger and outrage. It’s clear to me at least which end of that scale katydee is advising us to avoid.
Of course it wouldn’t make sense to advise people to avoid disliking things that are contrary to their values. But it makes perfect sense to advise mindfulness in the face of strong emotional responses. “Keep a cool head under fire” is uncontroversially good advice, and not equivalent to blaming people for being shot at.
Also, katydee’s advice works when applied to itself, because clearly too there would be nothing useful about being emotionally outraged at the idea of advice-as-victim-blaming, and none of the reasonable critical comments here seem to be couched in the form of incoherently angry rants.
I’ve been trying in the back of my mind to summarize something about this discussion since it started, and I think I have something useful:
A lot of people will hear “don’t get offended” as “don’t care about what people say or do; avoid being hurt or upset by becoming numb, cynical, or hyper-relativist.” This is the sense in which, for instance, Internet trolls mock people for being offended by casual use of racial or sexual slurs.
But self-modifying to not care about what people say or do means throwing out some part of your value system; giving up on it — and specifically, giving up on the part of your values that says I prefer to live in a world where people are kinder to each other.
A different take on it, though, is “keep your value system; keep valuing kindness — but notice when your reactions to unkindness are effective at discouraging unkindness and when they are not.” If, in a particular situation, blowing up at someone for using racial slurs is likely to accomplish the desired result — communicating that you actually give a shit about people of other races and aren’t OK with asshole behavior towards them — then blow up at them. For that matter, if blowing up at them can rally other people to say that asshole behavior is not OK, that’ll be worth it too. But notice when offense works and when it doesn’t — and don’t burn up your own neurotransmitters giving too many fucks when it isn’t going to help.
The preference is not given up. What is given up is attachment to reality being different from what it is. You give up the notion “if <thing I don’t prefer is the case in the universe> then <I cannot be happy>”. That leaves you free to be happy, content and socially alert and competent while you go ahead and pursue the things you want.
Right, so I think what is being missed here is the functional role sadness, anxiety and offense play in motivating human action. Sadness has intentionality—it is about something—and its proper role in a human mind is to motivate various complex responses along the lines of “avoid this” or “prevent this from ever happening again.” Lose the sadness and you lose the motivational power it contains. I don’t think this motivational power is actually replaceable by a more generic abstract preference. (Or else charity fundraising would just say “we offer 20 utilitons per dollar” and that would be enough.)
Of course, the kind of emotional distancing recommended here might be necessary if the sadness/anxiety/offense actually becomes, in itself, an obstacle to achieving your goals—which can certainly happen. But it is not the general case.
EDIT: Just to be clear, I am talking descriptively about humans, not prescriptively here. It’s too bad that we aren’t strongly motivated by “20 utilitons per dollar.” We should be! But we aren’t.
The functional role sadness, fear, suffering, and all such emotions play is the same role pain plays: it is an indicator, telling the mind where the problem is. There are certainly multiple ways to “fix” the problem. In the end, however, the methods that in any way dampen progress are methods that don’t actually fix the problem. (The problem is never external.)
Roughly 80% of the time, people are offended by things that they don’t know they do themselves. That’s why it is very important to listen to the emotional pain: to figure that out.
Roughly 20% of the time, people are offended by things that they do the opposite of on purpose, and take pride in. In this case, it is equally important to listen to the emotional pain: to figure out that they are doing the wrong thing. These two things can overlap.
Roughly 50% of the time, people get offended at their own imaginations; and what they are offended by has no bearing in reality. As in: they put words in people’s mouths, or they alter definitions. At these times, there is no reasonable way to avoid offending these people. They alter their understanding of reality so that they can be offended. They’re basically addicted to getting offended. Yes, I really mean 50%.
This does not follow. If the motivational power of sadness is replaceable by a more generic abstract preference, but most people do not perform that replacement, then charity fundraising would appeal to the “most people” baseline.
I’ve read a few gun bloggers’ comments that having a gun available to them made them -less- likely to consider violence, less likely to treat insults seriously; they had implicit knowledge that whatever slurs or insults came their way, that’s all that it would amount to. They couldn’t be bullied, and offense was removed from the equation.
For them, a gun was a stoic focus.
I’ve seen others for whom martial arts provided a similar stoic focus.
For a lot of people in a lot of situations, taking offense is in fact an offensive maneuver, along the lines of “The best defense is a good offense.” It’s an opportunity to demonstrate that they will defend themselves. Remove the perceived need for that, and things get considerably simpler. People are at their most easily offended when they feel the most vulnerable.
Thus, I suggest anybody who is easily offended consider and address their sense of vulnerability first. It’s entirely possible the offense taken is at a perceived threat, rather than the words themselves.
It seems that it would be easier to keep one’s identity small the less one deviates from the norms.
Literally screaming racial slurs in a person’s face is an offensive act. Acting cool may be one good defensive strategy, but other strategies are not unwarranted.
Maybe I’m having a problem with ‘offended’ as a mental state as opposed to something like ‘angry’. ‘Angry’ seems more of a mental state or feeling within yourself, while ‘offended’ seems less of a feeling and more a description of an act that you are attributing to the other person.
I read this post more as “Don’t get angry” than as “Don’t get offended” or “Don’t feel attacked.”
A large part of one’s identity is acquired by conforming to and identifying with social norms.
Not least because identity isn’t just under one’s own control — it’s also imposed from without by other people. So if person X is unusual in some salient way, other people are likely to end up impressing that fact upon person X, even if person X wants to discount that aspect of their identity.
Not that it’s impossible, mind. Just harder. Still, good point.
As jooyous noted, one of the most important skills is to be able to notice when you are offended, or otherwise emotionally hampered. This is not at all trivial, as you can see from the discussion threads here, where people who are clearly emotionally compromised behave as if they were acting rationally. (Yes, I am guilty of this, too.) I am not sure how to develop this skill of noticing being offended while being offended, but surely there is a training for it. Maybe something as simple as a checklist to go through before commenting would be a start. Checklists are generally a good idea.
The training method is called Taoist meditation.
Do this: be aware of yourself. Once you can do this to some extent, do the next step: be aware of your awareness of yourself.
Keep practicing this forever. That is meditation in a nutshell.
A checklist can help as long as it’s used to further the practice of being aware. If you’re having trouble being aware of yourself, practice being aware of other things first.
Oh, and another trick is that the fastest way to improve is to never go beyond 70% of your ability. One should only do so to gauge one’s ability, to better know what 70% is.
There’s an emotional state, and there’s disagreement about terminal values.
How should I communicate disagreement about terminal values? How should I behave to try to change the terminal values that society as a whole implements / tolerates?
Good question. First of all, is it even possible to change an individual’s terminal values? My guess is that the answer is “no”; that’s why they are called “terminal values”. Or, rather, even if it were technologically possible to change a person’s terminal values, doing so would probably amount to murder. It would be akin to reprogramming Clippy to care about butterflies instead of paperclips.
If changing an individual’s terminal values is impossible, and if you are committed to a very low level of violence, my guess is that you should attempt to instill your desired values in as many young children as possible—and let time take care of the rest.
That’s not what “terminal values” means. It simply means the values from which all of a person’s other values can be derived. It is perfectly possible to change one’s terminal values—for instance, a young child cares only about itself, while almost no adults care only about themselves.
That’s a good point. Another example is going through puberty. Although one could imagine an AI whose terminal values are always the same, and which thus changes its behavior only due to acquiring new knowledge, it seems that humans are literally built for their terminal values to shift in particular ways.
That’s a good point about children (and puberty, as Crux said); it’s possible (and IMO likely) that some of their terminal values are malleable. But I also agree with what Viliam_Bur said on a sibling thread: issues like racism and segregation are instrumental values, not terminal ones.
I don’t think that’s necessarily true—see for instance Haidt’s work on moral foundations. Plenty of people who opposed interracial marriage framed it as a matter of purity/contamination.
There have been many historical examples that look like changes in terminal values. For example, George Wallace appears to have changed positions on racism in government (going from pro to anti).
As you yourself note, the US Civil Rights movement appears to have successfully changed the US society’s terminal values. Although I’m not sure “getting offended” is an accurate paraphrase of the Southern Christian Leadership Conference’s strategy or tactics.
I don’t believe those were terminal values. If someone changes their position from pro-segregation to anti-segregation, I would expect that in their previous model, segregation seemed better, but later they updated their model, and under the new model, segregation seemed worse. Better and worse in making everyone happy, for example.
Some people sincerely believed that segregation brings better results for everyone. Whether they were right or wrong, segregation was just a means of achieving some other value.
EDIT: I think we should be careful about assuming that our opponents have different terminal values. More likely, they have a different model of reality.
I agree with what Viliam_Bur said. I think our terminal values are more along the lines of “seek pleasure, avoid pain”, with the possible addition of “...for myself as well as my descendants”. Issues like racism matter to people because, in the long run (and from a very general perspective), they cause pain, or inhibit pleasure.
My first thought here was, “No, we don’t know what our terminal values are yet” and to nitpick the particular ones you proposed.
Then I realized: “Terminal values” are an idea in a mathematical theory. They’re not things out in the world. They’re a data object in a model that is a deliberate simplification of behavior for the purpose of highlighting particular features of it. “Terminal values” exist in the map, next to Homo economicus and markets in equilibrium — not out here in the territory where there’s mud on our boots.
Nobody does something because of a terminal value. Rather, people do things, and some folks (very few) explain those doings in terms of terminal values. Others explain them in terms of dramaturgy or other maps, each of which highlights different things.
Moreover, if we expand the mathematical map to the level needed to encompass something as complicated as people … well, the sorts of things that people think of and say, “These could be human terminal values!” are usually not even the sorts of things that we should expect could be human terminal values. The difference is as big as the difference between the syntax accepted by a text adventure game’s command parser and the syntax understood by a natural language speaker.
Yes!
I think this is hugely important. I want to add to it though: even as a manipulation it’s usually pretty silly. What can you really accomplish that you’ll feel good about afterwards? The main thing it accomplishes is to get people to stop throwing uncomfortable potential truths at you. Like any other knee-jerk emotional reaction, it tends to be pretty short sighted.
There’s a good substitute too. I try really hard to avoid taking offense, and in the very rare cases where offense seems potentially useful, I’ve gotten good results by merely pointing out that a statement is offensive, without actually taking offense. Sorta like saying “I’m a big boy and I can handle this conversation, but are you really taking that position right now?”. Usually that gets them to be a little more empathetic.
Getting offended gives you a reputation that tends to stop people being rude to you and treating you badly. You punish perpetrators by ditching them. They are less likely to abuse you in the future—and so are onlookers. Being the victim of verbal abuse doesn’t help much with acquiring correct beliefs either.
Responding firmly and effectively to actual attacks may preserve status and discourage others from abusing or taking advantage of you. Being emotionally upset is not an important part of that response.
In fact, responding excessively or inappropriately to perceived but unintended attacks loses you some respect and can discourage others from involving you in social activities or cooperative tasks. We may respect a badass, but we don’t like an arsehole.
Getting angry is easy because we have neural circuitry dedicated to it, whereas thinking clearly under stress is difficult. It’s commonplace, however absurd, to rationalise a failure to think clearly under stress as a wise social signalling strategy :)
I endorse not responding excessively or inappropriately to attacks, and I endorse responding firmly and effectively to attacks.
I agree that if my emotions are preventing me from doing these things, that’s worth correcting. And I agree that this is a common problem.
The solution is not necessarily (or typically, in my experience) to not have the emotion in the first place.
Agreed and voted up. Of course, you don’t get a choice about whether to have an emotion, at the base level.
Not sure “offended” is a primary emotion though. It seems to me (by introspection) to be bundled together with a lot of culture-dependent and habitual behaviours, associations and memes, all of which are sub-optimal for any given situation, and could do with being brought under conscious control before being allowed to influence my actions.
I get a choice about whether and how I experience emotions, in the same sense as I get a choice as to whether and how I run marathons. That is, I can’t decide right now to run a marathon, or to not feel anger, but I can make choices that will reliably eventually get me there. What I’m saying isn’t that the latter is impossible, but rather that I don’t endorse doing it.
I agree that offense is bundled together with and mediated by lots of culture-dependent and habitual stuff. I would say the same about a lot of emotional patterns. And yes, some (though not all) of that stuff is suboptimal for any given situation.
And yes, the ability to choose how I act even when emotional or otherwise experiencing influences on my behavior is valuable.
I agree that offense is not a primary emotion, if I understand what you mean by the term.
I don’t think that’s true. I’ve seen lots of cases of manipulation where being emotionally upset was the whole focus. Genuine distress evokes sympathy from onlookers and persecutors alike.
Attempting to evoke sympathy by displaying distress is another kind of response I suppose. But its success depends entirely on the reactions of your onlookers and persecutors: they may be amused and encouraged, for example. And even where successful, its success still doesn’t depend on actual internal loss of emotional equilibrium.
Whereas being aware of and in charge of your emotions in a stressful situation is always a winning strategy.
Just as for an aggressive offended response, a manipulative whiny offended response can still lose you respect and social advantage if it is misjudged. And you are more likely to misjudge it if you are acting in a poorly controlled way out of emotional upset, rather than in a conscious attempt to communicate clearly.
BTW, I don’t particularly rate the argument that because you’ve seen people easily manipulated by displays of distress, it’s an advantage to be genuinely distressed by things. Obviously it stops being an advantage for the manipulator if the onlookers are able to control their own distress at seeing someone else apparently upset.
But I think we’re wandering away from the topic a little. Being offended isn’t the same as being distressed, except for pathological narcissists.
In theory people could try faking emotional reactions, hoping for a similar effect. In practice, they run into the cheat-detection mechanisms in other people’s brains. If they get found out, there’s a risk of getting a reputation for “faking it”. In many cases, it’s easier and simpler to be genuinely offended, upset—or whatever.
Easier, simpler, still not a great idea, for all the reasons I gave above.
That only applies among adults, and possibly not even all of them.
“Don’t get mad, get even, or whatever the smart thing you decide is appropriate”
Very nice; I see your point. This is a skill that benefits (actually enables) most cognitive processes (or would make you a fantastic chess player).
I would like you to elaborate more on this idea:
Humans are gregarious; in ancient times this was our greatest asset against most predators (ironically, it also made us the most terrifying), so it was (and is) of utmost importance to be part of the tribe. Apparently we are genetically programmed to follow this tribal behavior.
Many times when I witness human interaction I see “power play” (I am sorry to say this, but the first thing that comes to mind is apes throwing feces at each other); eventually this leads one of the participants to be offended.
I wonder if the “loser” is overwhelmed with emotion since being degraded in status means the possibility of becoming an outcast, with the unpleasant effect of being devoured by the tiger. Also, without sexual partners there is zero chance of passing on your (precious) genetic information.
Cheers!
The extreme scenario need not even be the dominant factor. Even the less drastic effects of status degradation you mention result in less sex with desirable mates, less access to resources (including food) and fewer social consequences for rivals attempting to exploit you.
Also note that if your status is low, higher status people can and frequently do hurt you for the fun of it.
You might as well bow out before busting out the ad homs.
Eugine_Nier never said it was “just” the genes, on the contrary. If you were making the claim that genes are not involved, the onus is on you to show so. Asking for evidence isn’t an argument from ignorance. It would be astounding if there were genetic variations leading to endless variations in everything except cognition. The default assumption (even with no cognition-specific data) is “everything is affected by genetics”. The degree may well be lower than individual variation, but it would still shift the mean and lead to an overall difference between groups.
I have heard the term “heritable” used to mean something like: “Parental life outcomes are strong predictors of child life outcomes, and both outcomes will likely be similar.” At that level of abstraction, heritability is a true phenomenon. But that says nothing about mechanism. The two obvious candidates are:
genetics
environment
Eugine seems to be rejecting environment, so I’m not sure what he thinks is the causal mechanism for heritability, if not genetics or similar unalterable in-born traits.
I’m not rejecting environmental causes, I’m objecting to whowhowho’s claim that genetic causes aren’t even worth discussing.
I didn’t say that only hard science is worth considering. In fact, the “hard science” / “soft science” distinction functions to marginalize subjects like history, sociology, and anthropology.
What I said was that rigorous science is worth taking seriously, with the implication that there’s a fair amount of normative pressure for certain fields to sacrifice rigor in order to produce the “correct” results. Particularly in popularization, both evo. psych and anthropology are at terrible risk for this sort of bias.
Social sciences generally cannot run double blind studies, especially in subjects like history, economics, and anthropology. Further, being a participant in the observation adds all sorts of risks of biasing the data. And general motivated cognition creates a risk that one’s data will only reflect one’s preconceived beliefs.
Nonetheless,
(1) The experts are well aware of these difficulties.
(2) The results those fields produce often are sufficiently rigorous to be worth taking seriously. To the extent our interlocutor says differently, he is wrong.
I would add that a narrow focus on quantifiable data can be limiting, especially when you are researching culture and doing content analysis. Coding is a way to convert that content into numbers—counting mentions of words or themes—but that requires a lot of qualitative analysis to begin with, and certain aspects are often lost in translation. Any social scientist worth their salt will take into account as many biases as they can and devise an experimental design to control for them as much as possible. But again, you have to be prepared to defend why you did what you did, and human nature is complicated.
Having said that, I think mixed qual/quant designs are pretty great, especially when you have meta-analyses to back you up.
I didn’t say the humanities shouldn’t be taken seriously (I hate the term “soft sciences”). The humanities study fields where scientific evidence is generally not available, thus they have to rely on other kinds of evidence. Unfortunately, this makes it easier to get away with sloppy work, BS, or even outright lies in those fields. This should be taken into account when assigning probability values to various statements.
You’re misunderstanding Julian’s claim, albeit I think for reasons of inferential distance rather than deliberate misreading. The claim was not that anthropology/Sinister Cathedral Orthodoxy endorses inborn gender identity, despite its being wrong, for its political utility to trans rights. Such Orthodoxy is precisely the basis on which he thinks it is wrong. The claim was that activists endorse this false belief for its political utility, and that he and other Sinister Cathedral Agents don’t feel particularly obliged to go out of their way to correct it (although doing so was precisely what he did in that post.) If there was a widespread belief that washing your hands protected you from demons, I would not fault epidemiologists for failing to prioritize disabusing the public of this. Nor does it strike me as an affront to science that epidemiologists, as a general rule, have normative commitments that extend beyond scientific inquiry and on to the belief that health is better than sickness.
My point is that one way or another the claim obtains the official stamp of approval as being “scientific”, and that this is a reason to be highly skeptical of anthropological claims with this approval.
Which claim? The one that anthropologists are endorsing is not the one that’s politically convenient to them.
Of course not. Her claim is that the great and noble anthropologists are deceiving the public for the greater good.
Okay, so at this point we’re basically disagreeing over what someone intended by what they say. Unless Julian wants to clarify I’m going to tap out.
I don’t believe so. At least I can’t see where your position differs from mine. The difference is that you object to my formulating her position in a way that doesn’t make anthropology look good.
First, let’s taboo “humanities.” No disrespect to the literature professors, but the type of analysis aimed at determining whether Hamlet is really insane, or pretending so for political / social advantage . . . is not what we are talking about in this conversation.
Instead, we are trying to talk about academic fields that are attempting to examine human beings and human societies in order to make useful and falsifiable statements about them. We are also talking about philosophy of science, particularly analytical, conceptual, and methodological considerations for the human-studying fields.
Because randomized controlled trials (RCTs) are impossible or unethical in these fields, the quality of the data is much poorer than in fields where RCTs are practically possible. And quality of data is an important bulwark against nonsense, particularly politically motivated nonsense. That still does not justify claiming that entire fields are filled with liars. To make an equally ridiculous claim, one could say the so-called harder sciences are filled with liars because retractions for fraud are commonplace.
Anthropology results should be taken with a heavy grain of salt. But you argue the stronger claim that anthropology is worthless, through and through. Regardless of some folks here announcing a willingness to stretch to make a point, that doesn’t prove the experts in the field are doing misleading things. Stick your neck out—tell us what evidence would make you believe that heteronormativity exists. (I can easily list facts that would make me doubt the phenomenon of heteronormativity.)
I don’t understand your reference to this article. To quote it:
To the extent Eliezer asserts that there is empirical, but non-scientific evidence, I reject the usefulness of the distinction.
My central example of “humanities” are things like history and philosophy.
No, the justification for claiming anthropology is filled with liars is that they don’t make strong attempts to hide this. Or, as they’d prefer to call it, they pursue goals other than pure truth-seeking.
The difference is that in the hard sciences people who commit fraud or otherwise lie are made pariahs, in the “soft sciences” it is frequently the people who refuse to go along with the official lie of the moment who are made pariahs.
There is no a priori reason why anthropology should be completely worthless, it’s just that the people who are currently in it are more interested in pursuing a political agenda than truth seeking.
It’s not just a few folks doing misleading things. It’s that the official “scientific” results, i.e., the things that will get you dismissed as a crank if you question them, are based on such “stretchings”.
Experience and some theoretical arguments have shown that when humans restrict themselves to using scientific evidence, they are less likely to fall into various collective failure modes.
First, no commentator in this venue is a leading researcher in sociology or anthropology, so anything said is incredibly weak evidence for your strong claim that researchers / academics “who are currently in [those fields] are more interested in pursuing a political agenda than truth seeking.”
Second, you have not stated what evidence you could see that would make you believe the sociological / anthropological theories are true. An outside observer could think that your assertions about misconduct in these fields exist to justify your disbelief in the substantive results. If Omega (who is always right) told you that anthropology is not more politically driven than physics, would you accept anthropological theories?
FWIW, I recently saw the thesis talks of a few graduands in social psychology, and they seemed to me qualitatively different from those of physicists: in the former, the professor who introduces the graduand to the audience will spew out lots of applause lights (e.g. “Ms So-and-so is going to speak about $topic, which is such a big problem nowadays that affects so many people”), professors will occasionally interrupt the graduand with comments like “yes, this is a great idea, I hope to see more of this in the next years”, and after the talk they will ask stuff like “why did you choose this particular topic” (trying to elicit applause lights from the graduand themselves); in the latter, the introduction will be limited to “Mr So-and-so is going to speak about $topic”, full stop (even when they could in principle mention how graphene is such a revolutionary material or whatever—they just don’t), no-one will interrupt the graduand unless they say something unclear, questions at the end will be strictly technical (or occasionally “what applications can this have”), and there are hardly any applause lights except trivial ones such as “thank you for your attention”.
BTW, while I’m not familiar with linguistics except through the internet, ISTM that it is seen as a hard science (for the purposes of what’s being discussed in this subthread) by insiders but as a soft science by most outsiders, and as a result once in a while a non-linguist will be disappointed when a linguist refuses to espouse boo lights about non-standard language usage (e.g.).
Just to confirm: you’re proposing that when linguists refuse to condemn non-standard language usage, that’s an expression of different cultural norms between the hard-science and soft-science communities regarding the use of boo-lights, rather than an expression of linguists not negatively valuing non-standard language usage?
Not quite—more like, what the linguists say is “an expression of linguists not negatively valuing non-standard language usage”, but what the non-linguists asked them and what they will think when they hear the answer is “an expression of different cultural norms between the hard-science and soft-science communities regarding the use of boo-lights” to some extent—but for some reason I don’t terribly like this way of putting it.
Ah, OK. Thanks for clarifying.
WRT the second quote… in what way do you dislike it? E.g., does it seem that I’ve factually misrepresented the position, or that I’ve framed it negatively, or...?
Weird… On reading it again it no longer sounds that bad to me, and I can’t quite remember why it did.
If you have any insights as to what caused either the initial reaction or its termination, I’m interested.
I think I might have been primed to think of the phrase “boo light” as a boo light. My inner Hofstadter is laughing.
Consilience, i.e., independent verification of their claims by people from other fields and few instances of refutations of their claims.
I would mostly update in the direction of him not being Omega. Now if he made a claim that was at least plausible, like anthropological theories having some political drivers but not enough to overwhelm the science that would be another story.
Er… why? You wouldn’t expect the proof of the Poincaré conjecture to be independently verified by phoneticians, would you?
I don’t expect every claim to be independently verified. What I do look for is that the claims that can be independently tested will be confirmed rather than refuted.
I wonder how much difference the grammar/typographical error made to perceptions of this comment. For some reason that trivial little ‘n’ acts like a speed bump/interrupt for my brain and it kicks me out of the flow of the argument.
Edited, thanks.
It’s interesting the different tolerances various folks have to typographical mistakes.
This post and some of the comments seem to me to have got the wrong end of the stick. Sometimes offense is used as a rhetorical trick, in which case notions of ‘high status response’ and ‘manipulation’ are appropriate. However, it normally occurs when one person—from callousness or ignorance—says or does something that does not accord another person the respect and dignity they are entitled to.
When someone says something offensive to you—they’re racist, homophobic, sexist—it seems like you should be offended by that. To a large extent your reaction will be non-rational, emotional, habitual. But to the extent that you can shape your reactions (or character traits), this seems like one you’d want to keep. In addition to the positive social effects, it seems important at a personal level. The offender is disparaging your identity, your dignity, your self-worth—they’re not according you the respect you deserve as a person. How dare they!
By getting offended—and even better telling them off—you’re often reaffirming your self-respect. It’s an important, powerful moment when a wife stands up to her husband, when a gay kid stands up to bullies, when a black person calls out a bigot. When there’s so much contemporary emphasis on challenging everyday misogyny, homophobia and racism whenever it occurs, it seems strange that you would be advocating the exact opposite.
You’re only taking examples from one side. What about when the husband is offended his wife won’t sleep with him, the bullies are offended by the gay kid, and the racists by the black people moving in?
Then the husband shouldn’t rape his wife even though he’s offended, and the bullies shouldn’t assault the kid even though they’re offended, and the racists shouldn’t lynch the black people even though they’re offended.
Offense and harm aren’t the same thing. The OP conflates them senselessly.
Sorry, I’m not sure what you mean—do you mean that HaydnB was (wrongly) conflating being offended, which is not very bad, and being harmed, which is?
Katydee (the OP I meant) and you both seem to be conflating offence, a word that seems to describe a broad class of possible emotional states and responses to something (two people might as readily say “I’m offended” before respectively starting a loud, angry argument and quietly asking if it’s okay to change the topic), with the subset of offence that deals with what HaydnB was talking about.
The gay kid standing up to peer bullying, or the woman standing up to a husband who’s acting entitled about access to her body for sex for that matter, are not the same thing as the peers’ reaction to someone’s perceived homosexuality, or the husband’s assumption that his wife should put out whenever he wants. There are numerous other factors to take into account; the people bullying the gay kid aren’t harmed by queer folks existing in anything like the way the kid emself is harmed by violent physical assault. The husband feeling frustration over not getting sex on his terms alone is not harmed by this in anything like the way the woman is if he forces himself on her or even just continues to act as though her body is presumptively there for his pleasure.
All of those examples will involve very different emotions, and very different motivations. I daresay even those that take the same “sides” you’ve framed here will be quite different from each other.
HaydnB said
His examples were cases where we might want to keep the reaction. But that doesn’t mean he was talking about “objecting to harm” instead of offence, as you suggest. He was just using the most positive examples for his argument.
And I am trying to elucidate something of his likely sorting algorithm, such that the reasons for favoring one side of those cases might be a little more obvious to you, who seemed to find it suspicious he didn’t take the other side.
The reason for favouring one side of those cases is obvious. If it wasn’t, he wouldn’t have used them. However, they fail to support his point, because “offence” supports both sides of each of his cases.
Would you be fine with the compromise of “we should get offended over genuine harm”? i.e. bullying is offensive, and gay kids are not. Rape is offensive, and the wife having a low sex drive is not.
For different reasons, I think that’s not true. Lots of things hurt me that it doesn’t seem appropriate to get offended over. For example, paying income taxes, getting fired from a job, or being randomly mugged in the street. I might try to prevent these, but the psychological reaction of offence is not the appropriate one.
“Harm that is both genuine and unfair”, then? Income taxes are ‘fair’ (and I would find it baffling to call that ‘harm’ unless they somehow came as a surprise), getting fired is offensive if it’s done solely because your manager doesn’t like you, but fair (and therefore not offensive) if it’s because you failed to do the job. I think getting mugged is a good thing to get outraged about—we want to make that happen less!
I think your claims about income taxes are implausible, but won’t pursue that line of argument, as what I took to be an obvious truth is apparently political.
I might be outraged at being mugged but not offended. I think I would be more likely to be violent than either though.
You can have a debate about when offence is justified. I was making the point that in some cases it definitely is, and we shouldn’t view offence as obfuscation/manipulation or follow the principle ‘Don’t Get Offended’.
I was objecting to your assertion that being offended was in general a good reaction to keep by providing instances where it was not.
This is why living in an advanced society is highly desirable.
I will always admire the Norwegians for how they responded to Breivik’s actions.
-3!
nOw i aM oFfenDed!
I am against this, I think it is overused, but there are times when it is justified… ROLF!
No I was not calling my cousin Rolf I meant ROFL...
Question: at how many negative points do I get banned?
These aren’t so much a dichotomy as they are different descriptions of the same phenomenon, said from the perspective of a (hypothetical) ally instead of a rival.
Point taken, but I think those different perspectives are meaningful.
Indeed. I find it useful to understand just what the social power move being made is and evaluating whether to support, ignore or oppose that move on the merits of the case in question. That keyword you mentioned—‘entitled’—makes all the difference. The person claiming offense clearly believes they are entitled to more power, respect or deference. Sometimes I agree, sometimes I do not. Sometimes it is beneficial for me to support or oppose the social move and sometimes it is not.
Taboo, “racist, homophobic, sexist”. In my experience these words, especially when spoken by the offended, frequently mean “you are making an argument/stating a potential truth that I don’t like”.
For example: is it racist/sexist to point out the differences in average IQ between the people of different races/genders? Does it become racist/sexist if one attempts to speculate on the cause of these differences?
“Gay people shouldn’t marry because it will undermine the very fabric of civilization” “Women shouldn’t vote, because they don’t understand male concepts like War and Empire” “Everyone knows Irish people get drunk on St. Patrick’s day!”
This is the sort of stuff that frequently arises in the world.
I would suggest you probably live in a very filtered environment. It’s cool, most people do. I’ve been trying to re-filter my own environment. But, trust me, these things are all still alive and kicking out there. Following the news, activist blogs, or just having friends who are oppressed in their daily life and talk about it, will quickly draw this sort of racist, homophobic, sexist comments to your attention.
If you really think this qualifies as “stating an unpleasant truth” then… wow.
I don’t think frequently means ‘more than 50% of the time’, so it is possible for both of you to be right.
I’d disagree. The connotation of Eugine’s statement was to dispute HaydnB’s original point, “When someone says something offensive to you—they’re racist, homophobic, sexist—it seems like you should be offended by that.”
Is your claim that these statements are obviously false or that they’re so offensive that they shouldn’t be stated even if they’re true?
I ADBOC with the last of them (except the “everyone knows” part—my mother didn’t know what the significance of St. Paddy’s was until I told her a few years ago).
The last one should be read as “ALL” Irish people, my bad :)
BTW, this is something I’ve recently noticed—the vast majority of statements I’m offended by are of the form “All [people from some group that comprises a sizeable fraction of the human population, and doesn’t include the speaker] are [something non-tautological and unflattering].” (I am more offended if the group happens to include me, but not very much.) But remove the universal quantifier and, no matter how large the group is and how unflattering the thing is, the statement will lose almost all of its offensiveness in my eyes.
Internally I am generally the same, but I’ve come to realize that a rather sizable portion of the population has trouble distinguishing “all X are Y” and “some X are Y”, both in speaking and in listening. So if someone says “man, women can be so stupid”, I know that might well reflect the internal thought of “all women are idiots”. And equally, someone saying “all women are idiots” might just be upset because his girlfriend broke up with him for some trivial reason.
And the belief in question acts more like “some/most X are Y” than “all X are Y”, i.e., the belief mostly gets applied to X’s the person doesn’t know, when it makes sense to use the prior for X’s.
Yes, people who say “all X are Y” usually do know at least one person who happens to be an X and whom they don’t actually alieve is Y—but I think that in certain cases what’s going on is that they don’t actually alieve that person is an X, i.e. they’re internally committing a no true Scotsman. Now, I can’t remember anyone ever explicitly saying “All X are Y [they notice that I’m looking at them in an offended way] -- well, you’re not, but you’re not a ‘real’ X so you don’t count” (and if they did, I’d be tremendously offended), but I have heard things that sound very much like a self-censored version of that.
I generally avoid criticizing reasoning that reliably reaches correct conclusions.
I’m not sure what the relevance of that to my comment is.
The reasoning you described reaches valid (object level) conclusions in the different cases under consideration, but you still prefer to analyze it as full of fallacies for some reason.
Huh, no. If an argument has premises “all X are Y” and “John is an X” and conclusion “John is not Y”, it is broken. Whether the conclusion happens to be true because one of the premises is false is irrelevant.
The argument’s stated premises were “X are Y”, you decided to interpret the ambiguous statement as “all X are Y” and then complain that it makes the argument formally false.
Re-read the fifth word of this comment. (Or am I missing something?)
You may want to (re)read this comment to see/remember how this discussion started.
What exactly do you count as a self-censored version of that? Pointing out that you’re an exceptional X, that you have characteristic Z, which correlates negatively with Y, or some such thing? If so, the answer is: well, of course, what do you expect?
If people make a generic generalization along the lines of “(all) X are Y”, then naturally, you have to be an exceptional X in order not to be Y. One could say that it’s enough that you are not Y, because then you are an exceptional X in virtue of that. But that’s not how generic generalizations work. People make such generalizations usually not purely on the basis of statistical data, but because in their model, something about X causes Y (or they have a common cause). So if you’re X, but not Y, chances are you have additional characteristic Z, which is rare among Xs, and which counteracts X’s influence on Y.
It’s just like saying “dogs have four legs—well, not Fido, obviously, but he’s had an accident and one of his legs had to be amputated”. This kind of thing might sound a bit like a self-censored version of “but Fido isn’t a true dog”, but what it really says is “but Fido isn’t an ordinary dog”, which is entirely correct!
Maybe you’re aware of all this anyway, but I just thought it’d be worth pointing out.
Perhaps taboo “ordinary” and “true”?
In the context of human groups and human sub-groups, I’m not sure “ordinary” member of the group is used differently than “true” member of the group. Witness those who claim the community organizer is not “really” black because he did not live the ordinary life experiences of a black male child (i.e. he didn’t live in a poverty stricken inner city while growing up).
I’m inclined to argue, as some linguists would, that tabooing “ordinary” is impossible in this context, because people are intuitive essentialists, and that generic statements make reference to such postulated essences, which define what makes for an “ordinary” X. (Hence a lot of Aristotelian nonsense.)
This does, indeed, fit very well with your observation—with which I agree—that sometimes, the borderline between “ordinary” and “true/real” becomes blurred. However, I think one should still be wary of suspecting mentions of “extraordinary” of being censored no-true-Scotsman-arguments without further evidence.
Obviously false. I just stated them, so they’re not de-facto offensive; they’re offensive when you assert such an obvious falsehood as TRUE.
Can I hear the evidence that caused you to assign such low probability to them?
It depends on what relevance it has, and on what is being left out. Someone once told me that GW Bush must be smarter than Obama because he is white. That’s an intellectual fallacy even if it isn’t boo-word racism.
In my experience, references to “human biodiversity” are frequently presented as if they are value neutral, but frequently aren’t because of the factors mentioned above.
The way I’d use the word, it depends on why you’re pointing them out. (Hint: if someone is pointing out that white people are more intelligent than black people on average for non-army1987::racist reasons, they’d most likely point out that East Asians and Ashkenazi Jews are even more intelligent on average.)
The wording is also important—“blacks are idiots” is no more of a reasonable way to put that than “females are midgets” is a reasonable way to state the fact that the average woman is shorter than the average man, so if someone is willing to say the former but not the latter, there’s likely something wrong.
(BTW, AFAIK men and women have the same average IQ (though different types of intelligence are weighted in a way deliberately chosen to make that the case), but the distribution of men’s IQs has a larger standard deviation.)
Yes and yes. We live in a world where people disregard qualifiers, so if you say “on tests of mathematical ability, men have higher variance in test scores, so the most talented mathematicians are disproportionately men” people will hear “men are better at math” and assume that average men are better than average women at math (this might also be true, but is not what you said). Basically, some people don’t distinguish between “most a are b” and “most b are a”, so you end up with people drawing conclusions that hurt other people with no real benefit. So as a general rule, we pretend that there are no between-group differences because if we don’t, people have a tendency to focus exclusively on between group differences and ignore within-group differences, which is worse.
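Since the step from “same mean, higher variance” to “disproportionate representation in the far tail” trips people up, here is a minimal sketch of that purely statistical point; the means, standard deviations, and cutoff below are made-up illustrative numbers, not real test data.

```python
# Minimal sketch (illustrative numbers only): two groups with the SAME mean
# but different spread end up heavily lopsided in the far right tail, even
# though nothing about the averages differs.
from scipy.stats import norm

mean = 100.0
sd_a, sd_b = 16.0, 14.5      # hypothetical standard deviations for groups A and B

cutoff = 145.0               # arbitrary "exceptional talent" threshold
tail_a = norm.sf(cutoff, loc=mean, scale=sd_a)   # P(score > cutoff) in group A
tail_b = norm.sf(cutoff, loc=mean, scale=sd_b)   # P(score > cutoff) in group B

print(f"P(score > {cutoff:.0f}), group A: {tail_a:.5f}")
print(f"P(score > {cutoff:.0f}), group B: {tail_b:.5f}")
print(f"Tail ratio A/B: {tail_a / tail_b:.1f}")  # well above 1 despite identical means
```

The point of the sketch is only that a lopsided tail ratio says nothing about which group is better “on average”; that is exactly the inference people tend to draw anyway.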
I could make a similar argument about a lot of things we do here, e.g., people hear “consequentialism” and think “the ends justify the means”, but that doesn’t stop LW from promoting consequentialism.
Intentionally believing false things always carries a cost.
For example, suppose I want to hire the best mathematicians for a project; they’ll likely be disproportionately White or Asian men. Someone who followed your advice, looking at the mathematicians I hire, would conclude that I was racist and sexist in my hiring, and we live in a society where the courts might very well back them. Thus the only way for me to avoid being considered racist and sexist is to intentionally fudge the numbers based on race and sex, which itself requires that I know the truth about racial and gender differences so I know which way to fudge.
Nope, and some people will express disapproval of LWers who promote consequentialism. Being right doesn’t make you immune to social stigma.
Yes, it does. So does unintentionally believing false things. This is definitely not a one-sided issue, as much as people like to pretend that it is. Anti-discrimination policies reduce one cost at the expense of raising another.
In the case that you both want to hire and are able to hire exceptional mathematicians, anti-discrimination policies are likely to hurt both parties involved. (In theory, laws regarding disparate impact wouldn’t actually affect you if you were hiring based on demonstrable mathematical prowess, but in practice business necessity would be hard to prove). The mathematicians are actually likely to be hurt considerably more, because without anti-discrimination policies, they would probably be in higher demand and thus able to ask for much higher pay.
The real problem comes in when employers decide that they need exceptional people but can’t actually identify these exceptional people. If filtering based on race was allowed, employers would use that (the best mathematicians are disproportionately white and asian, therefore if I hire a white or asian I’ll get an above-average mathematician).
Basically, you’re right except for the problem where humans mix up p(a|b) and p(b|a), which causes people to do stupid things (most of the people who win the lottery buy lots of tickets, so if I buy lots of tickets I’m likely to win the lottery). If you actually know what you’re hiring based on, anti-discrimination policies will prevent you from having 100% of your workforce be the very best, but even if only whites and asians had the required skills, you’re still looking at 77% of the population in the US, so it falls in the category of “annoyance” not “business killer”. In terms of fudging, you can detect statistically significant deviations just as well as someone looking at your hiring data. You don’t need to know beforehand.
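To spell out the p(a|b) vs. p(b|a) point with numbers (all of them invented for illustration, not actual statistics):

    # Made-up illustrative numbers: suppose 77% of a population is in group G,
    # and suppose members of G are twice as likely as non-members to be in the
    # top sliver of some skill. None of these figures are real statistics.
    p_g = 0.77
    p_top_given_g = 0.002      # hypothetical
    p_top_given_not_g = 0.001  # hypothetical

    p_top = p_top_given_g * p_g + p_top_given_not_g * (1 - p_g)

    # Bayes' theorem: P(G | top) = P(top | G) * P(G) / P(top)
    p_g_given_top = p_top_given_g * p_g / p_top

    print(f"P(top | G) = {p_top_given_g:.4f}")   # tiny: a random G member is almost
                                                 # certainly not a top performer
    print(f"P(G | top) = {p_g_given_top:.4f}")   # large: most top performers are in G
    # Confusing these two is the lottery mistake: most winners bought many tickets,
    # but buying many tickets still leaves you overwhelmingly likely to lose.
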
Of course, if these things weren’t the case you’d still face social stigma for saying anything that sounds vaguely racist. Because while these two societal tendencies have strong effects in opposite directions, they’re not there by virtue of reasoned argument, and so removing one but not the other is likely to cause more harm than good (probably, I have no idea how one would go about removing either societal tendency to test that hypothesis). If both tendencies could be eliminated, that would be best, and here you probably can talk about it without much social stigma, but if you ask those questions in everyday life, you will be labeled as a racist.
When does that occur? What happened to résumés, qualifications, and tests?
The difference is that if you unintentionally believe something false, you can update when you find new evidence; whereas once you start intentionally believing false things, you’ve declared all truth your enemy.
Depends on the size of the business and your margin. Most small businesses can’t afford to have 23% of their employees be dead weight, especially if they have to pay them the same as the others to avoid looking like they have racist pay policies.
Most small businesses don’t need to hire the top 0.01% in any given skillset. The small businesses that do need to hire that exclusively and the small businesses that are strapped for cash are generally two distinct sets. In any case, without those policies, the top 0.01% could demand more money, and so the business wouldn’t be in a much better position. It’s really the top 0.01% of workers who bear the majority of the cost of anti-discrimination policies, because they could negotiate better pay if the policies weren’t in place.
It is a tradeoff. Empirically, societies that oppose discrimination tend to do better (though there are obvious confounds and this doesn’t necessarily mean that the anti-discrimination policies improve outcomes—it may just mean that richer people prefer egalitarian policies more). In American culture, at least, you will generally be labeled as a racist if you imply that there might be between-group differences, whether or not you can back that up with good arguments.
By all means, keep in mind that the social fiction of perfect equality in ability across groups is unlikely to be true. But also keep in mind that it’s a polite fiction and you will be stigmatized if you point out that it’s unlikely to be true. The term “racist” usually refers to someone who doesn’t respect that social convention, and both of the statements you were questioning go against that social norm. “Racist” doesn’t mean “factually incorrect”, it means “low status and icky”.
The same logic applies if you want to hire people in the top 10%. Yes, there may very well be enough blacks in the top 10% that if you had first choice among them you could hire enough to comply with disparate impact. However, in reality you’re competing for the few blacks in the top 10% with all the other businesses who also need to hire from the top 10%, and there aren’t enough to go around.
Yes and at LW our goal is to raise the sanity waterline.
Yes, it is.
How about also considering the costs, benefits, and comparative advantages when dealing with various topics? One does not get extra points for doing things the hard way. Instead of dealing with some topics directly, it would be better to go more meta, e.g. to teach people about the necessity of doing experiments and evaluating data statistically. This will prepare the way for people who will later try to deal with the problem more directly.
Now it may seem that when I see people making a mistake, and I don’t immediately jump in and correct them, it is as if I lied by omission. But there are thousands of mistakes humans make, and my resources are limited, so I will end up ignoring some mistakes either way.
Make sure you pick your battles because you believe you can win them and the gains will be worth it, not because picking the most difficult battle there is feels high-status… until you lose it.
It’s worth noting that many people also ignore the smallness of effects. It probably doesn’t end up mattering much, not worth arguing...
Exqueeze me, but since when did “not white or asian” equate to “dead weight”?
Not all of them, it’s just that there aren’t enough non-dead weight non-white non-asians to go around for all the businesses who need competent employees while complying with disparate impact.
So much for “23%”.
How do you know? Not every business is a Silicon Valley startup that needs to be staffed almost entirely by super-smart people. The typical company is much more pyramidal. A lot of employers want a lot of employees who will happily work for the minimum wage.
Whatever that means. If you think US affirmative action, or something, is the issue, then it cancels out within the US. If you think it makes the US less competitive than polities that don’t have AA, then that’s only part of a bigger problem, because, given your assumptions, the US would be at a severe disadvantage compared to any given Asian nation anyway. But it doesn’t appear to be, so maybe factors other than DNA are important. Who knows? We can only try to deduce what you might be saying from your hints and allegations.
Why, is business an entirely zero-sum game within the US?
Some indices of business performance are, such as relative rank.
Is the False Thing “people are equal” or “it is best for society to carry on as though people are equal”?
The thing is, society doesn’t “carry on as though people are equal”. Society, or at least the more functional parts of society, treats things like affirmative action and disparate impact as things to be routed around as much as possible, because that’s necessary to get things done efficiently.
It would have been helpful to answer the question as stated. Not all societies have affirmative action, and my polity doesn’t. Depending on one’s background assumptions, affirmative action could be seen as restoring equality, or creating inequality. You seem to have assumed a take on that without arguing it. It would have been helpful to argue it, and not to treat “society” as synonymous with “US society”.
Ironically this is a case where p(a|b) is in fact a good proxy for p(b|a), and the kind of filtering you’re objecting to is in fact the correct thing to do from a Bayesian perspective.
See also: Offended by conditional probability
“The best mathematicians are disproportionately white and asian, therefore if I hire a white or asian I’ll get an above-average mathematician” is Bayesianly correct if the race is the only thing you know about the candidates; but it isn’t (a randomly-chosen white or Asian person is very unlikely to be a decent mathematician), and the other information you have about the candidates most likely mostly screens off the information that race gives you about maths skills.
Read the comment I linked to and possibly subsequent discussion if you’re interested in these things.
Hmm, so E(the Math SAT score that X deserves|the Math SAT score that X got is 800, and X is male) is just 4 points more than E(the Math SAT score that X deserves|the Math SAT score that X got is 800, and X is female). That doesn’t sound like terribly much to me, and I’d guess there are plenty of people who, due to corrupted mindware and stuff, would treat a male who got 800 better than a female who got 800 by a much greater extent than justified by that 4-point difference in the Bayesian posterior expected values. (Cf the person who told whowhowho that Obama must be dumber than Bush—surely we know much more about them than their races?)
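For anyone curious where a figure like that could come from, here’s a minimal sketch of the usual normal-normal shrinkage calculation; the group means, spread, and measurement noise below are made up purely for illustration and are not the real SAT parameters:

    def posterior_mean(observed, group_mean, sd_true, sd_noise):
        """E[true score | observed score] under a normal prior and normal noise."""
        reliability = sd_true**2 / (sd_true**2 + sd_noise**2)
        return reliability * observed + (1 - reliability) * group_mean

    # Hypothetical numbers purely for illustration.
    sd_true, sd_noise = 100.0, 30.0
    mean_group_1, mean_group_2 = 530.0, 500.0   # a made-up 30-point gap in group means

    e1 = posterior_mean(800, mean_group_1, sd_true, sd_noise)
    e2 = posterior_mean(800, mean_group_2, sd_true, sd_noise)

    print(e1 - e2)  # about 2.5 points with these made-up numbers: the group prior
                    # barely moves the posterior, because the observed score screens
                    # off most of the information that group membership carries.

The point is just that when the measurement is much more informative than group membership, the posterior gap shrinks to a small fraction of the prior gap.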
I’m not sure if this is correct, but given how prominent politicians are surrounded by spin doctors and other image manipulators, I sometimes wonder how much we really know about them, especially when the politician in question is new, so you can’t look at his record.
99% of projects do not need the top 1%. More than 1% of the world is racist.
Why should I believe you actually need the top 1%, when the statistics say there’s a greater than 50% chance that you’re actually just racist?
My conclusion still holds if you simply need mathematicians in the top 10%, for example, only the analysis is slightly more complicated.
Also, taboo “racist” unless you agree with faul_sname’s definition, in which case whether being a “racist” is a bad thing is precisely the question under discussion.
So you agree that, in the original example, you’re more likely than not just being a racist? Because you certainly seem to be moving the goal post over to “top 10%” …
That link does not appear to point to a definition.
You still haven’t defined what you mean by “racist”.
Racism has three definitions:
1) The belief that there are inherent (read: genetic) differences between races which give rise to behavioral differences.
2) The belief that different races have different worth and/or ought to be treated differently because of these differences.
3) An actual act of treating a race differently which stems from explicit or implicit negative opinions about that race.
Sexism mostly lies in the domain of (2) and (3), with (1) often seeming like a gray area because believing (1) almost always implies (2) or (3).
So you would be racist (1) if you proposed that the IQ differences are genetic.
The reason people say “you are being racist” is because people often implicitly do (3) and implicitly believe (1) and (2) without explicitly stating the belief. The intent behind telling someone they are racist is to make the underlying belief explicit.
The moral connotations of being racist/sexist continue to be implicitly bad or wrong. So now, if the person wishes to continue justifying the initial belief, they have to defend the moral good or factual correctness of certain types of racism / sexism.
To summarize the point: For the majority of individuals in your culture, System 1 is racist/sexist while System 2 believes racism and sexism are bad. The intent of saying “statement x is racist” is to initiate a shift to system 2.
You didn’t state your views, but if your system 2 holds some racist/sexist beliefs as well (as in, you actually think racial IQ differences are genetic), then you would misinterpret “you are racist” as being analogous to “I don’t like your argument”. What’s really happening is that the person you are arguing with believes that your racism is coming out of system 1, and wants to notify system 2 of that fact.
(I know this is a bit of an abuse of dual process theory and a horrible oversimplification even otherwise but I’m trying to be at least somewhat succinct—apologies)
The problem is that if someone’s system 2 does hold the belief that “racism/sexism is bad”, this causes them to evaluate arguments related to race/sex differences on the basis of trying to avoid being racist/sexist rather than on the merits of the argument. A lot of people (especially around here) also hold as a system 2 belief that arguments should be evaluated on their merits. My point in asking the question is to help people notice that these two system 2 beliefs are in conflict.
You are quite right. That’s why it is important to separate the various meanings behind racism and sexism.
For example. I spent the better part of high school researching intelligence and the factors that contribute to it, including race. I’ve given serious consideration to the idea that genetic racial differences in behavior might exist, and extensive research has given me a high confidence that they do not.
However, if I had concluded that racial differences did exist, then I would be a racist[1], but I would probably continue to believe that racism[2] and racism[3] are wrong.
Also, I think it is fair to say that I currently am “sexist”[1] but not sexist [2, 3] - that is, I do believe there are behavioral differences between men and women that are genetic in origin, but I do not believe that this means that I want women to have a different set of rights and privileges, nor do I believe that they are inferior.
That’s because group [1] is a statement about reality, whereas [2] and [3] have moral connotations. I think it is bad to be racist [2] or racist [3]. I consider racism [1] to simply be a misguided opinion which arises when a person does insufficient research into the topic. I don’t consider racism[1] to be immoral, and might become racist [1] if someone gave me sufficient evidence to accept that hypothesis. Similarly, I am sexist [1] but I think it is wrong to be sexist [2] or [3], and I might stop being sexist[1] given sufficient evidence.
In short, moral attitudes towards racism/sexism [2, 3] need not interfere with epistemic stances on racism/sexism [1], even though they unfortunately often do.
Edit: if you intend to argue the point we can, but it will be a separate discussion unrelated to rationality. The most salient pieces of evidence that settled the issue for me are 1) various adoption / mixed race studies and 2) a genetic analysis indicating that the percentage of European heritage is unrelated to IQ in African Americans. I think the mistake that most amateur researchers make on this topic is not taking maternal factors (in the womb, breastfeeding, etc) into account.
It seems odd to attribute a false belief to insufficient research. Not false, exactly, but odd… like attributing the continued progression of an illness to insufficient medication. If X is a popular false belief, it seems there ought to be something to be said about why X is popular, just like there’s something to be said about why an illness progresses.
Ah, let me clarify.
Doing a little bit of research will lead you to be fairly confident that racial differences are genetic, because the differences 1) do exist and 2) cannot be explained by sociological factors alone. Most people assume that if it is not sociological, it is genetic.
However, if you do a lot of research, which means taking into account maternal factors in the womb, epigenetics, nutrition...and if you further spend time researching how IQ tests work and what contributes to high IQ in general (not just with race), your confidence that racial differences are genetic will drop steeply.
It just happens to be a topic where the first impression upon reading the literature has a particular tendency to lead you to a wrong conclusion.
Ah, I see! “Does insufficient research” != “fails to do sufficient research” in this context.
Neat. Sometimes it’s a miracle we communicate at all.
Thanks for the clarification.
I suspect that a lot of people also come to racism[1] without doing any research at all, but I don’t disagree with anything you say here.
True, but those people don’t generally end up at lesswrong (I hope!)
by “insufficient research” I was trying to convey the difference between cursory research and in depth research. Am I using the word incorrectly? / is there a better fitting word that describes this?
Edit: ooh, you thought I meant “insufficient research” to mean that any amount of research would have helped, hence the analogy to diseases and medicine: medicines do not cause disease, they cure it. Whereas I actually am saying that in this case, too little “medicine” can cause the disease. Got it :)
No, I meant—reads edit—right.
Hmmm.....
That depends on what you mean by “any research at all”. I suspect most people who come to racism do so via the logic I mentioned in this comment.
Just to clarify the claim, because language can be slippery… if we chose humans at random until we found 1000 who believe whites are superior to blacks, and we looked at their history, I expect the majority of them came to that position prior to reviewing empirical correlations between race and IQ among a statistically significant population. I understand you to be saying that you expect the majority came to that position only after reviewing empirical correlations between race and IQ among a statistically significant population, either personally or through reading the reports of others.
Have I understood you correctly?
Wait… I took “come to racism” to refer to people who used to be non-racist[1], but become racist[1] as adults. OTOH, many (most?) randomly-chosen racists[1] probably have been so ever since they’ve had any opinion either way on the matter, which they probably uncritically absorbed from their sociocultural environment while growing up and have had it cached ever since. These two groups of racists[1] are probably very different (just like you wouldn’t expect converts to Islam to be representative of Muslims in general—would you?); in particular, I suspect that most racists are the way you describe here, but most “converts to racism” are the way Eugine_Nier says. (See also “Intellectual Hipsters and Meta-Contrarianism” by Yvain.)
Ah!
Yeah, with that unpacking, I find the claim much more plausible.
Yeah, that’s my expectation.
No doubt.
I find that much more plausible than the claim that most racists[1] are the way Eugine_Nier says.
I’m not sure I believe it even so (as compared to, say, converting to racism after a traumatic experience with a member of race X), but at this point I’m just telling just-so stories about hypothetical people I don’t have much experience with, so I don’t put much weight in my own intuitions.
We can get into debates about what constitutes “statistically significant” but yeah I suspect most of the racists[1] around today came to that conclusion after reviewing correlations between race and intelligence (and related behaviors) in most cases from their own experience using their system I.
OK, thanks for clarifying.
For my own part, most of the people I’ve met personally whom I’ve identified as racist[1] with regards to white and black people have not met very many black people at all, so I doubt that’s true of them for any reasonable standard of statistical significance (1).
But of course the racists I’ve knowingly met might not be representative of racists more generally.
(1) Many were also racist[1] with regards to the superiority of whites to other non-white races, such as Native Americans and Asians, as well as with regards to the superiority of “whites” to other identifiable subcultures that include Caucasians, such as gays and Jews. All of which contributes to my sense that they are not arriving at their beliefs based on observation at all.
The south (at least during Jim Crow) wasn’t nearly as segregated as the north in terms of where people lived, so white southerners had many occasions to observe their black neighbors.
In fact it’s not at all hard to notice the correlation between, say, race and a lot of behavioral traits; for example, the black neighborhood is the one where you’re more likely to get mugged. I’m not sure about Asians; as for Jews, is their complaint that Jews are stupid or that they’re secretly running the world?
It wasn’t that Jews are stupid. Mostly it seemed to be that Jews are evil, which I suppose one could argue isn’t a question of superiority at all, though it sure felt like one. I actually haven’t run into the secret-world-domination thing in person very often at all, though I’m of course acquainted with the trope.
And sure, I’m perfectly willing to believe that the south during Jim Crow was less geographically segregated than the north, and thus provided more opportunities for inter-group observation.
That’s my point. Their complaints about different out-groups are limited by what their system Is would find plausible.
Just to make sure I understand your claim: as I understand it, you would predict that if we raised the people I’m referring to in an environment where “Jews are stupid” was (perhaps artificially) a prevailing social belief, they would tend to reject that belief as they came to observe Jews, because their system Is would find that belief implausible, because Jews are not in fact stupid (relative to people-like-them, as a class). But if we raised them in an environment where “blacks are stupid” was a prevailing social belief, they would not tend to reject that belief as they came to observe blacks, because their system Is would find that belief plausible, because blacks are in fact stupid (relative to people-like-them, as a class).
Yes?
Would you also expect that if we raised them in an environment where “Jews are evil” was a prevailing social belief, they would not reject that belief as they came to observe Jews, because their system Is would find that belief plausible, because Jews are in fact evil (relative to people-like-them, as a class)? Or does the principle not generalize like that?
This is basically correct.
As for Jews, I’m not sure they know many Jews, but they’ve probably noticed that a lot of Jews are in high positions in Academia, Finance and Politics. This is inconsistent with them being stupid but not with them being evil.
For all that such people know, Jews might be conspiring to help each other into high positions even though they aren’t unusually smart compared to gentiles.
What you describe is more or less the standard negative stereotype of Jews (basically being Slytherins), and in any case what you describe is closer to the common notion of ‘evil’ than ‘stupid’.
Again: you are observing correlations between socio-economic status and behaviour, and socio-economic status happens to coincide with race in the US. African nations are not inhabited by legions of muggers all mugging each other, and there is no gene for mugging.
Not specifically. There are certainly genes for aggression, impulse control, empathy, violence and sociopathy in general. I make no claims about the distribution thereof by race but this (connoted) argument is terrible. For the intents and purposes used here yes, there are ‘genes for mugging’.
Except that poor white neighborhoods are much safer than poor black neighborhoods.
Um, now that you mention it, this is not a bad description of the politics of a number of African nations.
An intrinsic relation between race and social behavior is in the realm of possibility, but there are highly relevant social factors to take into account here even when you’ve adjusted for economic status. In low income black neighborhoods, law enforcement tends to adopt a much more adversarial relationship with the population than in white neighborhoods, such that black people are much more likely to be arrested and convicted relative to their actual crime rates, and are subject to frequent stops and searches on extremely tenuous bases. Speaking for myself, I suspect I’d have much less respect for the law if I grew up in an environment that reinforced the impression that law enforcement was out to get me from the start.
Indeed, central/southern Italy is not particularly genetically diverse AFAIK and yet certain cities are safer than others by probably several orders of magnitude, for all kinds of reasons.
Or that even successful instances of law enforcement tend to get shut down by self-proclaimed anti-racists. Or the fact that most blacks are raised by single mothers.
This is not the only cause. The problem is that it’s considered taboo to propose any explanation for the difference whether genetic or cultural that doesn’t pin the blame entirely on white “racism”.
That is a problem, but there is in fact quite a lot of racism, such that it does indeed account for quite a lot of problems.
While there are some parts of the book I take issue with (and I suspect you’d take issue with even more), you might want to take a look at this book for a lot of figures on “proactive policing” genuinely resulting in a relative arrest rate highly disproportionate to the crime rate.
As a metaphor, legions of muggers almost fits somewhere dysfunctional like Somalia. But legion of muggers is a metaphor, not an accurate description of warlord-ism. And anyway, Somalia is hardly representative of Africa in general.
Also DR Congo, Zimbabwe, to name two of the more well-known examples.
...in the US.
It’s not at all good. A few rich people exploiting a lot of poor ones is not the same as a few poor people robbing a few wealthier ones. And it is not as if the politics of most African countries now is so very different from the politics of most European ones up until a few centuries ago; there’s no gene for fair government either.
That’s barely half an argument. You would need to believe that there are between-group differences AND that they are significant AND that they should be relevant to policy or decision making in some way. You didn’t argue the second two points there, and you haven’t elsewhere.
I’m with you on the first two, but if the trait is interesting enough to talk about (intelligence, competence, or whatever), isn’t that enough for consideration in policy making? If it isn’t worth considering in making policy, why are we talking about the trait?
Politics isn’t a value-free reflection of nature. The disvalue of reflecting a fact politically might outweigh the value. For instance, people aren’t the same in their political judgement, but everyone gets one vote.
So if we don’t base our politics on facts, what should we base it on? This isn’t a purely rhetorical question, I can think of several ways to answer it (each of which also has other implications) and am curious what your answer is.
As for your example, that’s because one-man-one-vote is a more workable Schelling point since otherwise you have the problem of who decides which people have better political judgement.
You include a copy of the Cognitive Reflection Test or similar in each ballot and weight votes by the number of correct answers to the test.
(This idea isn’t original to me, BTW—but I can’t recall anyone expressing it on the public Internet at the moment.)
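Mechanically, the proposal is trivial to implement; here’s a toy sketch (the candidates, weights, and three-question format are invented, and this says nothing about whether the scheme is a good idea):

    from collections import defaultdict

    def tally(ballots):
        """Each ballot is (chosen_candidate, number_of_test_questions_answered_correctly).
        A ballot's weight is simply its number of correct answers."""
        totals = defaultdict(int)
        for candidate, correct_answers in ballots:
            totals[candidate] += correct_answers
        return dict(totals)

    # Hypothetical ballots: three CRT-style questions, so weights range from 0 to 3.
    ballots = [("Blue", 3), ("Green", 1), ("Blue", 0), ("Green", 2), ("Blue", 2)]
    print(tally(ballots))   # {'Blue': 5, 'Green': 3}

Note that under this literal reading a ballot with zero correct answers counts for nothing, which is one of the obvious objections.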
This doesn’t quite solve the Schelling point problem. You start getting questions about why that particular test and not some other. You will also get problems related to Goodhart’s law.
Well… People might ask that about (say) university admission tests, and yet in practice very few do so with a straight face. (OTOH, more people consider voting a sacrosanct right than studying.)
ETA: now that I think about that, this might be way more problematic in a country less culturally homogeneous than mine—I’m now reminded of complaints in the US that the SAT is culturally biased.
Keeping the choice of questions secret until the election ought to mitigate that.
Also, in the US the SAT is only one of the factors affecting admissions.
Only partially. Also, what about the people who design the questions?
High-stakes testing like the SAT, where voters (I mean, test-takers) have vastly more incentive to cheat, seems to do fine.
Come to think of it, the problem is that the people designing the SATs have fewer incentives to bias them than the people designing the election tests.
I was arguing against basing policy on (narrowly construed) facts alone.
This is a purely terminological point: A substantial percentage of the folks in this forum think moral propositions are a kind of fact. I think they are wrong, but my usage (moral values are not empirical facts) is an idiosyncratic usage in this venue.
In short, I’m not sure if you are disagreeing with the local consensus, or simply using a different vocabulary. Until you and your interlocutors are using the same vocabulary, continuing disagreement is unlikely to be productive.
In short, I think basically everyone agrees that public policy is the product of the combination of scientific fact (including historical fact and sociological fact) and moral values. But because of disagreements on the meta-ethical and philosophy of science level, there is widespread disagreement on what my applause light sentence means in practice.
Well, I did say “narrowly construed” facts.
Your post is very susceptible to the construction:
You could object that this is not a charitable reading. But in the context of this discussion, it is hard to tell how to read you charitably while ensuring that you would still endorse the interpretation.
I don’t see why anyone would read “not on facts alone” as “not on facts at all”.
You didn’t define what you mean by “narrowly construed” facts, but from context it seems like you’re saying “I don’t like these particular facts, therefore I want an excuse to ignore them.”
I will point out, for a third time, that “not on (narrowly construed) facts alone” does not mean “not on facts at all”.
In that case the correct response is to present the relevant additional facts, not to attempt to suppress the facts that are too “narrowly construed”.
“if we implement such-and-such policies, people will riot” is a fact of a sort, but not the sort that is discovered in a laboratory.
Then where did you get the evidence to assert it with such high confidence? (This isn’t meant to be a rhetorical question.)
Also, is this really the best example you could come up with? The problem with this example is that even if the fact in question is true, there are still good game theoretic/decision theoretic reasons not to respond to blackmail.
I am glad that the tyrants of the past did not know of them, or you and I would not now enjoy freedom and democracy.
Yes, and I’m also glad Hitler’s megalomania interfered with the effectiveness of the German army.
Are you also glad about what Eisenhower did when he sent the National Guard to enforce integration?
[shrugs]. You construed riots in a sweepingly negative way as “blackmail”. The fact that I do not agree does not mean I am construing them in a sweepingly positive way. This is as a pattern you have repeated throughout this discussion, and it illustrates how politics mindkills.
If a policy is good, a riot against it is blackmail. If a policy is bad, you shouldn’t be pursuing it riot or no riot. Thus the hypothetical existence of riots shouldn’t affect which policies one pursues. Frankly, I have hard time believing “leading to riots” is your true rejection of the policies in question.
That is a dangerous belief for a leader to hold. I’d prefer leaders that don’t have that belief. In fact it should be taken as granted that leaders who do not respond to the expectation that the people will oppose their actions will be killed or otherwise rendered harmless through whichever actions are suitable to the political environment.
history
If you want social science to be taken seriously, you do your cause a disservice by asserting social science is different in kind from so-called “hard science.”
Edit: In fact, Eugine_Nier’s argument here is that social science is not rigorous enough to be worth considering. You don’t advance true belief by asserting that social science does not need to be rigorous.
And just in case it isn’t clear, the ability to replicate an experiment is not required for a scientific field to be rigorous. (Just look at astrophysics: it isn’t like we can cue up a supernova on command to test our hypotheses). It is preferable, but not necessary.
No, my argument was that much of modern social science (and especially modern anthropology which is bad even by social science standards) is more concerned with politics than truth. See here for JulianMorrison practically admitting as much and then trying to argue that this is a good thing. And quite frankly the tone of your comment is also not encouraging in that respect.
I think that’s a highly disingenuous reading of JulianMorrison’s statement. JulianMorrison never stated that it was a good thing, only that it was a necessary thing in the face of political realities. In an evolutionary environment where only the Dark Arts are capable of surviving, would you rather win or die?
Essentially, we all need to remember that speaking the truth has a variable utility cost that depends on environment. If the perceived utility of speaking the truth publicly is negative, then you invoke the Bayesian Conspiracy and don’t speak the truth except in private.
In this post JulianMorrison was, at least partially, trying to inform you that there is in fact something like a Bayesian Conspiracy within the Social Sciences—that there are social truths that are understood from within the discipline (or at least, from within parts of the discipline) that can’t be discussed with outsiders, because non-rational people will use the knowledge in ways with a highly negative net utility. He was also trying to test you to see if you could be trusted with initiation into that Bayesian Conspiracy. (You failed the test, btw—which is something you might realize with pride or chagrin, depending on your political allegiances.)
I don’t think I’d identify the activist subculture with the social sciences, at least in the case JulianMorrison was talking about. If there’s an academic community whose members publish relatively unfiltered research within their fields but don’t usually talk to the public unless they are also activists, and also an activist community whose members are much more interested in spreading the word but aren’t always too interested in spreading up-to-date science (charitably, because they believe some avenues of research to be suffering from bias or otherwise suspect), then we get the same results without having to invoke a conspiracy. This also has the advantage of explaining why it’s possible to read about ostensibly forbidden social truths by, e.g., querying the right Wikipedia page.
Whether this accurately models any particular controversial subject is probably best left as an exercise.
Hold on a moment. I think the labels are accurate descriptions of the phenomena. There’s hostility to this kind of discussion, so sometimes the only winning move is not to play. But if the labels (heteronormativity, privilege, social construction, rape culture) are not describing social phenomena, then we should find accurate labels.
And if experts use the labels right, but [Edit: sympathetic] laypeople do not, then we should chide the laypeople until they use them right. Agreement with my preferred policies does not make you wise, because arguments are not soldiers.
In short, I think I win on the merits, so let’s not get caught up in procedural machinations.
That assumes that we have sufficient status that our chiding the laypeople will win. The problem with social phenomena is that discussions about social phenomena are themselves social phenomena, so your statements have social cost that may be independent of their truth value. If you want to rationally strive towards maximum utility, you need to recognize and deal with the utility costs inherent in discussing facts with agents whose strategies involve manipulating consensus, and who themselves may not care as much about avoiding the Dark Arts as you seem to.
Secondly:
I currently tend to believe that these are somewhat accurate labels—that is, they accurately define semantic boundaries around phenomena that do in fact exist, and that we do in fact have some actual understanding of. But if your audience sees them as fighting words, then they will see your arguments as soldiers. If you want to have a rational discussion about this, you need to be able to identify who else is willing to have a rational discussion about this, and at what level. Remember that on lesswrong, signaling rationality is a status move, so just because someone displays signals that indicate rationality doesn’t mean that they are in fact rational about a particular subject, especially a political one.
Ah. I see all my comments everywhere on the site are getting voted down again. Politics is the mind-killer, indeed.
Ok, serious question, folks:
What would it take to negotiate a truce on lesswrong, such that people could have differing opinions about what is or isn’t appropriate social utility maximization without getting into petty karma wars with each other?
Ah. This got downvoted too. Is there any way for me to stop this death-spiral and flag for empathy? Please?
Mercy? Uncle?
I endorse interpreting net downvotes as information: specifically, the information that more people want less contributions like whatever’s being downvoted than want more contributions like it.
I can then either ignore that stated preference and keep contributing what I want to contribute (and accept any resulting downvotes as ongoing confirmation of the above), or I can conform to that stated preference. I typically do the latter but I endorse the former in some cases.
The notion of a “truce” whereby I get to contribute whatever I choose and other people don’t use the voting mechanism to express their judgments of it doesn’t quite make sense to me.
All of that said, I agree with you that there exist various social patterns to which labels have been attached in popular culture, where those labels are shibboleths in certain subcultures and anti-shibboleths (“fighting words,” as you put it) in others. I find that if I want to have a useful discussion about those patterns within those subcultures, I often do best to not use those labels.
Except your interpretation is at least partially wrong: people mass-downvote comments based on author, so there is no information about the quality of a particular post (it’s more like (S)HE IS A WITCH!). A better theory is that karma is some sort of noisy average between what you said, ‘internet microaggression,’ and probably some other things; there are no globally enforced usage guidelines for karma.
I personally ignore karma. I generally write two types of posts: technical posts, and posts on which there should be no consensus. For the former, almost no one here is qualified to downvote me. For the latter, if people downvote me, it’s about the social group not correctness.
There are plenty of things to learn on lesswrong, but almost nothing from the karma system.
Oh, I completely agree that the reality is a noisy average as you describe. That said, for someone with the goals ialdabaoth describes themselves as having, I continue to endorse the interpretation strategy I describe. (By contrast, for someone with the goal-structure you describe, ignoring karma is a fine strategy.)
Huh. Are “Do you think it likely that ‘social activism’ and ‘liberalism’ are fighting words in this board’s culture?” fighting words in this board’s culture?
Depends on how they’re used, but yes, there are many contexts where I would probably avoid using those words here and instead state what I mean by them. Why do you ask?
Edit: the question got edited after I answered it into something not-quite-grammatical, so I should perhaps clarify that the words I’m referring to here are ‘social activism’ and ‘liberalism’ .
Because I want to discuss and analyze my beliefs openly, but I don’t want to lose social status on this site if I don’t have to.
A deeper observation and question: I appear to be stupid at the moment. Where can I go to learn to be less socially stupid on this site?
One approach is to identify high-status contributors and look for systematic differences between your way of expressing yourself and theirs, then experiment with adopting theirs.
Ya lol works awsome, look at my awsome bla-bla (lol!) ye mighty, and despair.
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare
The lone and level sands stretch far away.
Alas, you and I are not in the same league as Yvain, TheOtherDave, fubarobfusco, or Jack.
Be that as it may (or mayn’t), that’s a clever way of making the intended message more palatable, including yourself in the deprecation. But you’re right. Aren’t we all pathetic, eh?
Look at your most upvoted contributions (ETA: or better, look at contributions with a positive score in general—see replies to this comment). Look at your most downvoted contributions. Compare and contrast.
Most downvoted, yes, but on the positive side I’d instead suggest looking at your comments one or two sigma east of average and no higher: they’re likely to be more reproducible. If they’re anything like mine, your most highly upvoted posts are probably high risk/high reward type comments—jokes, cultural criticism, pithy Deep Wisdom—and it’ll probably be a lot harder to identify and cultivate what made them successful.
A refinement of this is to look at the pattern of votes around the contributions as well, if they are comments. Comparing the absolute ranking of different contributions is tricky, because they frequently reflect the visibility of the thread as much as they do the popularity of the comment. (At one time, my most-upvoted contributions were random observations on the Harry Potter discussion threads, for example.)
Not to mention Rationality Quotes threads...
Rationality quotes might be a helpful way of figuring out how to get upvoted, but it is not particularly helpful in figuring out how to be more competent.
Edit: Oops. Misunderstood the comment.
Actually, I was agreeing with TheOtherDave. (I’ve edited my comment to quote the part of its parent I was elaborating upon; is that clearer now?)
(nods) Then yeah, I’d encourage you to avoid using those words and instead state what you mean by them. Which may also result in downvotes, depending on how people judge your meaning.
And yet you’ve just argued that your beliefs should not be discussed openly with outsiders.
No I didn’t, I argued that in a different context, it’s dangerous to discuss your beliefs openly with outsiders. And I wasn’t even trying to defend that behavior, I was offering an explanation for it.
...and you’re using rhetorical tactics. Why do you consider this a fight? Why is it so important that I lose?
I’ll agree to have lost if that will help. Will it help?
I don’t see the difference in context. (This isn’t rhetoric, I honestly don’t see the difference in context.)
Interesting: so do you disapprove of the behavior in question? If so, why do you still identify its practitioners as “your side”?
I wasn’t trying to. I was pointing out the problems with basing a movement on ‘pious lies’.
Issues can be complex, you know. They needn’t reduce to ‘green’ vs. ‘blue’.
Which is still a gross mischaracterization of what was being discussed, but that mischaracterizing process is itself part of the rhetorical tactic being employed. I’m afraid I can no longer trust this communication channel.
How so? Near as I can tell from an outside view, my description is a decent summary of your and/or Julian’s position. I realize that from the inside it feels different, because the lies feel justified; ‘pious lies’ always feel justified to those who tell them.
You’re the one who just argued (and/or presented Julian’s case) that I was not to be trusted with the truth. If anything, I’m the one who has a right to complain that this communication channel is untrustworthy.
And yet you’re still using it. What are you attempting to accomplish? What do you think I was attempting to accomplish? (I no longer need to know the answers to these questions, because I’ve already downgraded this channel to barely above the noise threshold; I’m expending the energy in the hopes that you ask yourself these questions in a way that doesn’t involve assuming that all our posts are soldiers fighting a battle.)
Same here with respect to the questions I asked here, here, and here. The fact that you were willing to admit to the lies gave me hope that we might have something resembling a reasonable discussion. Unfortunately it seems you’d rather dismiss my questions as ‘rhetoric’ than question the foundations of your beliefs. I realize the former choice is easier, but if you’re serious about wanting to analyze your beliefs you need to do the latter.
For the sake of others watching, the fact that you continue to use phrases like “willing to admit to the lies” should be a telling signal that something other than truth-seeking is happening here.
Something other than truth-seeking is happening here. But the use of that phrase does not demonstrate that—your argument is highly dubious. Since the subject at the core seems to be about prioritizing between epistemic accuracy and political advocacy it can be an on topic observation of fact.
If a phrase such as “pursuing goals other than pure truth-seeking” were used rather than “noble lies”, I would agree with you. But he appears to deliberately attempt to re-frame any argument that he doesn’t like in the most reprehensible way possible, rather than attempting to give it any credit whatsoever. He’s performing all sorts of emotional “booing” and straw-manning, rather than presenting the strongest possible interpretation of his opponent’s view and then attacking that. And when someone attempts to point that out to him, he immediately turns around and attempts to accuse them of doing it, rather than him.
It’s possible to have discussions about this without either side resorting to “this is how evil you’re being” tactics, or without resorting to “you’re resorting to ‘this is how evil you’re being’ tactics” tactics, or without resorting to “you’re resorting to ‘you’re resorting to {this is how evil you’re being} tactics’ tactics” tactics. Unfortunately, it’s a classic Prisoner’s Dilemma—whoever defects first tends to win, because humans are wired such that rhetoric beats honest debate.
That is approximately how I would summarize the entire conversation.
Theoretically, although those most capable of being sane when it comes to this kind of topic are also less likely to bother.
Often, yes. It would be a gross understatement to observe that I share your lament.
Specifically, the method of pursuing said goals in question is by making and promoting false statements. This is precisely what the phrase ‘noble lie’ means. This is the kind of thing that would be bad enough even if the authority of “Science” weren’t being invoked by the people making said false statements. Yes, the phrase “noble lie” has negative connotations, there are very good reasons for that.
Incidentally, at the time that I write this comment, none of your most recent comments are net-negative, and most are net-positive, including the one I’m responding to. Does knowing that make it easier for you to contribute without worrying too much about your social status here?
No, and here’s my reasoning:
The net variability is the problem, not merely the bulk downvoting. All this sort of situation does is demonstrate that the karma system is untrustworthy. Since the karma system was the easiest way to determine whether what I’m saying is considered worth listening to by the community, I have to find secondary indicators. Unfortunately, most of those require feedback, and explicitly asking for that feedback often results in bulk downvoting.
I’m one of those people who has to be very careful to modulate my tone so that what I’m trying to say is understood by my audience; if all of the available feedback mechanisms are known to have serious problems, I’m not sure how to proceed.
Does that make any sense?
It does make sense, and the karma system is most assuredly untrustworthy, in the sense you mean it here. (I would say “noisy.”) Asking for feedback is also noisy, as it happens.
At some point, it becomes worthwhile to work out how to proceed given noisy and unreliable feedback.
For example, one useful principle if I think the feedback is net-reliable in the aggregate but has high variability is to damp down sensitivity to individual feedback-items and instead attend to the trend over time once it stabilizes. Conversely, if I think the feedback is unreliable even in the aggregate, it’s best to ignore it altogether.
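If it helps, the “damp down sensitivity and watch the trend” principle can be as simple as an exponential moving average over per-comment scores; this is only an illustrative sketch with invented numbers, not a claim about how karma actually works:

    def smoothed_trend(scores, alpha=0.1):
        """Exponential moving average: small alpha = low sensitivity to any single data point."""
        trend = []
        avg = 0.0
        for s in scores:
            avg = alpha * s + (1 - alpha) * avg
            trend.append(avg)
        return trend

    # Hypothetical per-comment karma scores, including one mass-downvote spike.
    scores = [2, 1, 3, -8, 2, 2, 1, 3]
    print(smoothed_trend(scores))
    # The isolated -8 moves the smoothed average far less than the raw swing in
    # single-comment scores, and the trend recovers within a few comments.
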
Yeah, that’s what I try to do in the abstract. In-the-moment, the less rational parts of my brain tend to bump up urgency and try to convince me that I don’t have time to ignore the data and wait for the aggregate, and when I try to pause for reflection, those same parts of my brain tend to ratchet up the perceived urgency again and convince me that I don’t have time to examine whether I have the time to examine whether those parts of my brain are lying to me less or more than the data.
I’m working on a brainhack to mitigate that, but it’s slow going. Once I have something useful I hope to post an article on it.
This is wonderfully put.
Serious question: why even care about karma? Just say what you want.
Because I’ve had enough of my posts voted down below the reply threshold that it destroyed the ability to continue the conversation, and for me having an idea debated back-and-forth is a necessary component of my mental process. Also, karma is used on this site to indicate whether I should be saying what I’m saying, so when everything I said for the past two weeks gets downvoted within 5 minutes of making a statement, even things utterly unrelated to that statement, I feel the need to raise an alarm to ensure that I’m interpreting signals correctly.
If it is, then it is. But from the outside, this sounds like a rationalization for you to choose to do something that you find emotionally harmful. You have no obligation to participate in conversation that you find emotionally harmful.
I intended that to refer only to laypeople who agree with the labels, but are using them wrong. The people who are choosing our side because it seems high status, not because they think it is right. Those folks are dangerous in a lot of ways.
Well, yes. This venue is not safe for these types of discussions—our interlocutor is an important reason why. I do it because I’m trying to dispel the appearance of a silent majority.
I think it is totally understandable to decide that the only winning (socially safe) move is not to engage in the conversation here. It’s not like it will make a huge difference—so choosing yourself first is very appropriate and NOT even a little bit worthy of blame.
In what way?
All I’m doing is criticizing your arguments and providing counter-arguments. Or are you only willing to discuss these things among people who agree with you?
This sounds like an overly convenient excuse to avoid having to confront the implications of said truths. Restrict them to only people who won’t ask awkward questions and tell everyone else pious lies.
You need to establish some truths before worrying about the consequences. Scientific facts need controls, for instance. When have you shown any interest in controlling for the effects of environment?
I never said I knew what caused the racial differences in question. There are certainly policy issues where the cause is relevant (incidentally addressing it requires admitting that the differences exist), there are issues where it’s less relevant.
Incidentally, in the example I cited in the great-grandparent, it was the anthropologists who had declared that official policy was to deny all environmental explanations.
How do you and Julian know that you are indeed in the “inner ring” of this conspiracy and/or that its actual purpose is what you think it is? How sure are you that this conspiracy even has any clue what it’s doing and hasn’t started to believe its own lies? Do you have an answer to the questions I asked here?
On a case-by-case basis, you do experiments. You double-check them. You entertain alternate hypotheses. You accept that it’s entirely possible that things aren’t the way you think they are. You ask yourself what the likely social consequences of your actions are and if you’re comfortable with them, and then ask yourself how you know that. In short, you act like a rationalist.
(And you certainly don’t just downvote everyone who proposes a model that you don’t like.)
Can you describe some of the experiments you did?
And that’s a bad thing? Trying to translate Hard Science directly into real-world action without considering the ethical, social and political consequences would be disastrous. We need something like social science.
If your goal is having an accurate model of the world, yes. If your goal is something else, you’re still better off with an accurate model of the world.
Edit: If you want to do politics, that’s also important, just don’t pretend you’re doing science even “soft science”.
We had this discussion before. You told me that the social activist labels are boo lights. But the fact that something is an applause light or a boo light in a particular community doesn’t mean it is not an accurate label for a phenomenon.
“Democracy” is an applause light in the venues I generally hang out in (and I assume the same for you). That does not mean that democracy is not a real phenomenon. And the fact that some folks in this venue don’t approve of democracy does not mean they think that the phenomenon “democracy”, as defined by relevant experts, does not exist. In fact, a serious claim that democracy is bad or good first requires believing it occurs.
I don’t really have a very productive response.
The general rule is that your responsibility is to be clear—it is not your reader’s responsibility to decipher you.
The general rule is that it is both the writer’s responsibility to be clear and the reader’s responsibility to decipher them. You know, responsibility is not a pie to be divided (warning: potentially mind-killing link), Postel’s law, the principle of charity, an’ all that.
In general, yes. In particular, I’m trying to give whowhowho constructive criticism, and he does not seem to think it is constructive.
Lol. That’s almost literally the worst example to use for a de-escalating discussion about shares of responsibility.
Edit: On further reflection, Postel’s law is an engineering maxim not appropriate to social debate, and it should be well established that the principle of charity is polite, but not necessarily truth-enhancing.
ETA: What’s unclear about “not on facts alone”?
It’s the reader’s responsibility to read your words, and read all your words, and not to imagine other words. Recently, someone paraphrased a remark of mine with two “maybe”’s I had used deleted and a “necessarily” I hadn’t used inserted. Was that my fault?
Beware of expecting short inferential distances.
One has to grasp literal meaning before inference even kicks in.
As an attorney, my experience is that the distinction between literal words and communicated meaning is very artificial. One canon of statutory construction is the absurdity principle (between two possible meanings, pick the one that isn’t absurd). But that relies on context beyond the words to figure out what is absurd. Eloquent version of this point here.
If people insist on drawing inferences from what was never intended as a hint...what can you do?
‘On hearing of the death of a Turkish ambassador, Talleyrand is supposed to have said: “I wonder what he meant by that?”’
Now the thing with that logic is that 97% of the world is made up of idiots (Probably a little higher than that, actually.) I do agree that it’s their fault if they misquote it, not your own, but let’s say you put an unclear statement in a self help book. Those books are generally read by the, ah, lower 40th percentile (Or around thereabouts), or just by really sad people- either way, they’re more emotionally unstable than normal. Now that we have the perfect conditions for a blowup, let’s say you said something like ‘It’s your responsibility to be happy’ in that book, meaning that you and only you can make yourself happy. Your emotionally unstable reader, however, read it as it was said and took a huge hit to their self-confidence. Do you see how it isn’t always the reader’s job?
Strangely enough, I never said it was...
For your reference, I have no idea what Lauryn is talking about.
You didn’t answer my question.
If we don’t base policy on (narrowly construed, laboratory-style) facts alone, we use other things in addition. Like ethics and practicality.
In the great-great-grandparent you make the extremely strong assertion that some facts have such bad implications that reflecting on them causes more harm than good. This raises the question: how can you know which facts have this property without reflecting on them?
Also what do you mean by “ethics”? Do you mean the ethics in the LW-technical sense of ethical injunction or in the non-technical sense of morality?
Try as it might, my system II has yet to see such an argument with non-negligible merit.
Well the fact that race is correlated with things like IQ is pretty well established empirically, and there is no obvious a priori reason to prefer environmental to genetic explanations.
Not a priori, but there has been at least one study performed on black children adopted by white families, this one, which comes to the conclusion that environment plays a key role. In all honesty, I haven’t even read the study, because I can’t find the full text online, but if more studies like it are performed and come to similar conclusions, then that could be taken as evidence of a largely environmental explanation.
Here it is (pdf link).
Many thanks!
Yes, and I have had numerous twin studies cited at me that purport to show that genetics plays a key role. I can’t vouch for the quality of either, but it is clear that the research is likely to remain inconclusive for quite some time.
Really? I’ve seen twin studies that purport a genetic explanation for IQ differences between individuals, but never between racial groups. If you’ve saved a link to a study of the latter type, I’d be really interested to read it.
Ok, I’m confused. Under what scenario is it at all plausible for individual IQ differences but not racial IQ differences to be genetic?
Well, Down’s Syndrome, for example, clearly affects IQ. There’s a big genetic IQ difference that is only really relevant to individuals. There aren’t Down’s-magnitude intelligence variations between races.
In general, there is wide variation in intelligence among people within any particular ethnic group. Showing these to be genetic doesn’t seem to be too hard either, since you can find individuals of the same ethnic group having been raised in similar environments. On the other hand, the difference in average IQ between races is quite small, compared to the individual within-race differences. To show how much of this was genetic would require controlling for environment, which one can even now expect to be notably different between races.
To put it simply, it’s easy to demonstrate genetic influence, when the effects are of a magnitude such that one can just rule out environment as being the critical factor. Which is not the case for racial differences.
Obviously, you can expect there to exist on average a non-zero genetic component between races, since their genetic material has had time to drift apart. But that’s neither here nor there when you want to know how much.
I keep hearing people say that and always wanted to ask which statistics are being compared.
Not about intelligence specifically, but I believe this was the first (well-known) paper making the claim: http://www.philbio.org/wp-content/uploads/2010/11/Lewontin-The-Apportionment-of-Human-Diversity.pdf
The point is that even if the heritable component of (say) intelligence among white people formed a bell curve, and the heritable component of intelligence among black people formed a bell curve, a priori you’d expect the two curves to be pretty much the same.
(Lewontin’s other conclusion, that “race” is “biologically meaningless”, is separate and doesn’t work because what small racial differences there are are statistically clustered: http://onlinelibrary.wiley.com/doi/10.1002/bies.10315/abstract;jsessionid=831B49767DB713DADCD9A1199D7ADC49.d02t02)
Circumstances which look arbitrarily contrived and absurd upon examination but should be acknowledged as at least technically possible. I.e., the distributions of IQ within each race are miraculously identical because, contrary to expectations, the universe really is Fair regarding this one complex trait (but not others).
Or one where the differences are small, or trivial. I don’t think this is “miraculous” or “implausible”. Before the invention of agriculture, about seven to twelve thousand years ago, I’m not sure what pressures there could have been on Europeans to develop higher intelligence than Africans. So, in contrast to physical differences, many of which have well-established links to specific climates, intellectual genetic differences would probably be attributable to genetic drift and >~10,000 years of natural selection. To be clear, my position isn’t that I have good evidence for this, merely that I don’t know and I don’t assign this scenario as low a prior probability as you seem to.
I know of no situation where the race of an individual is the only factor, or the most significant factor in making a decision. Feel free to counterargue.
Huh? What does that have to do with my argument?
In case it wasn’t clear I was presenting an argument that there exist genetic differences between races that give rise to behavioral differences.
And I was presenting the argument that it doesn’t matter. There is no good reason to base political, social or legal policy on it. It’s always overwhelmed by other factors.
And yet we do; the “anti-racists” insist on it.
In the grandparent you said that the race of an individual is rarely the only factor. On the other hand, in aggregate it’s possible for the other factors to wash out and we are left with race as the main factor.
Run that one past me again. Are you arguing for or against public policy based on race?
I’m not sure whether we should or not. However, given that we currently have race-based policies and this is likely to continue for quite some time, they might as well be based on accurate beliefs about race.
IOW, you’re assuming that changing the US’s race-based policies so that they be based on accurate beliefs would be less hard than letting go of them altogether?
Let’s just be clear on where the status quo is. Eugine_Nier has mentioned disparate impact analysis several times. In addition to that is the far more important disparate treatment prohibition.
In the US, a worker can get fired even if it is not for cause. If your boss thinks you are a terrible worker, you can be fired even if you could prove your boss is wrong and you actually are a great worker. But your boss would be liable for wrongful termination if the boss said “I think [blacks / whites / Germans / Russians] are likely to be bad at your job, and you are [black / white / German / Russian], so you’re fired.” Likewise, an employer can’t refuse to hire on that basis.
Proving that is a separate issue, but having no public policy based on race implies repeal of both disparate impact prohibitions and disparate treatment prohibitions (and lots of other stuff, but it’s complicated).
In practice, firing a black worker, even if he is a terrible worker, leaves employers open to wrongful termination suits. It is harder to prove discrimination in a decision not to hire, so frequently the safest route for employers (especially small employers) is to find excuses to avoid hiring black workers rather than risk getting stuck with a bad employee they can’t fire.
In practice, complying with laws has costs, some of which fall on innocent and semi-innocent third parties. As a lawyer, this is not news to me. The question is whether the benefits of implementing those social policies outweighs the costs. Clarence Thomas, a black Justice of the Supreme Court of the United States thinks the answer is no.
But that is a very different question from asking, as a matter of first principles, whether certain kinds of discrimination are allowed even if the facts don’t support the discrimination. In the United States, most discrimination of this kind is allowed, and restrictions on the factors employers and others may consider are fairly narrow (race, gender, religion, national origin—not ok. youth, poverty, moral character, basically anything else—ok).
Accurate beliefs about what? If a group (however defined) has been subject to negative discrimination, however arbitrary, then there is an argument for treating them to a period of positive discrimination to compensate. That has nothing to do with how justified the original negative discrimination was.
But “black people in 1960” (for example) isn’t the same group as “black people today”, as many of the former are dead now and many of the latter hadn’t been born in 1960, and it’s not obvious to me that it makes sense to treat people according to who their grandparents were.
If someone was wrongfully executed, killed in a medical blunder, etc., it is typically their families who are compensated.
It is morally right to do so. But society is deeply conflicted about doing so (for reasons good and bad), so I’m not sure that “typical” is an accurate description of how often it happens.
Regardless of the frequency of compensation, you really should address head on why you think society should do so. The fact that society occasionally does provide such compensation is barely the beginning of the discussion of whether it should, and says almost nothing about how much compensation should be provided, or who should pay.
To put it slightly differently, Eugine_Nier is not wrong when he asserts anti-discrimination laws impose significant cost on society as a whole. I think the benefits are worth the costs, but that is a fact-bound inquiry, not a statement of first principles.
I am not arguing that Affirmative Action/Positive Discrimination is necessarily right. Just that it doesn’t necessarily have anything at all to do with any facts about DNA.
If actually significant differences in competence have a genetic component, then public policy should reflect that difference. Particularly if the differences are easy / cheap to identify anyway. (I don’t think this is true about race / ethnicity, but that’s a different issue).
Otherwise, our preferred policies won’t work in the Least Convenient Possible World.
Are you assuming all other things are equal? They never are.
If history and practice led to blacks being treated as if the mean IQ was 20 points lower, and the actual difference is 5 points, then the proper public policy is to act as if the difference is 5 points, not zero points to remedy the history and practice.
I suspect that g is not interestingly different between race / ethnicity, and that the IQ test, which seeks to measure g, is culturally biased. But if there is a difference in g that cannot be attributed to environment, then we should consider it in making policy.
In the real world, I think all the important observed difference is culturally driven, so this nod towards facts doesn’t change my policy preferences. I think the facts are in my favor. I just think that we should be explicit about how policy should change if the facts turn out to be different.
Why isn’t the proper public policy to treat people as individuals?
You didn’t answer my question about treating other things as equal. If genetics-based discrimination leads to $X million lost in strikes and rioting, shouldn’t that be taken into account?
Staying out of the racial-politics discussion, but my answer to this question generally is that it’s expensive.
For example, we don’t actually evaluate each individual’s level of maturity before judging, for that individual, whether they’re permitted to purchase alcohol, sign contracts, vote in elections, drive cars, etc.… instead we establish age-based cutoffs and allow for the occasional outlier. We understand perfectly well that these cutoffs are arbitrary and don’t actually reflect anything about the affected individuals; at best they reflect community averages, but often not even that. We do it anyway because we want to establish some threshold, and evaluating individuals costs too much.
But, sure, bring the costs down far enough (or treat costs as distinct from propriety) and the proper public policy is to separately evaluate individuals.
On the other hand, job interviewers judge by individual qualifications, not group membership.
Or at least, they can do so with minimal investment. Agreed.
I’m not sure what the relationship between job interviews and public policy is, though.
Let’s compare two different types of employment discrimination law (in the US). For simplicity, let’s ignore the burden of proof.
Racial Discrimination: It is illegal to consider an employee’s (or potential employee’s) race when making an employment decision. If a person is fired, but would not have been fired if the person were a different race, the employer has committed wrongful termination. (Substitute “hire,” “promote,” or basically any other employment decision—the rule is unchanged).
Disability Discrimination: First, the disabled employee must be able to perform the job, with or without accommodations. Then, the employer must make reasonable (but not unreasonable) accommodations.
I think it is reasonably clear that disability discrimination law is more individualized. If there really were differences based on race / ethnicity, then I think racial discrimination law ought to look more like disability discrimination law. But I think there aren’t such differences, so I think the law as written is basically right.
Seems like this is a question of baseline. Who said we should respect rioting? Or that rioting is likely to result from treating people differently based on their actual genetic differences?
Just to be clear, I don’t practice employment law. I practice a very narrow kind of child disability law. Reading this post does not make me your lawyer.
What does this mean in practice? Does this mean employers should be free to hire any individual they choose?
It could mean you don’t translate scientific findings about groups directly into policy without considering ethical and practical implications. It could mean that treating people as individuals should be the default. It could mean there is nonetheless a case for treating people as groups where they were discriminated against as groups in the past.
And are they?
And why do I need to be told “that there exist genetic differences between races that give rise to behavioral difference”? I have said nothing about affirmative action/positive discrimination one way or the other. You raised that issue. But you didn’t say how the two relate.
I’m not sure whether the cause is genetic or cultural, but there are most definitely behavioral differences between the races. Furthermore, the fact that it’s politically impossible to talk about this is causing a lot of problems. Consider the state of US cities with large black populations as discussed in this blog post by Walter Mead. The behavior in question is probably purely cultural, since historic “white ethnic” political machines led to the same problems; on the other hand, the fact that this political machine is black means most people would rather pretend the problem doesn’t exist than talk about it and risk getting called “racist”.
For another example, consider the campaign to force Rhodesia to accept majority rule. Given the subsequent history of Zimbabwe this campaign almost certainly resulted in a worse situation for everyone involved.
First off, you needed about five “in the US”s above.
Second: you’re part of the problem. If you want to discuss socio-cultural-political problems in the US, discuss them as such. Say “we have problems with populations of the urban poor”. We have problems with the urban poor too, and they don’t coincide with race. Given the way you have described the problem above, your initial approach of kicking off discussion of the problem by talking about genetic differences is exactly the wrong one: it will block off sensible discussion, and it isn’t the real issue anyway.
Sure seems possible for Mr Mead.
Your point being what? That democracy is always bad? That Africans can’t ever govern themselves? That liberals are always wrong? You can’t come to any of those sweeping conclusions from the one example of Zimbabwe. It’s an exception.
Mead is just some random blogger. Witness the reaction that occurred when Philadelphia magazine published an article on a similar topic.
No, that’s the problem.
By the way affirmative action is by no means the only race-based policy, just the one simplest to describe.
In what way are they not?
What was the name of that rule where you commit yourself to not getting offended?
I’ve always practiced it, though not always as perfectly as I’ve wanted (when I do slip up, it’s never during an argument though; my stoicism muscle is fully alert at those points in time). An annoying aspect of it is when other people get offended—my emotions are my own problem, why won’t they deal with theirs; do I have to play babysitter with their thought process? You can’t force someone to become a stoic, but you can probably convince them that their reaction is hurting them and show them that it’s desirable for them to ignore offense. To that end, I’m thankful for this post, upvoted.
Crocker’s Rules.
Sounds like you’re thinking of Crocker’s rules, although there’s a bit more to it than that.
I agree. Now I’m off to link this to Tumblr’s social justice movement (except not really).
Also, I think you meant “Besides, the signalling value of offense should be no excuse for not knowing how not to be offended.” So many negatives in that sentence!
Thanks, you’re right! Fixed.
Also related (specifically, getting offended by people who are acting, gasp, irrationally): The problem with too many rational memes
See especially the comments. There are some good strategies in there for dealing with offense in this specific context, some of which may generalize.
I agree with the general point, but I don’t think it’s best to contrast getting offended with staying calm. The way I imagine offense happens is that we classify somebody’s actions or beliefs as harmful to us or our group, and that makes us annoyed; we then automatically decide that the best way of fixing the situation is to destroy the other person’s ability to cause us harm, and if we lack any other option for doing that, we default to politics as our means of attack.
And then that switch gets flipped in our heads that puts us in a mode of thinking more adapted to lowering someone’s social standing. This last step specifically is what I think of when I hear about ‘getting offended’. If you resist flipping the switch your unfavorable assessment of the situation will remain. You will still be annoyed at the fact that there’s someone who seems to be a social threat (but at least you won’t feel compelled to exaggerate the threat for the sake of better drama). It seems like dealing with annoyance should be a separate skill from not going off into narratives about how you are inherently more virtuous than the enemy. So rather than trying to stay completely calm as an alternative to getting offended, maybe it’s better to focus on the minimal change that would stop us from turning into raging monkeys while still possibly leaving us annoyed humans. This might also be more palatable to people who get offended upon hearing advice not to.
I’m trying to make the point that we don’t know which is more involved.
I think one key in not being offended is being secure in your own person and position. If you’re not actually worried that someone or their remarks may actually hurt or damage you, then it’s easy to remain objective and not take offense.
In the Old Testament it says, “Great peace have they which love Thy law, and nothing shall offend them.” I’d like to think that I’m secure in my position relating to the very Creator of the universe. So, to the degree that I am truly secure in that position, what can anyone really do or say to upset me?
Force you to abandon that security by bringing it into logical conflict with another position that you feel equally secure in.
I am very new to LW, but this seems like a dangerous position to take for a rationalist! From “What Do We Mean By ‘Rationality’”: [Italics Mine]
It seems that being completely secure in a position makes it impossible for you to challenge that position, which works against acting in a more rational fashion.
An alternative way to not be offended might be found here. In summary, the author argues that ‘If people can’t think clearly about anything that has become part of their identity, then all other things being equal, the best plan is to let as few things into your identity as possible.’
It is sometimes useful not to artificially exclude the middle when using natural language.
In this case, for example, I suspect it’s possible to have a level of what we’re calling “security” here that is not so high that it precludes updating on evidence (supposing you’re correct that too high a level of security leads to the inability to update), while at the same time being high enough to avoid offense (supposing bobneumann is correct that too low a level of security leads to an increased chance of taking offense).
I do agree that keeping your identity small is also helpful, though.
I’m not sure how to interpret your comment, so I’d like you to clarify. Are you using theists who feel secure in their relationship with god as an example of a way some people avoid being offended? Are you saying you are one such theist? Are you making a recommendation of something?
Would you care to explain which one you meant?
Note: This post now has a followup.
I’d rather have to try and not be as upset by certain things than have to account for everyone else’s diverse baggage. A norm where people are expected to do the first thing instead of the second seems better, because if more people had thick skin, fewer people would be upset. Like if I make a joke that’s hilarious to five people and annoying to the sixth, it’s a net positive unless that guy is really REALLY hurt, right? But under usual norms, if I say “relax, it was a joke” I’m the insensitive asshole. But it WAS just a joke. And look, I’m totally cool with being on the receiving end of that sometimes. It’s a good trade-off because I value not having taboos or feeling like I have to tiptoe.
I feel like maybe people hear “don’t get offended” as “you’re not allowed to express disapproval about something someone says”? If there’s a legitimate problem with something someone said, point out the problem. Maybe just say “not cool because [reason]” and move on? Why get offended? People are probably worried about marginalized groups. None of this means, e.g., a gay student has to just put up with homophobic bullying, because he wouldn’t have that problem if the bullies hadn’t gotten so offended by his sexuality in the first place. I feel like more often than not offense isn’t a marginalized group feeling righteous anger, but a reaction to hard truths or Things You Can’t Say. In other words, more often offense is like the bullies than like the gay student. Am I missing something?
But this is all from a guy who has a dark sense of humor and a hard time empathizing with more sensitive peers, so I would take this with a small heap of salt.
Do you have evidence that it’s not genetic? Most of the evidence I’ve seen for this claim has been laughably bad.
Two typical examples are: attempting to argue that since race as received doesn’t correspond 100% precisely with any genetic definition, race is a pure social construct; and citing environmental differences that could just as easily be caused by the differences they purport to explain.
The strongest evidence is that a priori there is no reason to expect populations that have historically been geographically separate to have the same distribution of IQ.
As far as specific evidence: other groups in the US, e.g., Jews, Irish, Asians, have also been discriminated against but were able to overcome it. The blacks in Africa aren’t doing so well either.
Yes, these aren’t particularly strong evidence, but neither is the evidence against the genetic hypothesis.
I am not sure how much is known about what exactly causes IQ differences (in healthy individuals) and how. With other traits there seem to be some natural limits. For example, populations don’t all have exactly the same average height, but the averages fall within some reasonable range; I don’t know of a population where the average height is 2 meters, or 1 meter. Various groups of people have been geographically separated for millennia, there is variability among individuals of the same population, and nutrition and health contribute to height… and yet, despite all of this, the averages of various populations fall within some height interval.
Given this, it does not seem so unlikely to me that there may be similar interval for intelligence. Maybe the interval is more narrow; maybe it is between 95 and 105. Below this interval, higher intelligence is an evolutionary advantage within the population. Above this interval… as I said, I don’t know what causes IQ, but I can imagine that there may be some cost in metabolism, or something similar.
So my prior expectations would be that the distributions are not exactly the same, but there may be some reasonable interval for them. Now the only question is how big that interval is. A difference of 20 points or more would be obvious, a difference of 3 points or less would be hard to notice.
Note: While explaining the genetic differences in IQ, we should also explain why the differences aren’t even greater than they seem. Because the explanation “different IQ is caused by geographical separation” does not explain, e.g., why we don’t have any population with an average IQ of 150. (And if there are reasons why average IQ cannot be greater than 150, maybe there are also reasons why it cannot be greater than 105.)
To add to this, while there have been, and continue to be, pretty significant height differences between populations, those differences tend to decrease sharply when the nutrition levels and lifestyles of those populations become more similar. For instance, while a hundred years ago Americans tended to tower over the Japanese (American soldier second from the left, Japanese soldier far right), with an average height difference of about six or seven inches, the average height difference today is only about two and a half inches. Even that remaining difference is likely to be at least partly due to a difference in nutrition and activity (the average Japanese person still has a significantly different diet than the average American, and schools demand much higher levels of physical activity of their students, although many adults become highly sedentary after high school graduation). Unfortunately, I haven’t been able to find any source for the average height of Japanese Americans today, which would help narrow down how much of the remaining gap is likely to be due to lifestyle.
If you can get past the paywall, this might give you what you’re looking for. Looks like a pretty small sample, though, and adult height might not correlate that well with childhood height depending on what age we’re looking at. Also a pretty old study; it wouldn’t surprise me if nutrition had changed quite a bit since 1995.
Something, but not much.
Generally, one never expects two imperfectly related continuous variables to be exactly identical. This tells us nothing about how peaked about 0 our prior distribution should be. In other words, the existence of a difference is no guarantee of existence of a significant difference.
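To make that last point concrete, here is a minimal numerical sketch in Python. All of the numbers in it (a within-group SD of 15, a prior on the between-group gap of Normal(0, 2), a threshold of 5 points) are purely illustrative assumptions of mine, not estimates from any study:

    # Minimal illustrative sketch (assumed numbers, not taken from data):
    # a between-group difference can exist yet still be practically negligible.
    from math import erf, sqrt

    def normal_cdf(x, mu=0.0, sigma=1.0):
        # Normal CDF computed via the error function.
        return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

    def overlap_of_two_normals(gap, sigma=15.0):
        # Overlapping coefficient of two equal-variance normals whose means differ by `gap`.
        # For equal variances this equals 2 * Phi(-|gap| / (2 * sigma)).
        return 2.0 * normal_cdf(-abs(gap) / (2.0 * sigma))

    for gap in (3.0, 20.0):
        print(f"mean gap of {gap:.0f} points -> distributions overlap {overlap_of_two_normals(gap):.0%}")

    # A prior on the gap that is "peaked about 0", e.g. Normal(0, 2): the gap is
    # almost surely not exactly zero, yet a gap bigger than 5 points gets little weight.
    p_gap_over_5 = 2.0 * (1.0 - normal_cdf(5.0, 0.0, 2.0))
    print(f"P(|gap| > 5) under a Normal(0, 2) prior: {p_gap_over_5:.1%}")

Under those assumptions, a 3-point gap leaves the two distributions overlapping about 92% (hard to notice), a 20-point gap leaves only about 50% overlap, and the peaked-at-0 prior puts roughly 1% probability on a gap larger than 5 points even though it puts zero probability on the gap being exactly zero.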
ISTM that you’re using the word evidence in a weird way.
I would probably translate “The strongest evidence is that a priori there is not reason to expect X” into LW jargon as “My priors for X are low.”
The problem is that (according to Kahneman and Tversky) losses are felt more strongly than gains. So it requires a good deal of effort not to be offended.
Does anyone know how to get offended? I have never experienced the emotion and am interested to know what it feels like.
As far as I can tell, it’s just a subset of anger, and feels identical emotionally.
The distinction is that in offence, the anger arises from what the word or action implies about the state of mind of the offender. In most other forms of anger, the anger arises from the direct results of the actions.
For example—if someone physically injured me or another human being, or stole something valuable, I would be angry, but not the offended kind of angry. The anger is because of the injury itself, and because of the loss of the stolen object.
However—if someone spat in my face, called me a liar, or claimed that some individual deserved to die, I would be offended. I don’t care about the spit on my face itself, nor do I care that the word-sounds caused vibrations in the air. The reason I am offended (angry) is because the action has indicated something about the other person’s mental state which implies that they might do something bad in the future.
The physical strike is an interesting gray area. If someone injures me, I’ll be angry. But if someone slaps me during an argument, I’d be offended—the slap doesn’t really bother me, but the intention behind the slap does hurt and signals future aggression. Same with stolen objects—I would be offended if someone I knew stole my silverware but it’s not because I’ve got one less spoon. I’d just be plain angry if they stole my laptop though, primarily because I need my laptop.
In both cases, the offending party has done something which would mark them as an “enemy”. In one case, the action caused direct harm to me. In the other case, the action indicated that they might cause direct harm to me at some point in the future. In the ancestral environment, anger would elicit the necessary behaviors in both cases.
Edit: Now that I think of it, this post might be better titled “Don’t get angry”. There is nothing particularly different between being offended and being angry, with regards to the extent to which they can cloud your epistemic rationality. However on the instrumental rationality side, if you switch off this form of emotional decision making, you will have to replace it somehow. Do you have the ability to attack something effectively without anger?
Given my definition of “offended”, do you still feel that you haven’t experienced it? Have you also never experienced anger?
Sometimes. Often more relevant are the social implications of someone saying or doing what they do in front of the observers.
I suppose you could call that offended as well, although you could also say that you are angry that they caused offense to other people. Or I guess you could be angry that they have influenced everyone’s opinion a certain way. It’s confusing because being offended doesn’t feel different from other types of anger, we just happen to have a word for anger which comes from that particular source.
I guess what I’m getting at is that being offended isn’t an emotion at all, it’s just one form of anger. The lines concerning what types of anger classify as offended are a bit blurry.
Agree. I am offended if my customer yells and screams at my customer service for something which is not my customer service’s mistake. I take it as a sign that whenever I’m feeling offended by a customer, they must be treating my company or my employees unfairly.
http://www.huffingtonpost.com/news/raid-of-the-day ← Try these. The righteous anger/indignation you may or may not experience is, AFAIK, the same thing, it’s just labeled differently.
When I want to invoke it for performance reasons, I start by building up a strong sense of entitlement. “I am more important than everyone else, I am special, I am right, I deserve deference, I deserve special treatment, I deserve satisfaction at the expense of others,” that sort of thing. Then I look at the things in my environment that violate that sense of entitlement. Offense (or outrage, if I make the differential high enough) follows naturally for me.
Imagine that something is true.
Observe that it is not true.
Keep imagining it is true.
Listen to someone state that it is not true.
Let the conflict between those two things continue to build up and manifest as a negative emotion directed at the person who stated that it is not true.
An example of imagining that something is true is having the idea that things ought to be a certain way, such as thinking that people ought to be not racist. Observe that people are racist. Continue to think that people ought to be not racist. Hear someone be racist.
The difference between taking offense and being angry is that taking offense is when anger is directed at a concept. It’s okay to be angry at a racist for doing racist things, but it’s a bad idea to be angry at the concept of racism.
Your bullet-points example doesn’t appear to match your paragraph example. “Think people ought not to be racist; observe that they are” is different from “Imagine something is true; observe that it is not.” I can imagine that people ought not be racist (they shouldn’t) but be aware that they are. Then when I observe someone being racist, there’s no conflict between my beliefs and reality. Instead, there’s a conflict between reality and how I think reality ought to be, which I attempt to resolve by calling the racist out in the hope that they’ll behave better next time.
Note that the above says nothing about whether or not I should call out the racist, just that I think epigeios’ example is bad. Also I agree that it’s a bad idea to be angry at concepts rather than the people who believe them.
Right, and when I punch you in the mouth to make you fucking SHUT UP and stop saying such STUPID, IDIOTIC THINGS, it is merely YOUR body victimizing you with sensations of pain. It’s not like I’m going to break your damn jaw, no matter how much that would make my life a joyous place, because then I would never have to listen to you open it again.
Who the hell thought this piece of shit was worth upvoting?!
EDIT: Oh god, you get even more absurd.
Huzzah, yay for you! Way to rub it in, bastard. What the hell sort of asshole goes around all proud of not being offended? Do we also tell women to just suck it up, sexism isn’t that bad, you’re just victimizing yourself? I mean, seriously, how fucking privileged is your life, that you seriously think that this is just a casual skill learnable by anyone? Do you make ANY effort to comprehend the ACTUAL world before opening that ugly mouth of yours?
The sort who prides himself on maintaining an even keel and being able to negotiate tense, emotionally-fraught situations. I have found that not being offended has been extremely useful at many points in my life, and indeed has saved me or others from serious trouble on several occasions.
Ironically, this whole exchange might have been a bit more constructive with less taking of offense.
I gladly pay the troll toll to say this:
Well played, sir. Well played. (Regardless of whether or not you’re a sir. It just works better that way.)
Not often the success of a post is measured in its downvotes. I am ashamed to admit my initial vote was down. It took a second reading to catch it.
Here, have an oreo.
It’s not a real oreo.
Nice! You almost had me going there. Good thing I checked the nick.