You seem to be particularly worried about accidentally becoming a bigot. (I don’t think most of us are in any danger of accidentally becoming supreme dictators.) I think you are safe. Think of it this way: you don’t want to be a bigot. You don’t want your future self to be a bigot either. So don’t behave like one. No matter what you read. Commit your future self to not being an asshole.
He’s probably more motivated by not wanting others to become bigots—right, WrongBot?
My motivation in writing this article was to attempt to dissuade others from courses of action that might lead them to become bigots, among other things.
But I am also personally terrified of exactly the sort of thing I describe, because I can’t see a way to protect against it. If I had enough strong evidence to assign a probability of .99 to the belief that gay men have an average IQ 10 points lower than straight men (I use this example because I have no reason at all to believe it is true, and so there is less risk that someone will try to convince me of it), I don’t think I could prevent that from affecting my behavior in some way. I don’t think it’s possible. And I disvalue such a result very strongly, so I avoid it.
I bring up dangerous thoughts because I am genuinely scared of them.
The fact that you have a core value important enough that you’d deliberately keep yourself ignorant to preserve it is evidence that the value can withstand the addition of information. Your fear is a good sign that you have nothing to fear.
For real. I have been in those shoes. Regarding this subject, and others. You shouldn’t be worried.
Statistical facts like the ones you cited are not prescriptive. You don’t have to treat anyone badly because of IQ. IQ does not equal worth. You don’t use a battery of statistics on test scores, crime rates, graduation rates, etc. to determine how you will treat individuals. You continue to behave according to your values.
In the past I have largely agreed with the sentiment that truth and information are mostly good, and when they create problems the solution is even more truth.
But on the basis of an interest in knowing more, I sometimes try to seek evidence that supports things I think are false or that I don’t want to be true. Also, I try to notice when something I agree with is asserted without good evidential support. And I don’t think you supported your conclusions there with real evidence.
This reads more to me like prescriptive signaling than like evidence. While it is very likely to be the case that “IQ test results” are not the same as “human worth”, it doesn’t follow that an arbitrary person would not change their behavior towards someone who is “measurably not very smart” in any way that dumb person might not like. And for some specific people (like WrongBot by the admission of his or her own fears) the fear may very well be justified.
When I read Cialdini’s book Influence, I was struck by the number of times his chapters took the form: (1) describe a mental shenanigan, (2) offer evidence that people are easily and generally tricked in this way, (3) explain how it functions as a bias when manipulated and as a useful heuristic in non-evil environments, (4) offer laboratory evidence that basic warnings to people about the trick offer little protective benefit, (5) exhort the reader to “be careful anyway” with some ad hoc and untested advice.
Advice should be supported with evidence… and sometimes I think a rationalist should know when to shut up and/or bail out of a bad epistemic situation.
Evidence from implicit association tests indicates that people can be biased against other people without even being aware of it. When scientists tried to measure the degree of “cognitive work” it takes to parse racist situations, they found that observing overt racism against black people was mentally taxing to white people, while observing subtle racism against black people was mentally taxing to black people. The whites were oblivious to subtle racism and didn’t even try to process it, because it happened below their perceptual awareness; overt racism made them stop and momentarily ponder whether maybe (shock!) we don’t live in a colorblind world yet. The blacks knew racism was common (but not universal) and factored it into their model of the situation without much trouble when the racism was overt; the tricky part was subtle racism, where they had to think through the details to understand what was going on.
(I feel safe saying that white people are frequently oblivious to racism, and are sometimes active but unaware perpetrators of subtle forms of racism because I myself am white. When talking about group shortcomings, I find it best to stick to the shortcomings of my own group.)
Based on information like this, I can easily imagine that I might learn a true (relatively general) fact, use it to leap to an unjustifiable conclusion with respect to an individual, have that individual be harmed by my action, and never notice unless “called on it”.
But when called on it, it’s quite plausible that I’d leap to defend myself and engage in a bunch of motivated cognition to deny that I could possibly ever be biased… and I’d dig myself even deeper into a hole, updating the wrong way when presented with “more evidence”. So it would seem that more information would just leave me more wrong than I started, unless something unusual happened.
(Then, to compound my bad luck I might cache defensive views of myself after generating them in the heat of an argument.)
So it seems reasonable to me that if we don’t have the time to drink largely, then maybe we should avoid shallow draughts. And even in that case we should be cautious about any subject that impinges on mind-killer territory, because more evidence really does seem to make you more biased in such areas.
I upvoted the article (from −2 to −1) because the problems I have with it are minor issues of tone, rather than major issues with the content. The general content seems to be a very fundamental rationalist “public safety message”, with more familiarity assumed than is justified (like assuming everyone automatically agrees with Paul Graham, and putting in a joke about violence at the end).
I don’t, unfortunately, know of any experimentally validated method for predicting whether a specific person at a specific time is going to be harmed or helped by a specific piece of “true information” and this is part of what makes it hard to talk with people in a casual manner about important issues and feel justifiably responsible about it. In some sense, I see this community as existing, in part, to try to invent such methods and perhaps even to experimentally validate them. Hence the up vote to encourage the conversation :-)
Those are good points.
What I was trying to encourage was a practice of trusting your own strength. I think that morally conscientious people (as I suspect WrongBot is) err too much on the side of thinking they’re cognitively fragile, worrying that they’ll become something they despise. “The best lack all conviction, while the worst are full of passionate intensity.”
Believing in yourself can be a self-fulfilling prophecy; believing in your own ability to resist becoming a racist might also be self-fulfilling. There’s plenty of evidence for cognitive biases, but if we’re too willing to paint humans as enslaved by them, we might actually decrease rationality on average! That’s why I engaged in “prescriptive signaling.” It’s a pep talk. Sometimes it’s better to try to do something than to contemplate excessively whether it’s possible.
Why should your behavior be unaffected? If you want to spend time evaluating a person on their own merits, surely you still can.
Just because I’ll be able to do something doesn’t mean that I will. I can resolve to spend time evaluating people based on their own merits all I like, but that’s no guarantee at all that the resolution will last.
You seem to think that anti-bigots evaluate people on their merits more than bigots do. Why?
If you’re looking for a group of people who are more likely to evaluate people on their merits, you might try looking for a group of people who are committed to believing true things.
Group statistics give you only a prior, and just a few observations of any individual will overwhelm it. And if you start discriminating against gay people because they have a lower average intelligence, then you should discriminate even more strongly against low intelligence itself. It is not the gayness that is the important factor in that case; it is only weakly correlated with it.
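The point that a handful of individual observations swamps a group-level prior can be made concrete with a standard conjugate normal–normal Bayesian update. This is only an illustrative sketch; the numbers (a group mean of 95, an individual scoring around 120, the variances) are entirely made up for the example, not taken from any real data.

```python
def posterior_mean(prior_mean, prior_var, observations, obs_var):
    """Posterior mean for a normal prior after normally-distributed observations.

    Standard conjugate update: precisions (inverse variances) add, and the
    posterior mean is the precision-weighted average of prior and data.
    """
    precision = 1.0 / prior_var + len(observations) / obs_var
    weighted = prior_mean / prior_var + sum(observations) / obs_var
    return weighted / precision

# Group statistic as the prior: mean 95, with wide uncertainty (sd 15)
# about any particular individual.
prior_mean, prior_var = 95.0, 15.0 ** 2
obs_var = 5.0 ** 2  # assumed measurement noise per observation (sd 5)

# Just three direct observations of the individual, all near 120.
obs = [118.0, 122.0, 121.0]
print(posterior_mean(prior_mean, prior_var, obs, obs_var))  # ~119.4
```

After only three observations the posterior mean sits near 119.4: the group prior of 95 has been almost entirely overwhelmed by the individual's own data, which is the comment's point in equation form.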
I also see the problem of bigotry in terms of information and knowledge, but I see bigotry as occurring when there is too little knowledge. I have quite an extensive blog post on this subject.
http://daedalus2u.blogspot.com/2010/03/physiology-behind-xenophobia.html
My conceptualization of this may seem contrived, but I give a much more detailed explanation on my blog along with multiple examples.
I see it as essentially the lack of an ability to communicate with someone that triggers xenophobia. As I see it, when two people meet and try to communicate, they perform a kind of “Turing test”: they exchange information and try to see whether the person they are communicating with is “human enough”; that is, human enough to communicate with, be friends with, trade with, or simply human enough not to kill.
What happens when you try to communicate is that you both use your “theory of mind”, what I call the communication protocols that translate the mental concepts in your brain into the data stream of language that you transmit: sounds, gestures, facial expressions, tone of voice, accents, and so on. If the two “theories of mind” are compatible, then communication can proceed at a very high data rate, because the two theories of mind do so much data compression to fit the mental concepts into the puny data stream of language and then extract them from it.
However, if the two theories of mind are not compatible, then the error rate goes up, and xenophobia is triggered via the uncanny-valley effect. This initial xenophobia is a feeling, and so is morally neutral. How one then acts is not morally neutral. If you seek to understand the person who has triggered the xenophobia, then your theory of mind will self-modify, and eventually you will be able to understand them and the xenophobia will go away. If you seek not to understand the individual, or block that understanding, then the xenophobia will remain.
It is exactly analogous to Nietzsche’s quote “if you look into the abyss, the abyss looks back into you”. We can only perceive something if we have pattern recognition for that something instantiated in our neural networks. If we don’t have the neuroanatomy to instantiate an idea, we can’t perceive the idea, we can’t even think the idea. To see into the abyss, you have to have a map of the abyss in your visual cortex to decode the image of the abyss that is being received on your retina.
Bigots, as a rule, are incapable of understanding the objects of their bigotry (I am not including self-loathing here because that is a special case), and it shows: they attribute all kinds of crazy, wild, and completely unrealistic thinking processes to the objects of their bigotry. I think this was the reason why many invader cultures committed genocide against native cultures by taking children away from natives and fostering them with the invader culture (for example in the US, Canada, and Australia; I go into more detail on that in the post). What bigots often do is make up reasons out of pure fantasy to justify the hatred they feel toward the objects of their bigotry. The Blood Libel against the Jews is a good example: the lie that Jews used the blood of Christians in Passover rituals. This could not be correct. Passover long predates Christianity, blood is never kosher, human blood is never kosher, and no observant Jew could ever use human blood in any religious ceremony. It never happened; it was a total lie, used to justify the hatred that some Christians felt toward Jews. The hate came first; the lie was used to justify the feelings of hatred.
Bigots, as a rule, are afraid of associating with the objects of their bigotry, because they would then come to understand them. The term “xenophobia” is quite correct: there is a fear of associating with the other, because then some of “the other” will rub off on you and you will necessarily become more “other-like”. You will have a map that understands “the other” in your neuroanatomy.
In one sense, to the bigot, understanding “the other” is a “dangerous thought” because it changes the bigot’s utility function such that certain individuals are no longer so low on the social hierarchy as to be treated as non-humans.
There are some thoughts that are dangerous to humans. These activate the “fight or flight” state in an uncontrolled manner, and that can be lethal. This usually requires a lot of priming (years); there are too many safeties that kick in for it to happen by accident. I think this is what Kundalini kindling is. For the most part there isn’t enough direct coupling between the part of the brain that thinks thoughts and the part that controls the stuff that keeps you alive. There is some, and it can be triggered in a heartbeat when you are being chased by a bear, but there is lots of feedback via feelings before you get to dangerous levels. I don’t recommend trying to work yourself into that state, because it is quite dangerous: the safeties do get turned off (that is, unless a bear is actually chasing you).
Drugs of abuse can trigger the same mechanisms, which is one of the reasons they are so dangerous.