This is an interesting post, but...

Hating Hitler doesn’t mean you’re biased against Hitler. Likewise, having a belief about a particular ethnic group doesn’t mean you’re biased for or against them.
Then how do you know what score you should get on the IAT? I don’t know what an unbiased score would be, but an equal-for-both-groups score is most likely biased.
In the Israel vs. Palestine case, your answer would depend more on some meta-level decisions than on ironing out another decimal point of bias. For instance: should a settlement give equal benefits to both sides? Should it compensate for historic injustices? Should it maximize expected value for the participants? Should it maximize expected value for the world?
If you want to maximize expected value for the world, you would end up calculating something like this:
Israelis have given us a hugely disproportionate number of the world’s famous scientists, musicians, artists, writers, producers, bankers, doctors, and lawyers.
Palestinians have given us a hugely disproportionate number of the world’s famous suicide bombers.
The answer you would then arrive at would differ more from the answer you would reach with “equitable outcome” as your goal than any amount of bias could account for. (Unless you really, really hate lawyers.)
So I don’t think this approach gets at any of the hard problems.
Aargh. That’s a good point and I clearly need to think about this more. I don’t have a clear theory yet, but I’m going to brain-dump my thoughts on this topic.
Then how do you know what score you should get on the IAT? I don’t know what an unbiased score would be, but an equal-for-both-groups score is most likely biased.
The score you should get on the IAT should be correlated to your conscious opinion. If you consciously think Palestinians are inferior, then you should be happy with an IAT score showing you think Palestinians are inferior. If you consciously think Palestinians are equal to Israelis, you should be trying to get an IAT score reflecting that equality. It’s all about trying to get the unconscious mind to correspond to your rational beliefs.
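For concreteness, here’s a minimal sketch of how an IAT result might be scored, loosely following the D-score idea from Greenwald, Nosek, and Banaji (2003): the difference between your mean reaction times in the “incompatible” and “compatible” pairings, divided by the standard deviation of all trials pooled. The function and the reaction times below are made up for illustration; real scoring also drops outlier trials and penalizes errors. A D near zero is the “equal-for-both-groups” score discussed above.

```python
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Toy IAT D-score: (mean incompatible RT - mean compatible RT)
    divided by the standard deviation of all trials pooled together.
    Loosely follows Greenwald et al. (2003); real scoring also drops
    outlier trials and penalizes errors, which is omitted here."""
    pooled = list(compatible_ms) + list(incompatible_ms)
    return (mean(incompatible_ms) - mean(compatible_ms)) / stdev(pooled)

# Hypothetical reaction times in milliseconds.
# "Compatible" block: Palestinian+bad / Israeli+good share response keys;
# "incompatible" block: Palestinian+good / Israeli+bad share response keys.
compatible = [612, 580, 655, 601, 590, 623]
incompatible = [712, 698, 740, 690, 705, 733]

print(f"D = {iat_d_score(compatible, incompatible):.2f}")
# Positive D on this sign convention: slower when pairing
# "Palestinian" with "good" -- the unconscious association at issue.
```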
I consciously assent to the proposition “Palestinians are more likely to be suicide bombers than some other groups, but it’s still only a tiny fraction of their population.” My unconscious probably believes something closer to “Palestinians = suicide bombers!” Further, my conscious mind stops way short of the proposition “All Palestinians are bad people”; my unconscious mind probably believes this second proposition. I’d like my unconscious mind to get way closer to my conscious beliefs in both areas.
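To make the gap between those two propositions concrete, here’s the base-rate arithmetic with loudly made-up numbers (placeholders, not estimates): even a rate of suicide bombing far above that of other groups still leaves the overwhelming majority of the group as non-bombers, which is exactly what the conscious proposition says.

```python
# All figures are hypothetical, chosen only to show orders of magnitude.
population = 5_000_000   # hypothetical group population
bombers = 500            # hypothetical lifetime number of bombers

rate = bombers / population
print(f"P(bomber | member of group) = {rate:.6f}")      # 0.000100
print(f"Fraction who are not bombers: {1 - rate:.4%}")  # 99.9900%
```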
My conscious, rational brain believes that most Palestinians are probably decent people who have been driven to extremes by their situation. My conscious mind also believes that the best Middle East peace plan is one where everyone, Israeli or Palestinian, is considered equally deserving of happiness simply because they are human. That’s my moral system, and yours may differ. The point is, that is my moral system, and I would like to be able to operate on it. If I’m going around subconsciously thinking that Palestinians are bad and don’t deserve happiness, I can’t enact my goals.
This ties into the halo effect and the horns effect, where people tend to classify others as either all good or all bad. My belief that Palestinians are sometimes suicide bombers probably makes me think that they’re uglier, stupider, and meaner than they really are. A lot of that is mediated through the concept “bad”, which is one reason I’m so interested in weakening my unconscious link between “Palestinians” and “bad”.
If you wanted to adjust moral value for the Israelis’ greater economic value, you’d still need a way to make sure you’re not over-adjusting, i.e., that your unconscious mind doesn’t dislike the Palestinians even more than your conscious mind does. Alternatively, you’d want to make sure you weren’t going soft, with your unconscious mind liking the Palestinians more than your conscious mind thought they deserved. I can’t think of an easy way to do that with the IAT, but I bet there’s a complicated one, if you imagine a hypothetical IAT with perfect reliability and let me get away with a mere proof-of-concept.
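Here is what that proof-of-concept might look like, granting the big assumptions above: a perfectly reliable IAT, plus some defensible way of putting its score and your conscious self-report on a common scale. The function, scale, and threshold below are all invented for the sketch.

```python
def adjustment_check(explicit, implicit, tolerance=0.1):
    """Compare a conscious attitude score with an implicit (IAT) score.

    Both are assumed to sit on a common scale where 0 is neutral and
    negative means anti-Palestinian -- a big assumption, since real
    IAT scores and self-reports have no agreed common scale.
    """
    gap = implicit - explicit
    if abs(gap) <= tolerance:
        return "unconscious roughly matches conscious attitude"
    if gap < 0:
        return "over-adjusting: unconscious dislikes them more than you endorse"
    return "going soft: unconscious likes them more than you endorse"

# Hypothetical: conscious attitude mildly negative after the
# economic-value adjustment (-0.2); implicit score strongly negative (-0.8).
print(adjustment_check(explicit=-0.2, implicit=-0.8))
# -> over-adjusting: unconscious dislikes them more than you endorse
```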
Hope that makes sense.