I was wondering about that. In what sense is this place a graveyard?
LW used to get a lot more traffic in the past, but don’t let that stop you from contributing. How about writing up a longer post on your thesis about stereotype accuracy?
That specific thesis is mostly just an example. Much of what I would say would be paraphrasing the work of someone else (mainly Lee Jussim) and explaining its relevance to this community. I could do this if people thought it would be productive, but it’s just one of many topics that I think are misunderstood on a large scale.
My more general interest is in the lesser-known fact that many of our hardwired biases and heuristics (e.g., negativity bias) were designed by natural selection to improve accuracy based on goal-relevant criteria. It also seems that the biases formed in response to the environment (e.g., much of the content comprising a stereotype) track reality to a surprising degree.

Imagine a marksman who practices shooting at the same firing range every day, and this range generally has a side-wind of the same direction and intensity. The marksman can manually adjust for this by placing his reticle upwind of the target, but he could also adjust his scope’s reticle such that he can aim for the bullseye and account for the wind at the same time. Once the adjustment is made to the scope, he may have a “biased” tool, but his shots are still centered on the bullseye (on average) and the only online calculations needed to account for the wind on a shot-by-shot basis are minute. What if the marksman moves to another range? Well, in time, he will see his shots wildly missing and make the proper adjustments.

This is probably not a novel analogy, but the surprising thing to me is that social psychology tends to frame any “reticle adjustment” as a bias against which we must fight without testing its performance in the contexts under which the adjustment was made. It’s not that biases and heuristics don’t cause problems, it’s that we have a much poorer understanding of when they cause problems than our field claims.
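To make the analogy concrete, here is a minimal toy simulation (my own sketch with made-up numbers, not anything from the literature): a fixed correction that is a “bias” in the tool keeps shots centered for as long as the environment it was calibrated against stays put, and only produces systematic misses once the environment changes.

```python
import numpy as np

rng = np.random.default_rng(0)

def shoot(n, wind, scope_offset, noise=1.0):
    """Horizontal impact points relative to the bullseye (0.0):
    wind pushes each shot, the scope offset corrects, plus ordinary shot-to-shot noise."""
    return wind + scope_offset + rng.normal(0.0, noise, n)

home_wind = 3.0        # consistent side-wind at the home range (made-up units)
scope_offset = -3.0    # reticle adjusted once to cancel that wind

home = shoot(10_000, home_wind, scope_offset)
away = shoot(10_000, wind=-1.0, scope_offset=scope_offset)  # new range, different wind

print(f"home range: mean error {home.mean():+.2f}")  # ~0.00: 'biased' tool, accurate shots
print(f"new range:  mean error {away.mean():+.2f}")  # ~-4.00: same tool now misses systematically
```

The point isn’t the particular numbers; it’s that a “biased” tool and accurate output can coexist right up until the environment shifts.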
This general idea applies to stereotypes, but also to:
Negativity bias
Attribution errors (including the FAE)
Availability heuristic
Clustering bias and other illusory-correlation-type biases
Base rate neglect
Confirmation bias (this claim might get me in trouble here… haha)
etc.
In these spheres, people generally understand that heuristics optimize for something. Frequently, people think they were optimized for an ancestral environment that’s quite unlike the world we are living in at the moment.
I think that’s a question where a well-written post would be very useful.
This is probably not a novel analogy, but the surprising thing to me is that social psychology tends to frame any “reticle adjustment” as a bias against which we must fight without testing its performance in the contexts under which the adjustment was made.
I would think that many sociologists would say that many people who are racist and look down on Blacks are racist because they don’t interact much with Blacks. If the adjustment was made during a time when the person was at an all-White school, the interesting question isn’t whether the adjustment performs well within the context of the all-White school but whether it also performs well for decisions made later outside of that homogeneous environment.
It was poor wording on my part when I wrote “the contexts under which the adjustment was made”. The spirit of my point is much better captured by the word “applied” (vs. made). That is, it looks like a balanced reading of stereotype literature shows that people are quite good in their judgments of when to apply a stereotype. My point is therefore a bit more extreme than it might have appeared.
I would think that many sociologists would say that many people who are racist and look down on Blacks are racist because they don’t interact much with Blacks.
I agree with this and would add that such perceptions of superiority could be amplified by other members of the community reinforcing those judgments.
If the adjustment was made during a time when the person was at an all-White school, the interesting question isn’t whether the adjustment performs well within the context of the all-White school but whether it also performs well for decisions made later outside of that homogeneous environment.
To get a little deeper into this topic, I should mention that our stereotypes are conditional and, therefore, much of the performance of a stereotype depends on applying it in the proper contexts. The studies that look at when people apply stereotypes tend to show that stereotypes are used as a last resort, under conditions in which almost no other information about the target is available. We’re surprisingly good at knowing when a stereotype is applicable and seem to have little trouble spontaneously eschewing one when other, more diagnostic information is available.
My off-the-cuff hypothesis about students from an all-White school would be that they would show racial preferences when, say, only shown a picture of a Black person. However, ask these students to provide judgments after a 5-minute conversation with a Black person or after reviewing a resume (i.e., after giving them loads and loads of information) and race effects will become nearly or entirely undetectable. I don’t know of any studies looking at this exactly and urge you to take my hypothesis with a grain of salt, but my larger point is this: You might be surprised.
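To make the “prior vs. individuating information” framing concrete, here’s a toy Bayesian sketch (my own illustration with made-up numbers, not a model from the stereotype literature): treat the stereotype as a prior and each piece of individuating information as a diagnostic cue. With no cues, perceivers with different priors disagree a lot; after even a handful of cues, the posterior is dominated by the individuating information and the priors barely matter.

```python
import numpy as np

def posterior(prior, likelihood_ratios):
    """Update P(trait) given independent cues, each expressed as a likelihood
    ratio P(cue | trait) / P(cue | not trait)."""
    odds = prior / (1.0 - prior) * np.prod(likelihood_ratios)
    return odds / (1.0 + odds)

# Two perceivers start from different stereotype-based priors about some trait.
prior_a, prior_b = 0.6, 0.2

for n_cues in (0, 1, 3, 6):
    cues = [4.0] * n_cues  # hypothetical individuating cues, each favoring the trait 4:1
    pa, pb = posterior(prior_a, cues), posterior(prior_b, cues)
    print(f"{n_cues} cues: A = {pa:.2f}, B = {pb:.2f}, gap = {pa - pb:.2f}")
```

In this sketch the gap between the two perceivers shrinks from 0.40 with no cues to roughly zero after a half-dozen, which is the sense in which a stereotype functions as a last resort rather than an override.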
From memory, without Googling the studies: I remember that there are studies testing whether having a “Black name” on a resume will change response rates, and it does.
There are also those studies suggesting that blinding evaluators to piano players’ gender is required to remove a gender bias.
Do you have another read on the literature?
So, I’m pretty sure we know that humans have a bias against anyone sufficiently different, and that this evolved before humanity as such. We certainly know that humans will try to rationalize their biases. We also have a great deal of evidence for past failures of scientific racism, which has set my prior for the next such theory very low.
We also have a great deal of evidence for past failures of scientific racism, which has set my prior for the next such theory very low.
I’m not sure what you mean here. How are you defining scientific racism and how is it relevant to what we’re talking about?
As a general query to other readers: Is it bad form to just ignore comments like this? I’m apt to think it unwise to try to talk about this topic here if it is just going to invoke Godwin’s Law.
In general, you can ignore comments when you don’t think a productive discussion will follow.
LW by its nature has people who argue a wide array of positions, and in a case like this you will get some criticism. Don’t let that turn you off LW or take it as a suggestion that your views are unwelcome here.