As opposed to the banishment/disenfranchisement etc. of actual convicted criminals?
If you remove the trait, you won’t have criminals. A genetics-caused relationship, logically, would allow you to do this. You’ll know beforehand who will be a criminal. Not only that, since it would assist in establishing likelihood, you should be able to factor race into the evidence in criminal trials. This would be a terrible idea.
Whether races exist as useful categories that allow us to make predictions about observations is an epistemic question. We have very strong evidence for this claim.
Whether some races, in modern Western countries, are more prone to have certain “bad” traits (e.g. low IQ, high crime rates, etc.) is also an epistemic question. We also have strong evidence for these claims.
You have nothing but correlation, and correlation based on fuzzy and corrupted data. Correlation is not causation, and you seem to struggle mightily with the difference.
Politically incorrect as they are, some of these claims, specifically the one about IQ, have some degree of plausibility, due to the high heritability of some of these traits. But the jury is still out.
Whether we should discriminate against these races with “bad” traits is an entirely different kind of question, a moral question. It doesn’t follow from any of the previous claims.
These claims are also the result of you not seeing the distinction between logic based on causation and logic based on correlation.
Only if the correlation were perfect. In any case, even if you were able to identify criminals before the fact, it doesn’t mean that it would be moral to “punish” them beforehand.
There are good reasons not to use profiling in criminal investigations and trials.
Anyway, what evidence would make you accept the claim that one group of easily identifiable people was more prone to commit crime than the general population? If you were given this evidence, would you consider it appropriate to use profiling against this group in criminal trials, or otherwise banish/disenfranchise or even genocide them?
Evidence would be appreciated.
I don’t want to come across as rude, but I don’t think you know what you are talking about when you say ‘Correlation is not causation’:
Distinguishing causation from correlation is important only when one of the variables is under your control.
There is some controversy about whether Evidential Decision Theory or Causal Decision Theory or something else is the ultimately ideal way of making decisions, but in practice the best that you can do in most non-pathological scenarios is to use some approximation of Causal Decision Theory.
You can decide to smoke or not smoke, so establishing whether smoking causes cancer or is merely correlated with it via a common cause is of paramount importance. (*)
People’s race, on the other hand, is not a decision variable. You can change neither your own race nor somebody else’s race. Therefore, the ‘correlation vs causation’ issue is irrelevant.
But more generally, nobody in this thread is suggesting that public policy should be predicated on race.
(* When using EDT, the relevant question becomes whether, after conditioning on everything you know, including your own preferences, smoking is still positively correlated with cancer. If it is (**), and you value not getting cancer more highly than smoking, then you should decide not to smoke. If it is not, then you can smoke if you like.)
(** In case anybody is wondering, we are pretty certain that it is, due to randomized controlled trials on animals and humans.)
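The smoking example above can be made concrete with a toy “smoking lesion” model. All numbers below are hypothetical, chosen only to illustrate why the correlation/causation distinction matters when a variable is under your control:

```python
# Toy "smoking lesion" joint distribution. All numbers are hypothetical:
# a hidden gene G raises both the chance of smoking S and of cancer C,
# while smoking itself has no causal effect on cancer in this toy world.
P_G = 0.3                               # P(gene)
P_S_GIVEN_G = {True: 0.8, False: 0.2}   # P(smoke | gene)
P_C_GIVEN_G = {True: 0.5, False: 0.05}  # P(cancer | gene), independent of S

def joint(smoke, cancer, gene):
    pg = P_G if gene else 1 - P_G
    ps = P_S_GIVEN_G[gene] if smoke else 1 - P_S_GIVEN_G[gene]
    pc = P_C_GIVEN_G[gene] if cancer else 1 - P_C_GIVEN_G[gene]
    return pg * ps * pc

def p_cancer_given_smoke(smoke):
    num = sum(joint(smoke, True, g) for g in (True, False))
    den = sum(joint(smoke, c, g) for c in (True, False) for g in (True, False))
    return num / den

# Observationally, smokers get cancer more often (the raw correlation):
print(p_cancer_given_smoke(True))   # ≈ 0.334
print(p_cancer_given_smoke(False))  # ≈ 0.094
# But intervening on smoking leaves P(cancer) at
# P(G)*0.5 + (1-P(G))*0.05 = 0.185 either way, which is what matters
# for the decision to smoke on the CDT view.
```

In this toy world, conditioning on the gene (if you could observe it) makes the smoking–cancer correlation disappear, which is the spirit of the EDT parenthetical above.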
You can change your future children’s race by deciding whom to have children with. More generally, in principle it is possible to change the racial makeup of the next generation by incentivizing certain races to have more children and other races to have fewer children.
Even more generally, just killing off people of certain races works better for changing “the racial makeup of the next generation”.
Such as?
It risks creating a self-fulfilling prophecy: statistics show that “Martians” are more likely to be convicted, thus you lower the bar to convict them, which makes them even more likely to be convicted, and so on. In principle you could avoid this by properly conditioning not on the Martian conviction rate, but on the rate at which Martians have been observed doing things which are considered prima facie evidence in a trial. In practice, you would likely end up introducing a bias.
It can be exploited: if, by symmetric Bayesian reasoning, you raise the bar to convict “Earthlings”, you create an incentive for Earthlings to commit crime. Even if most Earthlings are very much law-abiding, a few bad Earthlings can benefit a lot from the system and cause lots of damage.
It is intrinsically politically controversial: it runs contrary to the interests of the Martians, and Martians can realistically coordinate to lobby against it, and if their lobbying is unsuccessful, this becomes a point of constant friction between Martians and Earthlings. You don’t want to incite this kind of tension.
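The self-fulfilling-prophecy worry in the first point can be sketched numerically. This is a hypothetical toy model (the rates and the threshold rule are made up), but it shows how conditioning on conviction rates that are themselves policy-dependent can ratchet:

```python
# Hypothetical toy model of the feedback loop: the true rate at which
# "Martians" commit crimes is fixed, but the evidence threshold used to
# convict them is lowered in proportion to their *observed* conviction
# rate. All numbers and the threshold rule are made up.
TRUE_GUILT_RATE = 0.05

def conviction_rate(threshold):
    # A lower threshold convicts all the guilty plus a slice of the
    # innocent (the slice grows as the threshold drops below 0.5).
    false_positive_rate = max(0.0, 0.5 - threshold)
    return TRUE_GUILT_RATE + (1 - TRUE_GUILT_RATE) * false_positive_rate

observed = TRUE_GUILT_RATE
for step in range(5):
    threshold = 0.5 - observed  # prior statistics lower the bar
    observed = conviction_rate(threshold)
    print(step, observed)
# The observed conviction rate ratchets upward every round even though
# the underlying guilt rate never changed.
```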
If you’re a Bayseian, you might refuse to hire them because of the increased probability that they are a criminal. That would not be punishment, since you are not morally obligated to hire them at all. Likewise, charging them more for insurance, or walking across the street when you see one, is not “punishment”.
Well, I may refuse to hire someone I deem racist, on good Bayesian grounds too.
While you may have Bayseian grounds to not hire racists, I doubt that you’d have Bayseian grounds to not hire Bayseian racists. There’s little reason to believe that racism based on accurate Bayseian calculation is associated with the same negative traits as racism in general. So if you’re hiring on a Bayseian basis, you should divide the potential racist hires into Bayseian and non-Bayseian racists and refuse to hire only the non-Bayseian ones.
(Of course, the same applies to hiring minorities, if you can divide the minorities into similar subgroups.)
But being a Bayesian is a hidden variable, while having a race-based hiring policy is mostly observable. And having a race-based hiring policy also correlates with being a “stupid” racist. Oops.
Live by Bayes, die by Bayes :D
To engage with the comment some more:
I’d need a bit of clarification of what a racist means when he self-describes his racism as based on accurate Bayesian calculation. (Given the Dunning-Kruger effect… I may actually expect such a person to do worse at math [than people who know the Bayes formula].)
So, suppose you are automating your hires by writing an app into which you enter the candidate’s responses to some interview questions and the like. Perhaps 20 to 50 items total. Does a “Bayesian racist” add a radio group for race, which when set to black lowers the score?
edit: note, I’m using ‘self describes as’ for evidence.
Ohh, I do. I know that racists are worse hires, and I don’t know anything in particular about racists born on Tuesdays or racists who self-describe as Bayesian.
Given equal qualifications, I expect (known to me as self-described) Bayesian racists to be overall more stupid. Especially ones who think that adding ‘Bayesian’ to ‘racist’ should entitle them to less discrimination than adding ‘Bayesian’ to ‘black’ would.
edit: And if I care about understanding of statistics, I’ll add a math question or two to the interview quiz, which I suspect a lot of self-described Bayesians, racist or otherwise, are going to fail. Among those who don’t fail, I can further use racism as evidence.
Is that the same “I know” as in e.g. “I know that atheists have no morals”..?
http://www.huffingtonpost.com/2012/01/27/intelligence-study-links-prejudice_n_1237796.html
You seem to be confusing P(A|B) and P(B|A). Your link is to some PR about a paper (do you have a link to the paper itself, by the way?) which claimed that dumb kids in the UK grew up to be more racist than one would otherwise expect. And…?
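A quick calculation with entirely hypothetical numbers makes the P(A|B) vs P(B|A) point explicit: a study can report an elevated P(racist | low IQ) while P(low IQ | racist) remains a separate number that depends on base rates:

```python
# Entirely hypothetical numbers, only to show that P(A|B) and P(B|A)
# are different quantities linked through base rates via Bayes' rule.
P_LOW_IQ = 0.2             # assumed base rate of "low IQ"
P_RACIST_GIVEN_LOW = 0.30  # assumed P(racist | low IQ), the study's direction
P_RACIST_GIVEN_HIGH = 0.10 # assumed P(racist | not low IQ)

# Bayes: flip the conditional around.
p_racist = (P_LOW_IQ * P_RACIST_GIVEN_LOW
            + (1 - P_LOW_IQ) * P_RACIST_GIVEN_HIGH)
p_low_given_racist = P_LOW_IQ * P_RACIST_GIVEN_LOW / p_racist

print(p_racist)            # ≈ 0.14
print(p_low_given_racist)  # ≈ 0.43, not 0.30: the two directions differ
```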
If there are good Bayesian grounds… Someone needs to demonstrate a hiring situation where group information masks individual information, AKA the resume.
The information in the resume may need to be evaluated taking race into account because of affirmative action.
Adjusted for confidence in the factual accuracy of resumes, it’s a tough call.
You’re allowed to check
I’m not sure HR would approve racial stereotype studies as part of the hiring process.
And those traits are…? The only negative trait I can think of is “not signaling membership in the high-status ideology”.
Uh, the obvious? Treating people badly because one “believes” that physical or intellectual differences imply moral inferiority. Generally speaking, everyday racists are mostly quite shitty to black people.
A non-Bayseian racist wants to decrease the utility of other people either as a terminal preference (or as something that has the same practical implications as a terminal preference, such as being disgusted by their presence) or because of bias. Bias is irrational and a bad trait, and correlated with bias in other areas as well. Decreasing the utility of other people as a terminal preference is not irrational in the sense of involving bad logic, but it’s something that I want to avoid if I hire someone for any job involving other people.
It’s always possible that they can compartmentalize their racism or that they will be sufficiently deterred by the threat of lawsuits or being fired that they don’t cause any damage, but Bayseianism is about the odds; the odds are that they are more likely to cause problems even if not every one does.
Huh? Maybe I don’t understand what makes a racist non-Bayesian, but most definitions of racism revolve around believing that there are innate and significant differences between races. That’s an epistemic issue. How do you derive from that the desire to “decrease the utility of other people”?
Someone who treats races differently only based on actual (probabilistic) differences between races is a Bayseian racist.
First, for the world outside of LW that’s a meaningless distinction.
Second, I’m not sure how you know what the actual differences are. Doesn’t it boil down to the “Bayesian racists” being able to cite some science to support what they believe and the “non-Bayesian racists” not being able to?
Third, you still need to jump the gap between believing the races to be different and wanting to “decrease utility” of other people.
There is no gap. If you have reason to believe the races to be different, and act differently towards them based solely on this difference, you’re a Bayseian racist, and I do not claim that Bayseian racists want to decrease utility of other people. Non-Bayseian racists do; a non-Bayseian racist is different from a Bayseian racist.
So, let’s take some Southern redneck. He interacts with black people on a regular basis and based on his personal experience he came to the conclusion that they are pretty damn dumb, dumber than white rednecks, anyway. Does he have a “reason to believe”? Is he a Bayesian racist?
Or let’s take Alice. Alice knows the statistics about crime rates among black males and, say, Asian males. So on an empty street when she sees a black male she actively avoids him, but when she sees an Asian male she does not. Is she a Bayesian racist?
If there are no cognitive biases involved, sure. In practice, I think that would be unlikely.
On the other hand, someone who says “I find the presence of black people to be disgusting. I would not hire one because I don’t want to be near them” would be a non-Bayseian racist. There’s no Bayseian reason for, for instance, having segregated water fountains.
Given how much this was moderated down, I wonder how many people think there is a Bayseian reason for having segregated water fountains.
It’s Bayesian not Bayseian.
You’re using the word “Bayesian” here as a synonym for “rational”, right?
Do you think there’s a “Bayesian reason” for having segregated schools?
Well, Bayseian is a synonym for being rational (or for a subset of being rational), so it amounts to that.
I don’t know. If you can come up with a reason that depends on the higher probability that some races have some traits, I suppose there would be. I would of course like to see such a reason first.
8-0 No, it isn’t.
A Bayesian, in this context, is one who practices the Bayesian approach to uncertainty. Rationality is much wider than that.
If he got his opinion by updating it constantly and is willing to update it in the other direction given further evidence, yes. What he actually ends up doing with it is another matter entirely. I wouldn’t expect a Bayesian redneck to join the KKK, for example.
I’d think she’s either committing the fallacy of trusting statistics to exactly predict the individual case, or simply not doing proper cost analysis. Even if the statistics say there are no unsolved crimes and none of the crimes are committed by Asians, the expected negative utility of running into the first Asian criminal in history should outweigh the inconvenience of avoiding one person on an otherwise empty street.
In that hypothetical world, which is very different from ours, actively avoiding Asian males would be as weird as actively avoiding harmless old grannies, and doing weird things carries a nonzero social cost.