I’m not sure what “statistically immoral” means, nor have I ever heard the term, which makes me doubt it’s in common use (googling it does not bring up any uses of the phrase).
I think we’re using the term “historical circumstances” differently; I simply mean what’s happened in the past. Isn’t the base rate purely a function of the records of white/black convictions? If so, then the fact that the rates are not the same is the reason that we run into this fairness problem. I agree that this problem can apply in other settings, but in the case where the base rate is a function of history, is it not accurate to say that the cause of the conundrum is historical circumstances? An alternative history with equal, or essentially equal, rates of convictions would not suffer from this problem, right?
I think what people mean when they say things like “machines are biased because they learn from history and history is biased” is precisely this scenario: historically, conviction rates are not equal between racial groups and so any algorithm that learns to predict convictions based on historical data will inevitably suffer from the same inequality (or suffer from some other issue by trying to fix this one, as your analysis has shown).
No. Any decider will be unfair in some way, whether or not it knows anything about history. The decider could even be a coin flipper and it would still be biased. One could say that the unfairness is baked into the reality of the base-rate difference.
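To make the coin-flipper point concrete, here is a minimal simulation sketch (my own illustration, with made-up base rates of 0.3 and 0.6, not the numbers from the diagram): a decider that flips a fair coin, knowing nothing about anyone, ends up with equal error rates across the two groups, yet the predictive value of its “high risk” flag simply tracks each group’s base rate.

```python
import random

random.seed(0)

def simulate(base_rate, n=100_000):
    """Coin-flip decisions for a group with the given underlying base rate."""
    tp = fp = fn = tn = 0
    for _ in range(n):
        would_reoffend = random.random() < base_rate   # ground truth
        flagged_high_risk = random.random() < 0.5      # coin flip, blind to everything
        if flagged_high_risk and would_reoffend:
            tp += 1
        elif flagged_high_risk:
            fp += 1
        elif would_reoffend:
            fn += 1
        else:
            tn += 1
    fpr = fp / (fp + tn)   # false positive rate
    fnr = fn / (tp + fn)   # false negative rate
    ppv = tp / (tp + fp)   # predictive value of a "high risk" flag
    return fpr, fnr, ppv

# Hypothetical base rates, chosen only to make the difference visible.
for group, rate in [("group A", 0.3), ("group B", 0.6)]:
    fpr, fnr, ppv = simulate(rate)
    print(f"{group}: FPR={fpr:.2f}  FNR={fnr:.2f}  PPV={ppv:.2f}")

# Both groups get FPR ~= 0.5 and FNR ~= 0.5 (equal error rates), but PPV
# tracks each group's base rate, so the flag means something different for
# each group: predictive parity is violated although the decider knows nothing.
```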
The only way to fix this is not to fix the decider, but either to somehow make the base-rate difference disappear, or to relax the definition of fairness so that it is less stringent and actually satisfiable.
And in common language and in common discussion of algorithmic bias, “bias” is decidedly NOT merely a statistical notion. It always carries a moral judgment: the violation of a fairness requirement. To say that a decider is biased is to say that the statistical pattern of its decisions violates a fairness requirement.
The key message is that, by the common-language definition, “bias” is unavoidable. No amount of trying to fix the decider will make it fair. Blinding it to history will do nothing. The unfairness is in the base rates, and in the definition of fairness.
The base rates in the diagram are not historical but “potential” rates: they show the proportion of current inmates up for parole who would be re-arrested if paroled. In practice these are indeed estimated from historical rates, but as long as the true base rates differ in reality, no algorithm can be fair in the two senses described above.
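For completeness, here is the arithmetic behind that claim (a standard confusion-matrix identity, assuming the “two senses” are equal predictive value and equal error rates; the symbol p below is just a group’s base rate, nothing specific to this diagram):

```latex
% With base rate p, predictive value PPV, and error rates FNR and FPR,
% the confusion-matrix entries force:
\[
  \mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\left(1-\mathrm{FNR}\right)
\]
```

So if two groups have different base rates p, a decider with equal PPV and equal FNR across them must have different FPRs; something has to give, and the only ways out are equal base rates or a weaker notion of fairness.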