> Your scenarios implicitly assume that anyone whose expected intelligence is below the median will get treated as dumb
Well, actually, I thought that I made this assumption generously explicit. Evidently, you had implicit assumptions behind your claim that taking correlates into account would always lead to fewer false positives. What were these additional assumptions?
> and that this is somehow much, much worse than what happens to people whose expected intelligence is exactly median.
I did not make any assumption quantifying how much worse it is. It need only be marginally worse.
> Furthermore, even under this assumption you will find that your example falls apart if there is any way besides race to obtain information correlated with intelligence.
No. I can construct similar counterexamples where there are two observable properties (which you can think of as black/white, male/female), corresponding to four populations (black males, …, white females). You will need to make your assumptions more explicit if you want to rule out these kinds of counterexamples.
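For concreteness, here is a minimal Python sketch of the simplest such construction, using just two groups; the group means, the spread, and the population size are made-up numbers chosen only to exhibit the structure:

```python
import random

random.seed(0)
N = 100_000

# World 2: an observable property splits the population into two groups
# whose intelligence distributions differ. All numbers are illustrative.
def draw_person():
    group = random.choice(["A", "B"])       # observable property
    mean = 0.45 if group == "A" else 0.55   # E[intelligence | group]
    return group, random.gauss(mean, 0.15)  # latent actual intelligence

people = [draw_person() for _ in range(N)]

# Decision rule under discussion: treat anyone whose *expected* intelligence,
# given everything observable, falls below the median expectation as dumb.
#
# World 1 (no group information): everyone shares the same expectation, so
# no one falls strictly below the median expectation and no smart person is
# treated as dumb.
#
# World 2 (group is observable): group A's expectation (0.45) is below the
# median expectation, so all of group A is treated as dumb, including its
# genuinely smart members.
top_1_percent = sorted(i for _, i in people)[int(0.99 * N)]
smart_treated_dumb = sum(1 for g, i in people if g == "A" and i >= top_1_percent)
print(f"top-1% people treated as dumb in World 2: {smart_treated_dumb}")
```

In World 1 the count is zero by construction; in World 2 it is strictly positive, which is all the counterexample needs.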
> I did not make any assumption quantifying how much worse it is. It need only be marginally worse.
Sorry, I was assuming a utility function that summed over the amount of suffering each person experienced. You seem to be using a Rawls-style utility function based on minimizing the suffering of the worst-off individual. (BTW, that's a very stupid function to use in anything outside a very simple toy model.)
> No. I can construct similar counterexamples where there are two observable properties (which you can think of as black/white, male/female), corresponding to four populations (black males, …, white females).
Only by fiddling with the parameters very precisely.

If your assumption is that people whose expected intelligence is below the median (or really the nth percentile, for any n) will be treated as dumb, the only way a counterexample like yours can work is by having lots of people exactly tied for the nth percentile. And the more other information is available, the more the numbers in the scenario must be jiggered for that to happen.
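As a toy illustration of the point, assuming k independent binary correlates, each shifting the expectation by a small made-up weight: with one correlate there are only two possible estimates, so huge blocks of people tie at any percentile; with more correlates the estimates spread out and exact ties become rare:

```python
import random

random.seed(0)

def distinct_estimates(k, n=100_000):
    """Count distinct expected-intelligence values among n people when k
    independent binary correlates are observable. Each correlate shifts
    the expectation by a small made-up weight."""
    weights = [random.uniform(-0.1, 0.1) for _ in range(k)]
    estimates = set()
    for _ in range(n):
        attrs = [random.getrandbits(1) for _ in range(k)]
        estimates.add(0.5 + sum(w for w, a in zip(weights, attrs) if a))
    return len(estimates)

for k in (1, 2, 5, 10, 15):
    print(f"{k} correlate(s) -> {distinct_estimates(k)} distinct estimates")
```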
> > You will need to make your assumptions more explicit if you want to rule out these kinds of counterexamples.
> Sorry, I was assuming a utility function that summed over the amount of suffering each person experienced.
Your claim was that more correlates means fewer false positives. This is an abstract mathematical claim about epistemic probability. Utility functions don’t enter into it, at least not explicitly. It’s a claim about some class of probability distributions and criteria for categorization (“positives”). I’m just trying to figure out what class of distributions and criteria you’re talking about.
My counterexamples show that your claim doesn’t apply in full generality. You now claim that such counterexamples require “fiddling with the parameters very precisely.” I take this to be the claim that all scenarios satisfy your claim, except for some measure-zero subset (with respect to some natural measure). Can you prove this?
> the only way a counterexample like yours can work is by having lots of people exactly tied for the nth percentile.
I’m not sure how to make sense of this. It doesn’t seem to reflect an understanding of my example.
I argued in the continuous limit. A measure-zero subset of people are tied for exactly the nth percentile. Recall that I said that “the proportion of individuals with intelligence between a and b is b − a.” So, the proportion of people whose intelligence is exactly tied for any value x is x − x = 0.
Of course, the continuous limit is only an approximation of the discrete reality. But I can find discrete examples where this proportion is arbitrarily small. It’s never “lots” relative to the size of the entire population, if that population is of any significant size.
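A quick sketch of the discrete version, assuming N people with evenly spaced intelligence values i/N: the fraction tied at any single value is exactly 1/N, which can be made as small as you like:

```python
# Discrete stand-in for the continuous limit: N people with evenly spaced
# intelligence values i/N. The fraction exactly tied at any one value is 1/N.
for N in (10, 1_000, 1_000_000):
    values = [i / N for i in range(N)]
    tied_fraction = values.count(values[N // 2]) / N  # ties at the median value
    print(N, tied_fraction)
```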
> I meant lots of people tied for the nth percentile in terms of your estimate of their intelligence, which was happening in your scenarios because the amount of information available was discrete and very small.
Okay, good. That makes a lot more sense.

What you say is true of the counterexamples I’ve described explicitly so far. But it is just an artifact of their being the simplest representatives of their family. I can construct similar counterexamples where the number of subpopulations in World 2 is arbitrarily large, and each subpopulation has a different expected intelligence. The proportion of people tied for any given expected intelligence can be arbitrarily small.
ETA: Also, these counterexamples work even if we redefine “treating smart people as dumb” to mean, “treating someone in the top 1% as if they were in the bottom 1%”. We still have a World 1 where no one smart is treated as dumb, and a World 2 where some smart people are treated as dumb.
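Here is a sketch of the many-subpopulation version, again with made-up numbers: M subpopulations, each with a different expected intelligence, so at most a 1/M fraction of the population ties at any single estimate, and yet some top-1% individuals still fall below the treatment threshold:

```python
import random

random.seed(0)
N, M = 200_000, 100  # population size and number of subpopulations (made up)

# Each subpopulation has its own expected intelligence, so at most a 1/M
# fraction of the population shares any single estimate.
means = [0.3 + 0.4 * g / (M - 1) for g in range(M)]

people = [(g, random.gauss(means[g], 0.15))
          for g in (random.randrange(M) for _ in range(N))]

median_expectation = sorted(means)[M // 2]  # treatment threshold
top_1_percent = sorted(i for _, i in people)[int(0.99 * N)]

# Smart people (top 1% in actual intelligence) whose subpopulation's
# expectation is below the median expectation get treated as dumb.
count = sum(1 for g, i in people
            if means[g] < median_expectation and i >= top_1_percent)
print(f"top-1% people treated as dumb: {count}")
```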