That seems unlikely to me, for basically the same reason it seems unlikely that wealth inequality would increase if wealthy people were unable to do some wealth-increasing thing that poor people could do. But sure, if you assume this, you’d reach different conclusions than I do.
Well, that rather depends on whether we define “wealth inequality” as “inequality caused by the wealth distribution” or “inequality in the wealth distribution”. If the world was divided into two different castes, rich and poor, each of whom could only do half the utility-increasing things, it seems to me that they would be unequal, because if a poor person wanted to do a rich-person thing, they couldn’t. If you would consider them equal (a similar world could be divided by race or gender), then I guess the term in your utility function that you call “equality” is different from mine, even though they have the same labels. Odd, but there you go.
If the “utility-increasing things” the rich and poor groups were capable of doing were equally utility-increasing, yeah, I’d probably say that we’d achieved equality between rich and poor. If you would further require that they be able to do the same things before making that claim, then yes, we’re using the term “equality” differently. Sorry for the confusion; I’ll try to avoid the term in our discussion moving forward.
Huh. Well, I guess we’ve identified the mismatch. Tapping out, unless you want to argue for Dave!equality.
Sure, why not?
Rawls has done most of the work here, since this is basically the Rawlsian “veil of ignorance” test for a society—if the system is set up so that I’m genuinely, rationally indifferent between being born into one group and the other, the two groups can be considered equal.
This seems like a pretty good test to me. If we have a big pile of stuff to divide between us, and we can divide it into two piles such that both of us are genuinely indifferent about which one we end up with, it seems natural to say we value the two piles equally… in other words, that they are equal in value.
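A minimal sketch of that indifference test, assuming we can score each pile with an explicit per-agent utility function (the agents, items, and utility numbers here are all invented for illustration):

```python
def equal_in_value(pile_a, pile_b, utility_fns, tol=1e-9):
    """True iff every agent is indifferent between the two piles."""
    return all(abs(u(pile_a) - u(pile_b)) <= tol for u in utility_fns)

# Toy agents: both value apples and books, at different scales.
u_alice = lambda pile: pile["apples"] + pile["books"]
u_bob = lambda pile: 2 * pile["apples"] + 2 * pile["books"]

# Different particulars, same value to everyone...
pile_1 = {"apples": 4, "books": 2}
pile_2 = {"apples": 2, "books": 4}
print(equal_in_value(pile_1, pile_2, [u_alice, u_bob]))  # True

# ...versus a genuinely unequal split.
pile_3 = {"apples": 5, "books": 5}
print(equal_in_value(pile_1, pile_3, [u_alice, u_bob]))  # False
```

The point of the sketch is just that the test compares values, not particulars: pile_1 and pile_2 contain different things, but no agent has any grounds to prefer one over the other.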
Granted, I’m really not sure how to argue for caring only about value differences, if that’s a sticking point, other than to stare incredulously and say “well what else would you care about and why?”
So, getting back to your hypothetical… if replacing one set of things-that-I-can-do (S1) with a different set of things-that-I-can-do (S2) doesn’t constitute a utility loss, then I don’t care about the substitution. Why should I? I’m just as well-off along all measurable dimensions of value as I was before.
Similarly, if group 1 has S1 and group 2 has S2, and there’s no utility difference, I don’t care which group I’m assigned to. Again, why should I? I’m just as well-off along all measurable dimensions of value either way. On what grounds would I pick one over the other?
So if, as you posited, rich people had S1 and poor people had S2, then I wouldn’t care whether I was rich or poor. That’s clearly not the way the real world is set up, which is precisely why I’m comfortable saying rich and poor people in the real world aren’t equal. But that is the way things are set up in your hypothetical.
In your hypothetical, a Rawlsian veil of ignorance really does apply between rich and poor. So I’m content to say that in your hypothetical, the rich and the poor are equal.
I suspect we haven’t yet identified the real mismatch, which probably has to do with what you meant and what I understood by “utility-increasing thing”. But I could be wrong, of course.
Which utility function is this hypothetical rational agent supposed to use?
Beats me. MugaSofer asked me the question in terms of “the utility-increasing things” and I answered in those terms.
As long as it doesn’t include a term for Dave!equality, we should be good.
But each of them only gets half! What about … well, what about individual variance, for a start. S1 and S2 wouldn’t be exactly equal for everybody if you’re dealing with humans, which to be fair I did not make explicit.
OK. Given some additional data about what arguing for Dave!equality might look like, I’m tapping out here.
Lengthy, amirite?
Fair enough.
I don’t think that’s the point of the Rawlsian veil of ignorance—the point is that you should design a society as if you didn’t know which caste you’d be in, not that you should design it so you don’t care which caste you’d be in. IOW, maximize the average utility, not minimize the differences between agents.
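The design rules being contrasted come apart on toy numbers (invented for illustration; “maximin” is the rule Rawls’s difference principle is usually glossed as):

```python
# Per-person utilities in two hypothetical societies.
society_a = [10, 10]  # equal shares, lower total
society_b = [18, 6]   # unequal shares, higher total

def average(s):  # maximize average utility
    return sum(s) / len(s)

def maximin(s):  # raise the floor: judge a society by its worst-off member
    return min(s)

def spread(s):  # minimize differences between agents
    return max(s) - min(s)

print(average(society_a), average(society_b))  # 10.0 12.0 -> B wins
print(maximin(society_a), maximin(society_b))  # 10 6 -> A wins
print(spread(society_a), spread(society_b))    # 0 12 -> A wins (smaller spread)
```

Society B beats A on average utility while A beats B on both maximin and difference-minimization, which is exactly why it matters which of these the veil-of-ignorance designer is supposed to be optimizing.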
As I understand it, the goal of my not-knowing is to eliminate the temptation to take my personal status in that society into consideration when judging the society… that is, “ignorant of” is being used as a way of approximating “indifferent to”, not as a primary goal in and of itself.
But, OK, maybe I just don’t understand Rawls.
In any case, I infer that none of the rest of my explanation of why I think of equality in terms of equal-utility rather than equal-particulars is at all worth responding to, in which case I’m content to drop the subject here.
Nope, that’s my understanding too. You want to maximize utility, not just for your own caste, but for society.
Sorry about not responding to your other arguments, I kind of skimmed your comment and thought that was your argument.