Hang on a second, I’ve just noticed something. Misandry is present in different situations to misogyny, and increases inequality in those situations. The question is whether inequality is a separate Bad Thing, as you’ve modeled it—in which case EMA is negative—or equal to the total harm done to men minus the total harm done to women—in which case it’s positive, I guess.
I tend to assume that, say, men being unable to do X utility-increasing thing when women can increases inequality, in the same way as women being unable to do Y utility-increasing thing when men can, whereas both men and women being unable to do X utility-increasing thing reduces inequality, even as it reduces utility (obviously). Maybe this is the source of the confusion/disagreement?
Yes, I agree that whether inequality is a separate Bad Thing is an important part of the question. As I said initially, if someone doesn’t value equality, then that person would object to misandry and misogyny on the same grounds (within the very narrow subset of the current discussion), and they would not be mistaken to do so, merely value different things than I do.
I tend to assume that, say, men being unable to do X utility-increasing thing when women can increases inequality
That seems unlikely to me, for basically the same reason that it seems unlikely that wealthy people being unable to do X wealth-increasing thing when poor people can increases wealth inequality. But sure, if you assume this, you’d reach different conclusions than I do.
For example, there are several folks on this site who seem to argue that there is no gender-based social inequality in our culture, or that if there is it benefits women; if I were to believe either of those things, I would reach different conclusions. (In the latter case I would oppose misandry more strongly than misogyny, since misogyny would tend to reduce inequality while misandry increased it, even though the two would have equal effects on harm. In the former case I would oppose them equally, since they would have equal effects on both inequality and harm.)
Even if you value equity separately from total utility, it is still the case that, contingent on any given level of equity, you should maximize total utility. While this would still involve some kind of utility transfer between agents, compared to the maximum in total utility—and, for the sake of this example, this could be considered either “misandry” or “misogyny”—it’s not clear that what we now know as misandry or misogyny would be preserved.
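The claim above, that contingent on any given level of equity one should maximize total utility, can be sketched as a toy constrained choice. Everything here (the policy names, the payoffs, and the gap-based equity measure) is invented purely for illustration:

```python
# Toy sketch: pick the policy with the highest total utility, subject
# to a cap on the utility gap between two groups. All numbers invented.
policies = {
    "P1": (12, 2),  # (utility to group A, utility to group B)
    "P2": (7, 6),
    "P3": (5, 5),
}

def best_policy(policies, max_gap):
    """Among policies whose between-group utility gap is at most
    max_gap, return the name of the one with the highest total."""
    feasible = {name: u for name, u in policies.items()
                if abs(u[0] - u[1]) <= max_gap}
    return max(feasible, key=lambda name: sum(feasible[name]))

print(best_policy(policies, max_gap=100))  # unconstrained: "P1" (total 14)
print(best_policy(policies, max_gap=1))    # tight equity cap: "P2" (total 13)
```

Tightening the equity constraint moves the optimum from the highest-total but lopsided policy to a more even, slightly lower-total one, which is the kind of between-group utility transfer described above.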
Even if you value equity separately from total utility,
Not sure where this came from.
MugaSofer gave two choices, neither of which had anything to do with total utility as I understood it. One choice was “inequality is a separate Bad Thing,” the other was that “it” (I assume inequality) was “equal to the total harm done to men minus the total harm done to women”. I agreed with the former. (I might also agree with the latter; it depends on how we understand “harm”.)
In any case, I don’t value equality separate from total utility. I do value it separate from total harm, which I also (negatively) value, and both values factor into my calculations of total utility. As do various other things.
contingent on any given level of equity, you should maximize total utility.
Sure. Further, I’d agree that I should maximize total utility independent of equality, with the understanding that how we calculate utility and how we total utilities is not obvious.
The rest of your comment is harder for me to make sense of, but if I’ve understood you correctly, you’re saying that if we maximize net aggregate utility for all humans—whatever that turns out to involve—it’s likely that when we’re done some group(s) might end up worse off than they’d have ended up if we’d instead maximized that group’s net aggregate utility. Yes?
Sure, I agree with that completely.
this could be considered either “misandry” or “misogyny”—it’s not clear that what we now know as misandry or misogyny would be preserved.

Sure, that’s true.
In any case, I don’t value equality separate from total utility. I do value it separate from total harm, which I also (negatively) value, and both values factor into my calculations of total utility.
In that case, you can replace “maximize total utility” with “minimize total harm” and the gist of my comment is unchanged (under mild assumptions, such as that increasing harm never yields an increase in utility).
some group(s) might end up worse off than they’d have ended up if we’d instead maximized that group’s net aggregate utility. Yes?
Not just worse off than maximizing that group’s aggregate U, or minimizing its aggregate harm (which is obvious), but also worse off than if we took equity into account and traded one group’s aggregate U against the given group’s.
This assumes a framework where inequality can be conflated with the difference in total harm done to each group (or with the difference in aggregate utility, again under plausible assumptions).
But, on the other hand, the assumption that “inequality is a separate Bad Thing” in the sense that instances of misandry create something called “inequality”, and instances of misogyny create inequality, and the two instances of inequality add up instead of canceling out, seems redundant. It’s just saying that “inequality” is a kind of harm, so there’s no reason to have it as a separate concept.
It’s just saying that “inequality” is a kind of harm, so there’s no reason to have it as a separate concept.
I agree that with a sufficiently robust shared understanding of harm, there’s no reason to call out other related concepts separately. That said, it’s not been my experience that the English word “harm” conveys anything like such an understanding in ordinary conversation, so sometimes using other words is helpful for communication.
That seems unlikely to me, for basically the same reason that it seems unlikely that wealthy people being unable to do X wealth-increasing thing when poor people can increases wealth inequality. But sure, if you assume this, you’d reach different conclusions than I do.
Well, that rather depends on whether we define “wealth inequality” as “inequality caused by the wealth distribution” or “inequality in the wealth distribution”. If the world were divided into two different castes, rich and poor, each of whom could only do half the utility-increasing things, it seems to me that they would be unequal, because if a poor person wanted to do a rich-person thing, they couldn’t. If you would consider them equal (a similar world could be divided by race or gender) then I guess the term in your utility function you call “equality” is different to mine, even though they have the same labels. Odd, but there you go.
If the “utility-increasing things” the rich and poor groups were capable of doing were equally utility-increasing, yeah, I’d probably say that we’d achieved equality between rich and poor. If you would further require that they be able to do the same things before making that claim, then yes, we’re using the term “equality” differently. Sorry for the confusion; I’ll try to avoid the term in our discussion moving forward.
Huh. Well, I guess we’ve identified the mismatch. Tapping out, unless you want to argue for Dave!equality.

Sure, why not?
Rawls has done most of the work here, since this is basically the Rawlsian “veil of ignorance” test for a society—if the system is set up so that I’m genuinely, rationally indifferent between being born into one group and the other, the two groups can be considered equal.
This seems like a pretty good test to me. If we have a big pile of stuff to divide between us, and we can divide it into two piles such that both of us are genuinely indifferent about which one we end up with, it seems natural to say we value the two piles equally… in other words, that they are equal in value.
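That indifference test can be phrased as a minimal check. The goods and valuations below are invented for illustration only:

```python
# Toy sketch: two piles with different contents but equal subjective value.
my_values = {"apples": 3, "books": 5, "coins": 1}  # invented valuations

def value(bundle, values):
    """Total subjective value of a bundle: sum of quantity * valuation."""
    return sum(values[item] * qty for item, qty in bundle.items())

pile_1 = {"apples": 5, "coins": 1}  # 5*3 + 1*1 = 16
pile_2 = {"books": 3, "coins": 1}   # 3*5 + 1*1 = 16

# Equal value despite different particulars: an agent with these
# valuations is indifferent about which pile she receives.
print(value(pile_1, my_values) == value(pile_2, my_values))  # True
```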
Granted, I’m really not sure how to argue for caring only about value differences, if that’s a sticking point, other than to stare incredulously and say “well what else would you care about and why?”
So, getting back to your hypothetical… if replacing one set of things-that-I-can-do (S1) with a different set of things-that-I-can-do (S2) doesn’t constitute a utility loss, then I don’t care about the substitution. Why should I? I’m just as well-off along all measurable dimensions of value as I was before.
Similarly, if group 1 has S1 and group 2 has S2, and there’s no utility difference, I don’t care which group I’m assigned to. Again, why should I? I’m just as well-off along all measurable dimensions of value either way. On what grounds would I pick one over the other?
So if, as you posited, rich people had S1 and poor people had S2, then I wouldn’t care whether I was rich or poor. That’s clearly not the way the real world is set up, which is precisely why I’m comfortable saying rich and poor people in the real world aren’t equal. But that is the way things are set up in your hypothetical.
In your hypothetical, a Rawlsian veil of ignorance really does apply between rich and poor. So I’m content to say that in your hypothetical, the rich and the poor are equal.
I suspect we haven’t yet identified the real mismatch, which probably has to do with what you meant and what I understood by “utility-increasing thing”. But I could be wrong, of course.
Rawls has done most of the work here, since this is basically the Rawlsian “veil of ignorance” test for a society—if the system is set up so that I’m genuinely, rationally indifferent between being born into one group and the other, the two groups can be considered equal.
Which utility function is this hypothetical rational agent supposed to use?
Beats me. MugaSofer asked me the question in terms of “the utility-increasing things” and I answered in those terms.

As long as it doesn’t include a term for Dave!equality, we should be good.
So, getting back to your hypothetical… if replacing one set of things-that-I-can-do (S1) with a different set of things-that-I-can-do (S2) doesn’t constitute a utility loss, then I don’t care about the substitution. Why should I? I’m just as well-off along all measurable dimensions of value as I was before.
Similarly, if group 1 has S1 and group 2 has S2, and there’s no utility difference, I don’t care which group I’m assigned to. Again, why should I? I’m just as well-off along all measurable dimensions of value either way. On what grounds would I pick one over the other?
So if, as you posited, rich people had S1 and poor people had S2, then I wouldn’t care whether I was rich or poor. That’s clearly not the way the real world is set up, which is precisely why I’m comfortable saying rich and poor people in the real world aren’t equal. But that is the way things are set up in your hypothetical.
But each of them only gets half! What about … well, what about individual variance, for a start. S1 and S2 wouldn’t be exactly equal for everybody if you’re dealing with humans, which to be fair I did not make explicit.
OK. Given some additional data about what arguing for Dave!equality might look like, I’m tapping out here.

Lengthy, amirite?

Fair enough.
I don’t think that’s the point of the Rawlsian veil of ignorance—the point is that you should design a society as if you didn’t know which caste you’d be in, not that you should design it so you don’t care which caste you’d be in.
IOW, maximize the average utility, not minimize the differences between agents.
you should design a society as if you didn’t know which caste you’d be in, not that you should design it so you don’t care which caste you’d be in.
As I understand it, the goal of my not-knowing is to eliminate the temptation to take my personal status in that society into consideration when judging the society… that is, “ignorant of” is being used as a way of approximating “indifferent to”, not as a primary goal in and of itself.
But, OK, maybe I just don’t understand Rawls.
In any case, I infer that none of the rest of my explanation of why I think of equality in terms of equal-utility rather than equal-particulars is at all worth responding to, in which case I’m content to drop the subject here.
As I understand it, the goal of my not-knowing is to eliminate the temptation to take my personal status in that society into consideration when judging the society… that is, “ignorant of” is being used as a way of approximating “indifferent to”, not as a primary goal in and of itself.
But, OK, maybe I just don’t understand Rawls.
Nope, that’s my understanding too. You want to maximize utility, not just for your own caste, but for society.
In any case, I infer that none of the rest of my explanation of why I think of equality in terms of equal-utility rather than equal-particulars is at all worth responding to, in which case I’m content to drop the subject here.
Sorry about not responding to your other arguments, I kind of skimmed your comment and thought that was your argument.