Ah! I understand what you’re saying, now. Thanks for clarifying further.
Yes, you’re right, if the only thing I wanted to do was reduce the net inequality, I could achieve my goals most readily by harming X until it was just as bad off as Y (which would be a negative-sum game), and that would be equivalent to benefiting Y. Or I could use some combination of benefit-to-Y and harm-to-X.
And no, reducing the net inequality is not the only thing I want to do, for precisely this reason.
But it is a thing I want to do. And as a consequence, I don’t treat actions that benefit Y the same way as actions that improve X’s situation, and I don’t treat actions that harm Y the same way as actions that harm X.
I admire your consistency and refusal to be evasive about unfortunate implications. Upvoted. This is where conversations about social justice should have begun.
Yeah, agreed about where the conversation should start.
I have struggled for years with what I want to say about maximizing net aggregated benefits vs. minimizing net inequality in cases where tradeoffs are necessary. I am not really happy with any of my answers.
In practice, I think there’s a lot of low-hanging fruit where reducing inequality increases net aggregated benefits, so I don’t consider it a critical question right this minute, but it’s likely to be at some point.
There are even more actions that will increase both net aggregate benefits and inequality.
(nods) That’s true.
My provisional solution for this: I want to maximize net aggregated benefits. I don’t want to minimize net inequality per se, but a useful heuristic is that if X is worse off than Y, then you can probably get more net aggregated benefits per unit resources by helping X (or refraining from harming X) than by helping Y (or refraining from harming Y).
Yeah, I’ve considered this. It doesn’t work for me, because I do seem to want to minimize inequality (in addition to maximizing benefit), and simply ignoring one of my wants is unsatisfying.
That said, I’m not exactly sure why I want to minimize inequality. I’m pretty sure I don’t just value equality for its own sake, for example, though some people claim they do.
One answer that often seems plausible to me is that I am aware that inequalities create an environment that facilitates various kinds of abuse, and what I actually want is to minimize those abuses; a system of inequality among agents who can be relied upon not to abuse one another would be all right with me.
Another answer that often seems plausible to me is that I want everyone to like me, and I’m convinced that inequalities foster resentment.
Other answers pop up from time to time. (And of course there’s always the potential confusion between wanting X and wanting to signal membership in a class characterized by wanting X.)
Crocker’s Rules
I get the sense that you think I disagree with TheOtherDave’s statement above, particularly:
reducing the net inequality is not the only thing I want to do, for precisely this reason [harming X seems morally repugnant].
But it is a thing I want to do. And as a consequence, I don’t treat actions that benefit Y the same way as actions that improve X’s situation, and I don’t treat actions that harm Y the same way as actions that harm X.
If you are willing, can you identify what I said that makes you think that? For example, if you think I’ve been mindkilled or such, feel free to tell me so.
The “consistency and refusal to be evasive about unfortunate implications”, if you’re taking that as a jibe, wasn’t directed at you (or anybody here on Less Wrong, for that matter), but rather at the Dark Arts that currently constitute the majority of social justice conversations.
To be honest, I’m uncertain whether or not the line of conversation here parallels the line of conversation you and I were having (although it’s possible I’ve lost track of another line of conversation—searched, couldn’t find one). Our conversation drifted considerably in purpose; my apologies for that.
I was attempting to ascertain whether your belief was that social disapproval could correct a natural violent tendency in males, or whether your belief was that social approval/lack of social disapproval was creating a violent tendency in males. Probably would have been simpler to ask, in retrospect; my debate skills were largely honed with people who don’t know what they believe, and asking such questions tends to commit them to the answers. My apologies.
No problem.
To answer your question, I suspect that social approval / lack of social disapproval creates most tendencies. At least on the margins.
Ah, right.
So you consider anti-X-ism better than anti-Y-ism, but both are worse than having neither?
If the only expected effects of anti-X-ism and anti-Y-ism are harm to X and harm to Y (respectively), yes, that’s correct.
But you expect some secondary sociological/reputational benefit, at least in this case?
Expect? No. Just acknowledging that anti-X-ism doesn’t necessarily harm X, nor does it necessarily only harm X.
But sure, it happens. The phrase “get off my side!” is often used in these cases. For example, the Westboro Baptist Church folks have probably done more good than harm for queers (net, aggregated over agents), despite being (I think) anti-queer.
By the same token, anti-Y-ism doesn’t necessarily harm Y?
Well, sure. That’s true of everything. But is it especially true of misandry?
Your response to one of those cases is what started this discussion.
Yup.
Beats me. I certainly didn’t mean to imply that it was. You went from my statement about acts that cause harm to X and Y to a superficially similar statement about ‘isms’. My point here is that going from endorsing FOO to endorsing ‘FOOism’ is not necessarily a truth-preserving operation for any ‘ism’, since ‘isms’ tend to carry additional baggage with them.
With respect to terms like ‘misandry,’ ‘misogyny,’ ‘misanthropy,’ ‘feminism,’ ‘masculism,’ ‘sexism,’ etc., I find it is almost always preferable to discard the term and instead talk about things like reducing harm to women, reducing harm to men, increasing benefits to women, increasing benefits to men, reducing net differentials between benefits to women and men, and similar concepts.
Yes. And?
Ah. I was still responding to the comment where you said comparing misogyny to misandry was like comparing a rich man and a poor man stealing bread and sleeping on the streets.
Just noting.
And you were responding to that by asking me whether it’s especially true of misandry that it doesn’t necessarily just harm men?
You’ve kind of lost me again.
If you can clarify the relationship between my comparison and your question—or perhaps back up a step further and clarify your objection to my comparison, which I infer you object to but am not exactly sure on what grounds (other than perhaps that it’s sexist, but I’m not quite sure how to interpret that label in this context), that might help resolve some confusions.
OK, if misandry (or other anti-X-ism) isn’t especially likely to have good side effects, compared to misogyny (anti-Y-ism), why is objecting to it on the same grounds as misogyny mistaken?
I feel like I’m repeating myself, which indicates that I haven’t been at all clear.
So let me back up and express myself more precisely this time.
I’m going to temporarily divide misandry into two components: MA1 (those things which harm men) and MA2 (everything else). I will assume for the moment that MA1 is non-empty. (MA2 might be empty or non-empty, that’s irrelevant to my point.) I equivalently divide misogyny into MG1, which harms women, and MG2, which doesn’t.
As I’ve said elsewhere, I mostly care about MA1 and MG1, and not about MA2 and MG2.
As I’ve also said elsewhere, I have two relevant values here:
V1: to maximize net benefit
V2: to minimize inequality.
So an (oversimplified subset of an) expected-value calculation for MA1 and MG1 might look like:
EV(MA1) = BMA*WV1 + EMA*WV2
EV(MG1) = BMG*WV1 + EMG*WV2
...where:
EV(x) is the expected value of x;
BMA/BMG is the expected change in net benefit due to MA1 and MG1 (respectively);
EMA/EMG is the expected change in net equality due to MA1 and MG1 (respectively);
WV1/WV2 is the weight of V1 and V2 (respectively)
(For convenience, I’ve defined everything such that more positive is better.)
I object to MA1 on the grounds that I expect EV(MA1) to be negative. I expect this for two reasons:
first, because BMA is negative—that is, MA1 results in less net benefit.
second, because even though EMA is positive—that is, MA1 results in less inequality—I expect the harm term to outweigh the equality term, i.e. |BMA|*WV1 > EMA*WV2, so the sum is still negative.
I object to MG1 on the grounds that I expect EV(MG1) to be negative. I expect this for two reasons:
first, because BMG is negative—that is, MG1 results in less net benefit.
second, because EMG is negative—that is, MG1 results in more inequality.
So, rolling all of that tediously precise notation back into English, I could say that I object to misandry on the grounds that it causes harm, despite reducing inequality, and I object to misogyny on the grounds that it causes harm and increases inequality.
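(To make that concrete, here is a minimal worked example with entirely made-up numbers; the weights and magnitudes are illustrative assumptions, not claims about the actual sizes of these effects. The only point is that EV(MA1) can come out negative even though its equality term is positive, while both terms of EV(MG1) are negative.)

    # Illustrative only: made-up weights and magnitudes, chosen so that the
    # harm term outweighs the equality term, as described above.
    WV1 = 1.0   # weight on V1 (maximize net benefit)
    WV2 = 0.5   # weight on V2 (minimize inequality)

    BMA, EMA = -10.0, 4.0    # MA1: reduces net benefit, reduces inequality
    BMG, EMG = -10.0, -4.0   # MG1: reduces net benefit, increases inequality

    EV_MA1 = BMA * WV1 + EMA * WV2   # -10.0 + 2.0 = -8.0 (negative despite the positive equality term)
    EV_MG1 = BMG * WV1 + EMG * WV2   # -10.0 - 2.0 = -12.0 (both terms negative)
    print(EV_MA1, EV_MG1)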
On consideration, I don’t say it’s necessarily a mistake to object to misandry and misogyny on the same grounds… for example, one might simply not care about inequality at all, in which case one would object to both of them on the same grounds—that is, the EV(MA1) and EV(MG1) calculations are basically the same. I don’t think it makes sense to say someone is mistaken to have or not have a particular value; if you don’t value equality, then you don’t, and there’s not much else to say about it.
But I do seem to value equality, and I therefore reject expected value calculations where EV(MA1) and EV(MG1) are basically the same.
Is that any clearer?
Hang on a second, I’ve just noticed something. Misandry is present in different situations to misogyny, and increases inequality in those situations. The question is whether inequality is a separate Bad Thing, as you’ve modeled it—in which case EMA is negative—or equal to the total harm done to men minus the total harm done to women—in which case it’s positive, I guess.
I tend to assume that, say, men being unable to do X utility-increasing thing when women can increases inequality, in the same way as women being unable to do Y utility-increasing thing when men can, whereas both men and women being unable to do X utility-increasing thing reduces inequality, even as it reduces utility (obviously.) Maybe this is the source of the confusion/disagreement?
Yes, I agree that whether inequality is a separate Bad Thing is an important part of the question. As I said initially, if someone doesn’t value equality, then that person would object to misandry and misogyny on the same grounds (within the very narrow subset of the current discussion), and they would not be mistaken to do so, merely value different things than I do.
That seems unlikely to me, for basically the same reason that it seems unlikely that wealthy people being unable to do X wealth-increasing thing when poor people can increases wealth inequality. But sure, if you assume this, you’d reach different conclusions than I do.
For example, there are several folks on this site who seem to argue that there is no gender-based social inequality in our culture, or that if there is it benefits women; if I were to believe either of those things, I would reach different conclusions. (In the latter case I would oppose misandry more strongly than misogyny, since misogyny would tend to reduce inequality while misandry increased it, while having equal effects on harm. In the former case I would oppose them equally, since they had equal effects on inequality and harm.)
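(For what it’s worth, here is the sort of toy calculation behind my intuition in the wealth analogy above. It is only a sketch: it assumes inequality is measured as the gap between the two groups’ aggregate utility, which neither of us has actually pinned down, and the numbers are made up. The point is that taking a utility-increasing option away from the better-off group lowers total utility while narrowing the gap.)

    # Illustrative only: inequality treated as the absolute gap in aggregate
    # utility between two groups, with made-up numbers.
    def total_and_gap(u_poor, u_rich):
        return u_poor + u_rich, abs(u_rich - u_poor)

    before = total_and_gap(60.0, 100.0)         # (160.0, 40.0)
    # Remove a utility-increasing option from the better-off (rich) group:
    after = total_and_gap(60.0, 100.0 - 15.0)   # (145.0, 25.0): less total utility, less inequality
    print(before, after)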
Even if you value equity separately from total utility, it is still the case that, contingent on any given level of equity, you should maximize total utility. While this would still involve some kind of utility transfer between agents, compared to the maximum in total utility—and, for the sake of this example, this could be considered either “misandry” or “misogyny”—it’s not clear that what we now know as misandry or misogyny would be preserved.
Not sure where the “value equity separately from total utility” framing came from.
MugaSofer gave two choices, neither of which had anything to do with total utility as I understood it. One choice was “inequality is a separate Bad Thing,” the other was that “it” (I assume inequality) was “equal to the total harm done to men minus the total harm done to women”. I agreed with the former. (I might also agree with the latter; it depends on how we understand “harm”.)
In any case, I don’t value equality separate from total utility. I do value it separate from total harm, which I also (negatively) value, and both values factor into my calculations of total utility. As do various other things.
Sure: contingent on any given level of equity, I agree I should maximize total utility. Further, I’d agree that I should maximize total utility independent of equality, with the understanding that how we calculate utility and how we total utilities is not obvious.
The rest of your comment is harder for me to make sense of, but if I’ve understood you correctly, you’re saying that if we maximize net aggregate utility for all humans—whatever that turns out to involve—it’s likely that when we’re done some group(s) might end up worse off than they’d have ended up if we’d instead maximized that group’s net aggregate utility. Yes?
Sure, I agree with that completely.
Sure, that’s true.
In that case, you can replace “maximize total utility” with “minimize total harm” and the gist of my comment is unchanged (under mild assumptions, such as that increasing harm never yields an increase in utility).
Not just worse off than maximizing that group’s aggregate U, or minimizing its aggregate harm (which is obvious), but also worse off than if we took equity into account and traded one group’s aggregate U against the given group’s.
This assumes a framework where inequality can be conflated with the difference in total harm done to each group (or with the difference in aggregate utility, again under plausible assumptions).
But, on the other hand, the assumption that “inequality is a separate Bad Thing” in the sense that instances of misandry create something called “inequality”, and instances of misogyny create inequality, and the two instances of inequality add up instead of canceling out, seems redundant. It’s just saying that “inequality” is a kind of harm, so there’s no reason to have it as a separate concept.
I agree that with a sufficiently robust shared understanding of harm, there’s no reason to call out other related concepts separately. That said, it’s not been my experience that the English word “harm” conveys anything like such an understanding in ordinary conversation, so sometimes using other words is helpful for communication.
Well, that rather depends on whether we define “wealth inequality” as “inequality caused by the wealth distribution” or “inequality in the wealth distribution”. If the world was divided into two different castes, rich and poor, each of whom could only do half the utility-increasing things, it seems to me that they would be unequal because if a poor person wanted to do a rich-person thing, they couldn’t. If you would consider them equal (a similar world could be divided by race or gender) then I guess the term in your utility function you call “equality” is different to mine, even though they have the same labels. Odd, but there you go.
If the “utility-increasing things” the rich and poor groups were capable of doing were equally utility-increasing, yeah, I’d probably say that we’d achieved equality between rich and poor. If you would further require that they be able to do the same things before making that claim, then yes, we’re using the term “equality” differently. Sorry for the confusion; I’ll try to avoid the term in our discussion moving forward.
Huh. Well, I guess we’ve identified the mismatch. Tapping out, unless you want to argue for Dave!equality.
Sure, why not?
Rawls has done most of the work here, since this is basically the Rawlsian “veil of ignorance” test for a society—if the system is set up so that I’m genuinely, rationally indifferent between being born into one group and the other, the two groups can be considered equal.
This seems like a pretty good test to me. If we have a big pile of stuff to divide between us, and we can divide it into two piles such that both of us are genuinely indifferent about which one we end up with, it seems natural to say we value the two piles equally… in other words, that they are equal in value.
Granted, I’m really not sure how to argue for caring only about value differences, if that’s a sticking point, other than to stare incredulously and say “well what else would you care about and why?”
So, getting back to your hypothetical… if replacing one set of things-that-I-can-do (S1) with a different set of things-that-I-can-do (S2) doesn’t constitute a utility loss, then I don’t care about the substitution. Why should I? I’m just as well-off along all measurable dimensions of value as I was before.
Similarly, if group 1 has S1 and group 2 has S2, and there’s no utility difference, I don’t care which group I’m assigned to. Again, why should I? I’m just as well-off along all measurable dimensions of value either way. On what grounds would I pick one over the other?
So if, as you posited, rich people had S1 and poor people had S2, then I wouldn’t care whether I was rich or poor. That’s clearly not the way the real world is set up, which is precisely why I’m comfortable saying rich and poor people in the real world aren’t equal. But that is the way things are set up in your hypothetical.
In your hypothetical, a Rawlsian veil of ignorance really does apply between rich and poor. So I’m content to say that in your hypothetical, the rich and the poor are equal.
I suspect we haven’t yet identified the real mismatch, which probably has to do with what you meant and what I understood by “utility-increasing thing”. But I could be wrong, of course.
Which utility function is this hypothetical rational agent supposed to use?
Beats me. MugaSofer asked me the question in terms of “the utility-increasing things” and I answered in those terms.
As long as it doesn’t include a term for Dave!equality, we should be good.
But each of them only gets half! What about … well, what about individual variance, for a start. S1 and S2 wouldn’t be exactly equal for everybody if you’re dealing with humans, which to be fair I did not make explicit.
OK. Given some additional data about what arguing for Dave!equality might look like, I’m tapping out here.
Lengthy, amirite?
Fair enough.
I don’t think that’s the point of the Rawlsian veil of ignorance—the point is that you should design a society as if you didn’t know which caste you’d be in, not that you should design it so you don’t care which caste you’d be in. IOW, maximize the average utility, not minimize the differences between agents.
As I understand it, the goal of my not-knowing is to eliminate the temptation to take my personal status in that society into consideration when judging the society… that is, “ignorant of” is being used as a way of approximating “indifferent to”, not as a primary goal in and of itself.
But, OK, maybe I just don’t understand Rawls.
In any case, I infer that none of the rest of my explanation of why I think of equality in terms of equal-utility rather than equal-particulars is at all worth responding to, in which case I’m content to drop the subject here.
Nope, that’s my understanding too. You want to maximize utility, not just for your own caste, but for society.
Sorry about not responding to your other arguments, I kind of skimmed your comment and thought that was your argument.