Or, to put it another way—read this post with the genders reversed and few would hesitate to call the result misogynistic. This is my personal yardstick for discussing gender issues; swap the genders and see how it reads.
“The law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges, to beg in the streets, and to steal bread.” (Anatole France, The Red Lily)
Which is to say, insisting on treating two people identically when they are embedded in a system of inequality sometimes leads us to absurd conclusions.
You don’t get a pass on your own biases merely because you oppose somebody else’s. You especially don’t get a pass on your own biases when you’re using them as the basis to assert somebody else’s.
To be absolutely clear here—you’re saying actual, overt sexism is acceptable, as long as it’s women doing it to men?
Well, that’s pretty damn sexist, so I guess you’re consistent, at least. Or … maybe not, because your username implies you’re male, and Wilde was accusing the OP of misogyny as well as misandry.
To be absolutely clear here—you’re saying actual, overt sexism is acceptable, as long as it’s women doing it to men?
I’m not sure if I’m saying that, since I’m never quite sure what people mean by “sexism”, let alone “actual, overt sexism”.
But I am saying that in a system that differentially benefits group X over group Y, I consider it much more acceptable for an individual to treat X and Y differently in a way that differentially benefits Y than in a way that further differentially benefits X. If that’s actual, overt sexism in the case where (X,Y)=(men, women), then yes, I’m saying actual, overt sexism is sometimes acceptable as long as it’s being done to men. (The gender of the person doing it is irrelevant.)
If that’s itself pretty damn sexist, I’m OK with that. My purpose here is not to avoid nasty sounding labels, but to reduce the (net) differential in distribution of social benefits (among other purposes). So if I have a choice between “being sexist” while reducing that differential and “not being sexist” while increasing it (all else being equal), I choose to reduce that differential. Labels don’t matter as much as the properties of the system itself.
All that said, I do agree that treating women who abuse their family members as though they lack agency and merely express the patriarchy, while treating men who abuse their family members as though they do possess agency, is unjustified.
My objection was not to that, nor to the other statements in the OP that I didn’t quote, but rather to the sentences I quoted and the “personal yardstick” they suggested using, which I don’t endorse.
Personally, I would say most “sexism” is less taking from Y and giving to X and more just harming Y, which benefits X only through weaker competition. I suppose if you view the battle of the sexes as a zero-sum game, that yardstick doesn’t make much sense. However, if you think misogyny and misandry hurt everyone, it does. Looks like there was an unarticulated assumption in OrphanWilde’s post, I guess.
I don’t necessarily think the distribution of social benefits is a zero sum game; in fact, I find that unlikely.
However, it’s also irrelevant to my point. I can value equalizing the net playing field for X and Y whether that playing field is on average rising, on average lowering, or on average staying the same. My point is simply that if I value equalizing the net playing field between X and Y, I should endorse reducing the (net) differential in distribution of social benefits between X and Y.
One of the many benefits a society can provide its members is protection from harm. So differentially harming Y is one of many ways that a (net) differential in distribution of social benefits to X and Y can manifest.
And, again, if we want to label reducing the (net) differential in distribution of social benefits between men and women, with the goal of ultimately altering our society so that it provides women and men with the same level of benefits, “sexism”, I won’t argue with that labeling, but I also won’t care very much about avoiding things labeled that way.
I don’t necessarily think the distribution of social benefits is a zero sum game; in fact, I find that unlikely.
[...]
One of the many benefits a society can provide its members is protection from harm. So differentially harming Y is one of many ways that a (net) differential in distribution of social benefits to X and Y can manifest.
Unless I’ve misunderstood the term, what you describe is, in fact, a zero-sum game.
And, again, if we want to label reducing the (net) differential in distribution of social benefits between men and women, with the goal of ultimately altering our society so that it provides women and men with the same level of benefits, “sexism”, I won’t argue with that labeling, but I also won’t care very much about avoiding things labeled that way.
If I had persuaded you by changing the label, I’d be pretty ashamed of myself for using Dark Arts in a LW discussion.
One of the many benefits a society can provide its members is protection from harm. So differentially harming Y is one of many ways that a (net) differential in distribution of social benefits to X and Y can manifest.
Unless I’ve misunderstood the term, what you describe is, in fact, a zero-sum game.
One of us is misunderstanding the term, then. It might be me. We might do best to not use the term, given that.
“A situation where harming one side is equivalent to helping the other—perhaps because the first to pull ahead by a certain number of points wins, or because they both derive utility from the disutility of the other side.”
OK, so you’re claiming that when I say that one of the many benefits a society can provide its members is protection from harm, so differentially harming Y is one of many ways that a (net) differential in distribution of social benefits to X and Y can manifest, I’m implicitly asserting that harming Y is equivalent to helping X?
If I understood that, then no, I think this is simply false.
For example, suppose there are dangerous insects about and I have a supply of insect-repellent, which I choose to give only to group X. This is a differential distribution of social benefits (specifically, insect repellent) to X and Y, and sure enough, Y is differentially harmed by the insects as a manifestation of that differential distribution of insect repellent. But it doesn’t follow that harming group Y is equivalent to helping group X… it might well be that if I gave everyone insect repellent, both X and Y would be better off.
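The insect-repellent example can be made concrete with a toy payoff table. The numbers below are invented purely for illustration, not claims about any real distribution:

```python
# Toy utilities for groups X and Y under three repellent policies.
# All numbers are illustrative assumptions.
policies = {
    "give_to_X_only": {"X": 10, "Y": 2},   # Y is differentially harmed
    "give_to_nobody": {"X": 3,  "Y": 2},
    "give_to_all":    {"X": 10, "Y": 10},  # both groups better off
}

# In a zero-sum game, the total would be constant across policies.
totals = {name: u["X"] + u["Y"] for name, u in policies.items()}
print(totals)  # the totals differ, so the situation is not zero-sum

# Harming Y is not equivalent to helping X: moving from "give_to_all"
# to "give_to_X_only" lowers Y's utility without raising X's at all.
```

The point the table makes is exactly the one in the comment: a differential distribution can harm Y without any corresponding gain to X, so the differential itself doesn’t imply zero-sum structure.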
But if you’re trying to optimize the net inequality … surely that means that you’ll treat harming the better-off one as equivalent to aiding the worse-off one?
Ah! I understand what you’re saying, now. Thanks for clarifying further.
Yes, you’re right, if the only thing I wanted to do was reduce the net inequality, I could achieve my goals most readily by harming X until it was just as bad off as Y (which would be a negative-sum game), and that would be equivalent to benefiting Y. Or I could use some combination of benefit-to-Y and harm-to-X.
And no, reducing the net inequality is not the only thing I want to do, for precisely this reason.
But it is a thing I want to do. And as a consequence, I don’t treat actions that benefit Y the same way as actions that improve X’s situation, and I don’t treat actions that harm Y the same way as actions that harm X.
I admire your consistency and refusal to be evasive about unfortunate implications. Upvoted. This is where conversations about social justice should have begun.
Yeah, agreed about where the conversation should start.
I have struggled for years about what I want to say about maximizing net aggregated benefits vs minimizing net inequality in cases where tradeoffs are necessary. I am not really happy with any of my answers.
In practice, I think there’s a lot of low-hanging fruit where reducing inequality increases net aggregated benefits, so I don’t consider it a critical question right this minute, but it’s likely to be at some point.
I have struggled for years about what I want to say about maximizing net aggregated benefits vs minimizing net inequality in cases where tradeoffs are necessary.
My provisional solution for this: I want to maximize net aggregated benefits. I don’t want to minimize net inequality per se, but a useful heuristic is that if X is worse off than Y, then you can probably get more net aggregated benefits per unit resources by helping X (or refraining from harming X) than by helping Y (or refraining from harming Y).
Yeah, I’ve considered this. It doesn’t work for me, because I do seem to want to minimize inequality (in addition to maximizing benefit), and simply ignoring one of my wants is unsatisfying.
That said, I’m not exactly sure why I want to minimize inequality. I’m pretty sure I don’t just value equality for its own sake, for example, though some people claim they do.
One answer that often seems plausible to me is because I am aware that inequalities create an environment that facilitates various kinds of abuse, and what I actually want is to minimize those abuses; a system of inequality among agents who can be relied upon not to abuse one another would be all right with me.
Another answer that often seems plausible to me is because I want everyone to like me, and I’m convinced that inequalities foster resentment.
Other answers pop up from time to time. (And of course there’s always the potential confusion between wanting X and wanting to signal membership in a class characterized by wanting X.)
I get the sense that you think I disagree with TheOtherDave’s statement above, particularly:
reducing the net inequality is not the only thing I want to do, for precisely this reason [harming X seems morally repugnant].
But it is a thing I want to do. And as a consequence, I don’t treat actions that benefit Y the same way as actions that improve X’s situation, and I don’t treat actions that harm Y the same way as actions that harm X.
If you are willing, can you identify what I said that makes you think that? For example, if you think I’ve been mindkilled or such, feel free to tell me so.
The “consistency and refusal to be evasive about unfortunate implications”, if you’re taking that as a jibe, wasn’t directed at you (or anybody here on Less Wrong, for that matter), but rather the Dark Arts that currently constitute the majority of social justice conversations.
To be honest, I’m uncertain whether or not the line of conversation here parallels the line of conversation you and I were having (although it’s possible I’ve lost track of another line of conversation—searched, couldn’t find one). Our conversation drifted considerably in purpose, my apologies for that.
I was attempting to ascertain whether your belief was that social disapproval could correct a natural violent tendency in males, or whether your belief was that social approval/lack of social disapproval was creating a violent tendency in males. Probably would have been simpler to ask, in retrospect; my debate skills were largely honed with people who don’t know what they believe, and asking such questions tends to commit them to the answers. My apologies.
I don’t treat actions that benefit Y the same way as actions that improve X’s situation, and I don’t treat actions that harm Y the same way as actions that harm X.
Ah, right.
So you consider anti-X-ism better than anti-Y-ism, but both are worse than having neither?
Expect? No. Just acknowledging that anti-X-ism doesn’t necessarily harm X, nor does it necessarily only harm X.
But sure, it happens. The phrase “get off my side!” is often used in these cases. For example, the Westboro Baptist Church folks have probably done more good than harm for queers (net, aggregated over agents), despite being (I think) anti-queer.
By the same token, anti-Y-ism doesn’t necessarily harm Y?
Yup.
Well, sure. That’s true of everything. But is it especially true of misandry?
Beats me. I certainly didn’t mean to imply that it was. You went from my statement about acts that cause harm to X and Y to a superficially similar statement about ‘isms’. My point here is that going from endorsing FOO to endorsing ‘FOOism’ is not necessarily a truth-preserving operation for any ‘ism’, since ‘isms’ tend to carry additional baggage with them.
With respect to terms like ‘misandry,’ ‘misogyny,’ ‘misanthropy,’ ‘feminism,’ ‘masculism’, ‘sexism’, etc. I find it is almost always preferable to discard the term and instead talk about things like reducing harm to women, reducing harm to men, increasing benefits to women, increasing benefits to men, reducing net differentials between benefits to women and men, and similar concepts.
Your response to one of those cases is what started this discussion.
Beats me. I certainly didn’t mean to imply that it was.
Ah. I was still responding to the comment where you said comparing misogyny to misandry was like comparing a rich man and a poor man stealing bread and sleeping on the streets.
I was still responding to the comment where you said comparing misogyny to misandry was like comparing a rich man and a poor man stealing bread and sleeping on the streets.
And you were responding to that by asking me whether it’s especially true of misandry that it doesn’t necessarily just harm men? You’ve kind of lost me again. If you can clarify the relationship between my comparison and your question—or perhaps back up a step further and clarify your objection to my comparison, which I infer you object to but am not exactly sure on what grounds (other than perhaps that it’s sexist, but I’m not quite sure how to interpret that label in this context), that might help resolve some confusions.
OK, if misandry (or other anti-X-ism) isn’t especially likely to have good side effects, compared to misogyny (anti-Y-ism), why is objecting to it on the same grounds as misogyny mistaken?
I feel like I’m repeating myself, which indicates that I haven’t been at all clear. So let me back up and express myself more precisely this time.
I’m going to temporarily divide misandry into two components: MA1 (those things which harm men) and MA2 (everything else). I will assume for the moment that MA1 is non-empty. (MA2 might be empty or non-empty, that’s irrelevant to my point.) I equivalently divide misogyny into MG1, which harms women, and MG2, which doesn’t.
As I’ve said elsewhere, I mostly care about MA1 and MG1, and not about MA2 and MG2. As I’ve also said elsewhere, I have two relevant values here: V1: to maximize net benefit V2: to minimize inequality.
So an (oversimplified subset of an) expected-value calculation for MA1 and MG1 might look like:

EV(MA1) = BMA*WV1 + EMA*WV2
EV(MG1) = BMG*WV1 + EMG*WV2

...where:
EV(x) is the expected value of x;
BMA/BMG is the expected change in net benefit due to MA1 and MG1 (respectively);
EMA/EMG is the expected change in net equality due to MA1 and MG1 (respectively);
WV1/WV2 is the weight of V1 and V2 (respectively).
(For convenience, I’ve defined everything such that more positive is better.)
I object to MA1 on the grounds that I expect EV(MA1) to be negative. I expect this for two reasons: first, because BMA is negative—that is, MA1 results in less net benefit; second, because even though EMA is positive—that is, MA1 results in less inequality—I expect that |BMA*WV1| > (EMA*WV2).
I object to MG1 on the grounds that I expect EV(MG1) to be negative. I expect this for two reasons: first, because BMG is negative—that is, MG1 results in less net benefit; second, because EMG is negative—that is, MG1 results in more inequality.
So, rolling all of that tediously precise notation back into English, I could say that I object to misandry on the grounds that it causes harm, despite reducing inequality, and I object to misogyny on the grounds that it causes harm and increases inequality.
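The notation above can be turned into a tiny worked calculation. The weights and expected-change values below are placeholder numbers I’ve chosen only to reproduce the signs described in the text (BMA and BMG negative, EMA positive, EMG negative, with the benefit term outweighing the equality term), not empirical estimates:

```python
# Expected-value sketch of the comparison above.
# All numeric values are placeholder assumptions, not estimates.
W_V1 = 1.0   # weight on V1: maximizing net benefit
W_V2 = 0.5   # weight on V2: minimizing inequality

B_MA, E_MA = -4.0, +2.0  # MA1: reduces net benefit, reduces inequality
B_MG, E_MG = -4.0, -2.0  # MG1: reduces net benefit, increases inequality

EV_MA1 = B_MA * W_V1 + E_MA * W_V2
EV_MG1 = B_MG * W_V1 + E_MG * W_V2

# Both come out negative, so both are objected to; MG1 scores worse
# because its equality term has the same sign as its harm term.
print(EV_MA1, EV_MG1)  # -3.0 -5.0
```

With these illustrative weights, both expected values are negative but EV(MG1) < EV(MA1), matching the verbal conclusion: both are objectionable, misogyny more so.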
On consideration, I don’t say it’s necessarily a mistake to object to misandry and misogyny on the same grounds… for example, one might simply not care about inequality at all, in which case one would object to both of them on the same grounds—that is, the EV(MA1) and EV(MG1) calculations are basically the same. I don’t think it makes sense to say someone is mistaken to have or not have a particular value; if you don’t value equality, then you don’t, and there’s not much else to say about it.
But I do seem to value equality, and I therefore reject expected value calculations where EV(MA1) and EV(MG1) are basically the same.
Hang on a second, I’ve just noticed something. Misandry is present in different situations to misogyny, and increases inequality in those situations. The question is whether inequality is a separate Bad Thing, as you’ve modeled it—in which case EMA is negative—or equal to the total harm done to men minus the total harm done to women—in which case it’s positive, I guess.
I tend to assume that, say, men being unable to do X utility-increasing thing when women can increases inequality, in the same way as women being unable to do Y utility-increasing thing when men can, whereas both men and women being unable to do X utility-increasing thing reduces inequality, even as it reduces utility (obviously.) Maybe this is the source of the confusion/disagreement?
Yes, I agree that whether inequality is a separate Bad Thing is an important part of the question. As I said initially, if someone doesn’t value equality, then that person would object to misandry and misogyny on the same grounds (within the very narrow subset of the current discussion), and they would not be mistaken to do so, merely value different things than I do.
I tend to assume that, say, men being unable to do X utility-increasing thing when women can increases inequality
That seems unlikely to me, for basically the same reason that it seems unlikely that wealthy people being unable to do X wealth-increasing thing when poor people can increases wealth inequality. But sure, if you assume this, you’d reach different conclusions than I do.
For example, there are several folks on this site who seem to argue that there is no gender-based social inequality in our culture, or that if there is it benefits women; if I were to believe either of those things, I would reach different conclusions. (In the latter case I would oppose misandry more strongly than misogyny, since misogyny would tend to reduce inequality while misandry increased it, while having equal effects on harm. In the former case I would oppose them equally, since they had equal effects on inequality and harm.)
Even if you value equity separately from total utility, it is still the case that, contingent on any given level of equity, you should maximize total utility. While this would still involve some kind of utility transfer between agents, compared to the maximum in total utility—and, for the sake of this example, this could be considered either “misandry” or “misogyny”—it’s not clear that what we now know as misandry or misogyny would be preserved.
Even if you value equity separately from total utility,
Not sure where this came from.
MugaSofer gave two choices, neither of which had anything to do with total utility as I understood it. One choice was “inequality is a separate Bad Thing,” the other was that “it” (I assume inequality) was “equal to the total harm done to men minus the total harm done to women”. I agreed with the former. (I might also agree with the latter; it depends on how we understand “harm”.)
In any case, I don’t value equality separate from total utility. I do value it separate from total harm, which I also (negatively) value, and both values factor into my calculations of total utility. As do various other things.
contingent on any given level of equity, you should maximize total utility.
Sure. Further, I’d agree that I should maximize total utility independent of equality, with the understanding that how we calculate utility and how we total utilities is not obvious.
The rest of your comment is harder for me to make sense of, but if I’ve understood you correctly, you’re saying that if we maximize net aggregate utility for all humans—whatever that turns out to involve—it’s likely that when we’re done some group(s) might end up worse off than they’d have ended up if we’d instead maximized that group’s net aggregate utility. Yes?
Sure, I agree with that completely.
this could be considered either “misandry” or “misogyny”—it’s not clear that what we now know as misandry or misogyny would be preserved.
In any case, I don’t value equality separate from total utility. I do value it separate from total harm, which I also (negatively) value, and both values factor into my calculations of total utility.
In that case, you can replace “maximize total utility” with “minimize total harm” and the gist of my comment is unchanged (under mild assumptions, such as that increasing harm never yields an increase in utility).
some group(s) might end up worse off than they’d have ended up if we’d instead maximized that group’s net aggregate utility. Yes?
Not just worse off than maximizing that group’s aggregate U, or minimizing its aggregate harm (which is obvious), but also worse off than if we took equity into account and traded one group’s aggregate U against the given group’s.
This assumes a framework where inequality can be conflated with the difference in total harm done to each group (or with the difference in aggregate utility, again under plausible assumptions).
But, on the other hand, the assumption that “inequality is a separate Bad Thing” in the sense that instances of misandry create something called “inequality”, and instances of misogyny create inequality, and the two instances of inequality add up instead of canceling out, seems redundant. It’s just saying that “inequality” is a kind of harm, so there’s no reason to have it as a separate concept.
It’s just saying that “inequality” is a kind of harm, so there’s no reason to have it as a separate concept.
I agree that with a sufficiently robust shared understanding of harm, there’s no reason to call out other related concepts separately. That said, it’s not been my experience that the English word “harm” conveys anything like such an understanding in ordinary conversation, so sometimes using other words is helpful for communication.
That seems unlikely to me, for basically the same reason that it seems unlikely that wealthy people being unable to do X wealth-increasing thing when poor people can increases wealth inequality. But sure, if you assume this, you’d reach different conclusions than I do.
Well, that rather depends on whether we define “wealth inequality” as “inequality caused by the wealth distribution” or “inequality in the wealth distribution”. If the world were divided into two different castes, rich and poor, each of whom could only do half the utility-increasing things, it seems to me that they would be unequal, because if a poor person wanted to do a rich-person thing, they couldn’t. If you would consider them equal (a similar world could be divided by race or gender) then I guess the term in your utility function you call “equality” is different to mine, even though they have the same labels. Odd, but there you go.
If the “utility-increasing things” the rich and poor groups were capable of doing were equally utility-increasing, yeah, I’d probably say that we’d achieved equality between rich and poor. If you would further require that they be able to do the same things before making that claim, then yes, we’re using the term “equality” differently. Sorry for the confusion; I’ll try to avoid the term in our discussion moving forward.
Rawls has done most of the work here, since this is basically the Rawlsian “veil of ignorance” test for a society—if the system is set up so that I’m genuinely, rationally indifferent between being born into one group and the other, the two groups can be considered equal.
This seems like a pretty good test to me. If we have a big pile of stuff to divide between us, and we can divide it into two piles such that both of us are genuinely indifferent about which one we end up with, it seems natural to say we value the two piles equally… in other words, that they are equal in value.
Granted, I’m really not sure how to argue for caring only about value differences, if that’s a sticking point, other than to stare incredulously and say “well what else would you care about and why?”
So, getting back to your hypothetical… if replacing one set of things-that-I-can-do (S1) with a different set of things-that-I-can-do (S2) doesn’t constitute a utility loss, then I don’t care about the substitution. Why should I? I’m just as well-off along all measurable dimensions of value as I was before.
Similarly, if group 1 has S1 and group 2 has S2, and there’s no utility difference, I don’t care which group I’m assigned to. Again, why should I? I’m just as well-off along all measurable dimensions of value either way. On what grounds would I pick one over the other?
So if, as you posited, rich people had S1 and poor people had S2, then I wouldn’t care whether I was rich or poor. That’s clearly not the way the real world is set up, which is precisely why I’m comfortable saying rich and poor people in the real world aren’t equal. But that is the way things are set up in your hypothetical.
In your hypothetical, a Rawlsian veil of ignorance really does apply between rich and poor. So I’m content to say that in your hypothetical, the rich and the poor are equal.
I suspect we haven’t yet identified the real mismatch, which probably has to do with what you meant and what I understood by “utility-increasing thing”. But I could be wrong, of course.
Rawls has done most of the work here, since this is basically the Rawlsian “veil of ignorance” test for a society—if the system is set up so that I’m genuinely, rationally indifferent between being born into one group and the other, the two groups can be considered equal.
Which utility function is this hypothetical rational agent supposed to use?
So, getting back to your hypothetical… if replacing one set of things-that-I-can-do (S1) with a different set of things-that-I-can-do (S2) doesn’t constitute a utility loss, then I don’t care about the substitution. Why should I? I’m just as well-off along all measurable dimensions of value as I was before.
Similarly, if group 1 has S1 and group 2 has S2, and there’s no utility difference, I don’t care which group I’m assigned to. Again, why should I? I’m just as well-off along all measurable dimensions of value either way. On what grounds would I pick one over the other?
So if, as you posited, rich people had S1 and poor people had S2, then I wouldn’t care whether I was rich or poor. That’s clearly not the way the real world is set up, which is precisely why I’m comfortable saying rich and poor people in the real world aren’t equal. But that is the way things are set up in your hypothetical.
But each of them only gets half! What about … well, what about individual variance, for a start. S1 and S2 wouldn’t be exactly equal for everybody if you’re dealing with humans, which to be fair I did not make explicit.
I don’t think that’s the point of the Rawlsian veil of ignorance—the point is that you should design a society as if you didn’t know which caste you’d be in, not that you should design it so you don’t care which caste you’d be in.
IOW, maximize the average utility, not minimize the differences between agents.
you should design a society as if you didn’t know which caste you’d be in, not that you should design it so you don’t care which caste you’d be in.
As I understand it, the goal of my not-knowing is to eliminate the temptation to take my personal status in that society into consideration when judging the society… that is, “ignorant of” is being used as a way of approximating “indifferent to”, not as a primary goal in and of itself.
But, OK, maybe I just don’t understand Rawls.
In any case, I infer that none of the rest of my explanation of why I think of equality in terms of equal-utility rather than equal-particulars is at all worth responding to, in which case I’m content to drop the subject here.
As I understand it, the goal of my not-knowing is to eliminate the temptation to take my personal status in that society into consideration when judging the society… that is, “ignorant of” is being used as a way of approximating “indifferent to”, not as a primary goal in and of itself.
But, OK, maybe I just don’t understand Rawls.
Nope, that’s my understanding too. You want to maximize utility, not just for your own caste, but for society.
In any case, I infer that none of the rest of my explanation of why I think of equality in terms of equal-utility rather than equal-particulars is at all worth responding to, in which case I’m content to drop the subject here.
Sorry about not responding to your other arguments, I kind of skimmed your comment and thought that was your argument.
Biology doesn’t dictate your values. Avoid the naturalistic fallacy.
For instance, even if the descriptive claims that PUAs make about women’s desires were true, this would not make it right to demean women.
It is surely the case that women and men have morally significant biological differences. Perhaps the biggest of these is pregnancy and childbirth — the vastly greater cost that women bear in childbearing. However, it would be the naturalistic fallacy to claim that women should bear this cost (e.g. that the creation of artificial wombs would not be a moral improvement); and it would be rationalization of misogyny to claim that women should be treated as baby-makers.
(Tim Wise makes a related argument about why it’s silly for progressives to worry too much about race-IQ research: we don’t believe that smart people have more political rights than average people, so even if it were shown that one racial group were on average smarter than another, this wouldn’t change anyone’s commitments to political equality.)
I’m not sure what you mean by demean women. Do you mean that to even make truthful observations that could make a woman feel bad is wrong?
I’m not sure what treated as baby makers means. I think it entirely reasonable, in this universe without well-functioning artificial wombs, to take as a default that women will bear children, even to have incentives towards such. I like humans existing.
I’m not sure what you mean by demean women. Do you mean that to even make truthful observations that could make a woman feel bad is wrong?
No.
Some descriptive claims associated with the PUA memeplex seem to come with an addendum that could be crudely rendered as “… and therefore, women are your inferiors.” Women are manipulable; therefore, you have the right to manipulate them. Women desire approval; therefore, you should manipulate their desire for approval to get sex out of them that they may otherwise not want to have. And so on.
(To make a geek analogy: “Their server has a security vulnerability; therefore, they are morons and you should hack them and take all their stuff.”)
I’m not sure what treated as baby makers means.
Perhaps I should have said “treated merely as baby-makers”; as opposed to thinkers, dreamers, desirers, planners, possessors of values and goals, colleagues, rivals, partners — you know, people.
I don’t know what your footnote references.
Blaaah … that’s because I removed the sentence it was a footnote to, and didn’t remove the footnote. Edited.
I’m certainly not going to defend all PUAs, or necessarily any particular PUA.
I will defend women-as-baby makers, because I think that’s one of the most awesome things about them (especially rolling together all the child-raising instincts and so forth).
Absolutely, sexual reproduction is a wonder of nature, but there is an awesomeness differential there. Although of course awe is a subjective emotion, I could discuss it in more intellectual terms if you remain unconvinced.
Yeah, it’s not like the minimal obligatory parental investment from the father is, like, five orders of magnitude smaller than from the mother. At all.
Hmm, 6 orders of magnitude. If you are limiting fatherhood to conception that would be about 5 minutes for the man; times 1,000,000 then, 5,000,000 for the woman equals 3472 days, or 9.5 years. Not a bad approximation, except that that obviously isn’t the absolute minimum the woman could invest, as she could give it up for adoption after birth, or about 432,000 minutes, only 5 orders of magnitude larger than the father.
Shouldn’t you be including recovery time in that minimum?
Then you should also exclude the first few months, when (from what I’ve seen) aren’t that bad.
Also, why focus on the minimum rather than the typical in practice
Because that’s what my awesomeness-o-meter (which is what started this subthread) seems to respond to, especially given that Pragmatist put it in terms of sperm.
Let’s call it 4*10^5 minutes, then.
My response was to what is ‘minimal obligatory’—assuming the obligation is placed by biology, rather than law or honor or reason. Over a lifetime of care, the differences vary more by couple than gender, I’d expect.
I had pulled a figure out of my ass, and then divided the normal duration of a pregnancy by it to see whether the result was reasonable, but I must have goofed with the maths because I had got 23 minutes.¹ (Fixed.)
Yes, a man can take shorter than that to just ejaculate, but then again it’s not like pregnancy completely prevents you from working throughout its duration.
Seems to me that it’s a pretty serious bug that currently the only way we have for making more human-level intelligences involves causing one heck of a lot of pain, risk, and general discomfort to already-existing human-level intelligences.
And therefore?
Seems to me that all potential ways of making more human-or-greater-level intelligences discussed at this site involves heck of a lot of pain (in terms of man-hours of work), risk (of the x kind), and general discomfort (in adjusting to ai technology) to already exisiting human-level intelligences. And yet, when it happens, if we survive and all, it will be rather awesome.
I mean, is calling the grand canyon awesome minimizing the hazards of flash-flooding rivers may pose? Is calling a sky-scraper awesome callous to those who have ever labored or died in construction?
I’ll make the obligatory confused about down-voting post here. My assumption is that it is down-voted because “Duh, of course baby-making is awesome, and saying you defend it implies other people don’t which is stupid,” but there’s a chance that my comment could be read as either pro or anti feminist, and down voted accordingly; I’m just not sure which.
What if that system of inequality is biology? Is it still absurd to treat them equally?
Sometimes, sure. For example, if there’s some task to be performed, and because of their biology X is capable of performing it and Y is not, it’s frequently absurd to behave as though X and Y were equally capable of performing it. Having a long “conversation” with a deaf person who is not looking at me can be absurd, for example, as can giving a pregnancy test to a man.
“The law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges, to beg in the streets, and to steal bread.” (The Red Lily; Anatole France)
Which is to say, insisting on treating two people identically when they are embedded in a system of inequality sometimes leads us to absurd conclusions.
You don’t get a pass on your own biases merely because you oppose somebody else’s. You especially don’t get a pass on your own biases when you’re using them as the basis to assert somebody else’s.
Sure, I agree with all of that.
To be absolutely clear here—you’re saying actual, overt sexism is acceptable, as long as it’s women doing it to men?
Well, that’s pretty damn sexist, so I guess you’re consistent, at least. Or … maybe not, because your username implies you’re male, and Wilde was accusing the OP of misogyny as well as misandry.
I’m not sure if I’m saying that, since I’m never quite sure what people mean by “sexism”, let alone “actual, overt sexism”.
But I am saying that in a system that differentially benefits group X over group Y, I consider it much more acceptable for an individual to treat X and Y differently in a way that differentially benefits Y than in a way that further differentially benefits X. If that's actual, overt sexism in the case where (X,Y)=(men, women), then yes, I'm saying actual, overt sexism is sometimes acceptable as long as it's being done to men. (The gender of the person doing it is irrelevant.)
If that’s itself pretty damn sexist, I’m OK with that. My purpose here is not to avoid nasty sounding labels, but to reduce the (net) differential in distribution of social benefits (among other purposes). So if I have a choice between “being sexist” while reducing that differential and “not being sexist” while increasing it (all else being equal), I choose to reduce that differential. Labels don’t matter as much as the properties of the system itself.
All that said, I do agree that treating women who abuse their family members as though they lack agency and merely express the patriarchy, while treating men who abuse their family members as though they do possess agency, is unjustified.
My objection was not to that, nor to the other statements in the OP that I didn’t quote, but rather to the sentences I quoted and the “personal yardstick” they suggested using, which I don’t endorse.
Well alright, as long as you’re consistent ;)
Personally, I would say most "sexism" is less taking from Y and giving to X and more just harming Y, which benefits X only through weaker competition. I suppose if you view the battle of the sexes as a zero-sum game, that yardstick doesn't make much sense. However, if you think misogyny and misandry hurt everyone, it does. Looks like there was an unarticulated assumption in OrphanWilde's post, I guess.
I don’t necessarily think the distribution of social benefits is a zero sum game; in fact, I find that unlikely.
However, it’s also irrelevant to my point. I can value equalizing the net playing field for X and Y whether that playing field is on average rising, on average lowering, or on average staying the same. My point is simply that if I value equalizing the net playing field between X and Y, I should endorse reducing the (net) differential in distribution of social benefits between X and Y.
One of the many benefits a society can provide its members is protection from harm. So differentially harming Y is one of many ways that a (net) differential in distribution of social benefits to X and Y can manifest.
And, again, if we want to label reducing the (net) differential in distribution of social benefits between men and women, with the goal of ultimately altering our society so that it provides women and men with the same level of benefits, “sexism”, I won’t argue with that labeling, but I also won’t care very much about avoiding things labeled that way.
Unless I’ve misunderstood the term, what you describe is, in fact, a zero-sum game.
If I had persuaded you by changing the label, I’d be pretty ashamed of myself for using Dark Arts in a LW discussion.
One of us is misunderstanding the term, then.
It might be me.
We might do best to not use the term, given that.
Taboo time!
“A situation where harming one side is equivalent to helping the other—perhaps because the first to pull ahead by a certain number of points wins, or because they both derive utility from the disutility of the other side.”
Thank you for clarifying.
OK, so you're claiming that when I say that one of the many benefits a society can provide its members is protection from harm, so differentially harming Y is one of many ways that a (net) differential in distribution of social benefits to X and Y can manifest, I'm implicitly asserting that harming Y is equivalent to helping X?
If I understood that, then no, I think this is simply false.
For example, suppose there are dangerous insects about and I have a supply of insect-repellent, which I choose to give only to group X. This is a differential distribution of social benefits (specifically, insect repellent) to X and Y, and sure enough, Y is differentially harmed by the insects as a manifestation of that differential distribution of insect repellent. But it doesn’t follow that harming group Y is equivalent to helping group X… it might well be that if I gave everyone insect repellent, both X and Y would be better off.
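A minimal sketch of why the repellent scenario isn't zero-sum, with purely illustrative utility numbers (none of these figures come from the discussion itself):

```python
# Toy illustration that the insect-repellent scenario is not zero-sum.
# All utility values are made-up numbers chosen only to match the story.
u_no_repellent   = {"X": 5, "Y": 5}   # baseline: both groups get bitten
u_only_X_gets_it = {"X": 8, "Y": 5}   # X protected, Y unchanged (differential harm)
u_everyone_gets  = {"X": 8, "Y": 8}   # both protected

totals = [sum(u.values()) for u in
          (u_no_repellent, u_only_X_gets_it, u_everyone_gets)]

# In a zero-sum game the total would be constant across outcomes; here it isn't.
assert len(set(totals)) > 1
# Giving everyone repellent leaves both groups at least as well off as any alternative.
assert all(u_everyone_gets[g] >= u_only_X_gets_it[g] for g in ("X", "Y"))
```

The point the numbers make is just the one in the text: the differential distribution harms Y relative to X, yet helping Y (giving everyone repellent) costs X nothing.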
But if you’re trying to optimize the net inequality … surely that means that you’ll treat harming the better-off one as equivalent to aiding the worse-off one?
Ah! I understand what you’re saying, now. Thanks for clarifying further.
Yes, you’re right, if the only thing I wanted to do was reduce the net inequality, I could achieve my goals most readily by harming X until it was just as bad off as Y (which would be a negative-sum game), and that would be equivalent to benefiting Y. Or I could use some combination of benefit-to-Y and harm-to-X.
And no, reducing the net inequality is not the only thing I want to do, for precisely this reason.
But it is a thing I want to do. And as a consequence, I don’t treat actions that benefit Y the same way as actions that improve X’s situation, and I don’t treat actions that harm Y the same way as actions that harm X.
I admire your consistency and refusal to be evasive about unfortunate implications. Upvoted. This is where conversations about social justice should have begun.
Yeah, agreed about where the conversation should start.
I have struggled for years about what I want to say about maximizing net aggregated benefits vs minimizing net inequality in cases where tradeoffs are necessary. I am not really happy with any of my answers.
In practice, I think there’s a lot of low-hanging fruit where reducing inequality increases net aggregated benefits, so I don’t consider it a critical question right this minute, but it’s likely to be at some point.
There are even more actions that will increase both net aggregate benefits and inequality.
(nods) That’s true.
My provisional solution for this: I want to maximize net aggregated benefits. I don't want to minimize net inequality per se, but a useful heuristic is that if X is worse off than Y, then you can probably get more net aggregated benefits per unit resources by helping X (or refraining from harming X) than by helping Y (or refraining from harming Y).
Yeah, I’ve considered this. It doesn’t work for me, because I do seem to want to minimize inequality (in addition to maximizing benefit), and simply ignoring one of my wants is unsatisfying.
That said, I’m not exactly sure why I want to minimize inequality. I’m pretty sure I don’t just value equality for its own sake, for example, though some people claim they do.
One answer that often seems plausible to me is because I am aware that inequalities create an environment that facilitates various kinds of abuse, and what I actually want is to minimize those abuses; a system of inequality among agents who can be relied upon not to abuse one another would be all right with me.
Another answer that often seems plausible to me is because I want everyone to like me, and I’m convinced that inequalities foster resentment.
Other answers pop up from time to time. (And of course there’s always the potential confusion between wanting X and wanting to signal membership in a class characterized by wanting X.)
Crocker’s Rules
I get the sense that you think I disagree with TheOtherDave’s statement above, particularly:
If you are willing, can you identify what I said that makes you think that? For example, if you think I've been mindkilled or such, feel free to tell me so.
The “consistency and refusal to be evasive about unfortunate implications”, if you’re taking that as a jibe, wasn’t directed at you (or anybody here on Less Wrong, for that matter), but rather the Dark Arts that currently constitute the majority of social justice conversations.
To be honest, I’m uncertain whether or not the line of conversation here parallels the line of conversation you and I were having (although it’s possible I’ve lost track of another line of conversation—searched, couldn’t find one). Our conversation drifted considerably in purpose, my apologies for that.
I was attempting to ascertain whether your belief was that social disapproval could correct a natural violent tendency in males, or whether your belief was that social approval/lack of social disapproval was creating a violent tendency in males. Probably would have been simpler to ask, in retrospect; my debate skills were largely honed with people who don’t know what they believe, and asking such questions tends to commit them to the answers. My apologies.
No problem.
To answer your question, I suspect that social approval / lack of social disapproval creates most tendencies. At least on the margins.
Ah, right.
So you consider anti-X-ism better than anti-Y-ism, but both are worse than having neither?
If the only expected effects of anti-X-ism and anti-Y-ism are harm to X and harm to Y (respectively), yes, that’s correct.
But you expect some secondary sociological/reputational benefit, at least in this case?
Expect? No. Just acknowledging that anti-X-ism doesn’t necessarily harm X, nor does it necessarily only harm X.
But sure, it happens. The phrase "get off my side!" is often used in these cases. For example, the Westboro Baptist Church folks have probably done more good than harm for queers (net, aggregated over agents), despite being (I think) anti-queer.
By the same token, anti-Y-ism doesn’t necessarily harm Y?
Well, sure. That’s true of everything. But is it especially true of misandry?
Your response to one of those cases is what started this discussion.
Yup.
Beats me. I certainly didn’t mean to imply that it was. You went from my statement about acts that cause harm to X and Y to a superficially similar statement about ‘isms’. My point here is that going from endorsing FOO to endorsing ‘FOOism’ is not necessarily a truth-preserving operation for any ‘ism’, since ‘isms’ tend to carry additional baggage with them.
With respect to terms like ‘misandry,’ ‘misogyny,’ ‘misanthropy,’ ‘feminism,’ ‘masculism’, ‘sexism’, etc. I find it is almost always preferable to discard the term and instead talk about things like reducing harm to women, reducing harm to men, increasing benefits to women, increasing benefits to men, reducing net differentials between benefits to women and men, and similar concepts.
Yes. And?
Ah. I was still responding to the comment where you said comparing misogyny to misandry was like comparing a rich man and a poor man stealing bread and sleeping on the streets.
Just noting.
And you were responding to that by asking me whether it’s especially true of misandry that it doesn’t necessarily just harm men?
You’ve kind of lost me again.
If you can clarify the relationship between my comparison and your question—or perhaps back up a step further and clarify your objection to my comparison, which I infer you object to but am not exactly sure on what grounds (other than perhaps that it’s sexist, but I’m not quite sure how to interpret that label in this context), that might help resolve some confusions.
OK, if misandry (or other anti-X-ism) isn’t especially likely to have good side effects, compared to misogyny (anti-Y-ism), why is objecting to it on the same grounds as misogyny mistaken?
I feel like I’m repeating myself, which indicates that I haven’t been at all clear.
So let me back up and express myself more precisely this time.
I’m going to temporarily divide misandry into two components: MA1 (those things which harm men) and MA2 (everything else). I will assume for the moment that MA1 is non-empty. (MA2 might be empty or non-empty, that’s irrelevant to my point.) I equivalently divide misogyny into MG1, which harms women, and MG2, which doesn’t.
As I’ve said elsewhere, I mostly care about MA1 and MG1, and not about MA2 and MG2.
As I’ve also said elsewhere, I have two relevant values here:
V1: to maximize net benefit
V2: to minimize inequality.
So an (oversimplified subset of an) expected-value calculation for MA1 and MG1 might look like:
EV(MA1) = BMA*WV1 + EMA*WV2
EV(MG1) = BMG*WV1 + EMG*WV2
...where:
EV(x) is the expected value of x;
BMA/BMG is the expected change in net benefit due to MA1 and MG1 (respectively);
EMA/EMG is the expected change in net equality due to MA1 and MG1 (respectively);
WV1/WV2 is the weight of V1 and V2 (respectively)
(For convenience, I’ve defined everything such that more positive is better.)
I object to MA1 on the grounds that I expect EV(MA1) to be negative. I expect this for two reasons:
first, because BMA is negative—that is, MA1 results in less net benefit.
second, because even though EMA is positive—that is, MA1 results in less inequality—I expect that |BMA*WV1| > EMA*WV2; the harm outweighs the equality gain.
I object to MG1 on the grounds that I expect EV(MG1) to be negative. I expect this for two reasons:
first, because BMG is negative—that is, MG1 results in less net benefit.
second, because EMG is negative—that is, MG1 results in more inequality.
So, rolling all of that tediously precise notation back into English, I could say that I object to misandry on the grounds that it causes harm, despite reducing inequality, and I object to misogyny on the grounds that it causes harm and increases inequality.
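For concreteness, the expected-value comparison above can be sketched with made-up numbers. The weights and effect sizes below are illustrative assumptions chosen only to match the signs described (BMA and BMG negative, EMA positive, EMG negative); nothing here is an empirical claim:

```python
# Toy expected-value comparison; all magnitudes are illustrative, not empirical.
WV1 = 1.0   # weight on V1: maximize net benefit
WV2 = 0.5   # weight on V2: minimize inequality

BMA, EMA = -3.0, +2.0   # misandry (MA1): net harm, but reduces inequality
BMG, EMG = -3.0, -2.0   # misogyny (MG1): net harm and increases inequality

EV_MA1 = BMA * WV1 + EMA * WV2   # -3.0 + 1.0 = -2.0
EV_MG1 = BMG * WV1 + EMG * WV2   # -3.0 - 1.0 = -4.0

assert EV_MA1 < 0              # objectionable despite reducing inequality...
assert EV_MG1 < EV_MA1 < 0     # ...but misogyny comes out strictly worse here
assert abs(BMA * WV1) > EMA * WV2   # the condition that makes EV(MA1) negative
```

With these weights both expected values are negative, matching the stated objections to both; a large enough WV2 would flip the sign of EV_MA1, which is exactly where the value disagreement discussed below would bite.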
On consideration, I don’t say it’s necessarily a mistake to object to misandry and misogyny on the same grounds… for example, one might simply not care about inequality at all, in which case one would object to both of them on the same grounds—that is, the EV(MA1) and EV(MG1) calculations are basically the same. I don’t think it makes sense to say someone is mistaken to have or not have a particular value; if you don’t value equality, then you don’t, and there’s not much else to say about it.
But I do seem to value equality, and I therefore reject expected value calculations where EV(MA1) and EV(MG1) are basically the same.
Is that any clearer?
Hang on a second, I’ve just noticed something. Misandry is present in different situations to misogyny, and increases inequality in those situations. The question is whether inequality is a separate Bad Thing, as you’ve modeled it—in which case EMA is negative—or equal to the total harm done to men minus the total harm done to women—in which case it’s positive, I guess.
I tend to assume that, say, men being unable to do X utility-increasing thing when women can increases inequality, in the same way as women being unable to do Y utility-increasing thing when men can, whereas both men and women being unable to do X utility-increasing thing reduces inequality, even as it reduces utility (obviously.) Maybe this is the source of the confusion/disagreement?
Yes, I agree that whether inequality is a separate Bad Thing is an important part of the question. As I said initially, if someone doesn’t value equality, then that person would object to misandry and misogyny on the same grounds (within the very narrow subset of the current discussion), and they would not be mistaken to do so, merely value different things than I do.
That seems unlikely to me, for basically the same reason that it seems unlikely that wealthy people being unable to do X wealth-increasing thing when poor people can increases wealth inequality. But sure, if you assume this, you’d reach different conclusions than I do.
For example, there are several folks on this site who seem to argue that there is no gender-based social inequality in our culture, or that if there is it benefits women; if I were to believe either of those things, I would reach different conclusions. (In the latter case I would oppose misandry more strongly than misogyny, since misogyny would tend to reduce inequality while misandry increased it, while having equal effects on harm. In the former case I would oppose them equally, since they had equal effects on inequality and harm.)
Even if you value equity separately from total utility, it is still the case that, contingent on any given level of equity, you should maximize total utility. While this would still involve some kind of utility transfer between agents, compared to the maximum in total utility—and, for the sake of this example, this could be considered either “misandry” or “misogyny”—it’s not clear that what we now know as misandry or misogyny would be preserved.
Not sure where this came from.
MugaSofer gave two choices, neither of which had anything to do with total utility as I understood it. One choice was “inequality is a separate Bad Thing,” the other was that “it” (I assume inequality) was “equal to the total harm done to men minus the total harm done to women”. I agreed with the former. (I might also agree with the latter; it depends on how we understand “harm”.)
In any case, I don’t value equality separate from total utility. I do value it separate from total harm, which I also (negatively) value, and both values factor into my calculations of total utility. As do various other things.
Sure. Further, I’d agree that I should maximize total utility independent of equality, with the understanding that how we calculate utility and how we total utilities is not obvious.
The rest of your comment is harder for me to make sense of, but if I’ve understood you correctly, you’re saying that if we maximize net aggregate utility for all humans—whatever that turns out to involve—it’s likely that when we’re done some group(s) might end up worse off than they’d have ended up if we’d instead maximized that group’s net aggregate utility. Yes?
Sure, I agree with that completely.
Sure, that’s true.
In that case, you can replace “maximize total utility” with “minimize total harm” and the gist of my comment is unchanged (under mild assumptions, such as that increasing harm never yields an increase in utility).
Not just worse off than maximizing that group’s aggregate U, or minimizing its aggregate harm (which is obvious), but also worse off than if we took equity into account and traded one group’s aggregate U against the given group’s.
This assumes a framework where inequality can be conflated with the difference in total harm done to each group (or with the difference in aggregate utility, again under plausible assumptions).
But, on the other hand, the assumption that “inequality is a separate Bad Thing” in the sense that instances of misandry create something called “inequality”, and instances of misogyny create inequality, and the two instances of inequality add up instead of canceling out, seems redundant. It’s just saying that “inequality” is a kind of harm, so there’s no reason to have it as a separate concept.
I agree that with a sufficiently robust shared understanding of harm, there’s no reason to call out other related concepts separately. That said, it’s not been my experience that the English word “harm” conveys anything like such an understanding in ordinary conversation, so sometimes using other words is helpful for communication.
Well, that rather depends on whether we define “wealth inequality” as “inequality caused by the wealth distribution” or “inequality in the wealth distribution”. If the world was divided into two different castes, rich and poor, each of whom could only do half the utility-increasing things, it seems to me that they would be unequal because if a poor person wanted to do a rich-person thing, they couldn’t. If you would consider them equal (a similar world could be divided by race or gender) then I guess the term in your utility function you call “equality” is different to mine, even though they have the same labels. Odd, but there you go.
If the “utility-increasing things” the rich and poor groups were capable of doing were equally utility-increasing, yeah, I’d probably say that we’d achieved equality between rich and poor. If you would further require that they be able to do the same things before making that claim, then yes, we’re using the term “equality” differently. Sorry for the confusion; I’ll try to avoid the term in our discussion moving forward.
Huh. Well, I guess we’ve identified the mismatch. Tapping out, unless you want to argue for Dave!equality.
Sure, why not?
Rawls has done most of the work here, since this is basically the Rawlsian “veil of ignorance” test for a society—if the system is set up so that I’m genuinely, rationally indifferent between being born into one group and the other, the two groups can be considered equal.
This seems like a pretty good test to me. If we have a big pile of stuff to divide between us, and we can divide it into two piles such that both of us are genuinely indifferent about which one we end up with, it seems natural to say we value the two piles equally… in other words, that they are equal in value.
Granted, I’m really not sure how to argue for caring only about value differences, if that’s a sticking point, other than to stare incredulously and say “well what else would you care about and why?”
So, getting back to your hypothetical… if replacing one set of things-that-I-can-do (S1) with a different set of things-that-I-can-do (S2) doesn’t constitute a utility loss, then I don’t care about the substitution. Why should I? I’m just as well-off along all measurable dimensions of value as I was before.
Similarly, if group 1 has S1 and group 2 has S2, and there’s no utility difference, I don’t care which group I’m assigned to. Again, why should I? I’m just as well-off along all measurable dimensions of value either way. On what grounds would I pick one over the other?
So if, as you posited, rich people had S1 and poor people had S2, then I wouldn’t care whether I was rich or poor. That’s clearly not the way the real world is set up, which is precisely why I’m comfortable saying rich and poor people in the real world aren’t equal. But that is the way things are set up in your hypothetical.
In your hypothetical, a Rawlsian veil of ignorance really does apply between rich and poor. So I’m content to say that in your hypothetical, the rich and the poor are equal.
I suspect we haven’t yet identified the real mismatch, which probably has to do with what you meant and what I understood by “utility-increasing thing”. But I could be wrong, of course.
Which utility function is this hypothetical rational agent supposed to use?
Beats me. MugaSofer asked me the question in terms of “the utility-increasing things” and I answered in those terms.
As long as it doesn’t include a term for Dave!equality, we should be good.
But each of them only gets half! What about … well, what about individual variance, for a start. S1 and S2 wouldn’t be exactly equal for everybody if you’re dealing with humans, which to be fair I did not make explicit.
OK. Given some additional data about what arguing for Dave!equality might look like, I’m tapping out here.
Lengthy, amirite?
Fair enough.
I don’t think that’s the point of the Rawlsian veil of ignorance—the point is that you should design a society as if you didn’t know which caste you’d be in, not that you should design it so you don’t care which caste you’d be in. IOW, maximize the average utility, not minimize the differences between agents.
As I understand it, the goal of my not-knowing is to eliminate the temptation to take my personal status in that society into consideration when judging the society… that is, “ignorant of” is being used as a way of approximating “indifferent to”, not as a primary goal in and of itself.
But, OK, maybe I just don’t understand Rawls.
In any case, I infer that none of the rest of my explanation of why I think of equality in terms of equal-utility rather than equal-particulars is at all worth responding to, in which case I’m content to drop the subject here.
Nope, that’s my understanding too. You want to maximize utility, not just for your own caste, but for society.
Sorry about not responding to your other arguments, I kind of skimmed your comment and thought that was your argument.
What if that system of inequality is biology? Is it still absurd to treat them equally?
Biology doesn’t dictate your values. Avoid the naturalistic fallacy.
For instance, even if the descriptive claims that PUAs make about women’s desires were true, this would not make it right to demean women.
It is surely the case that women and men have morally significant biological differences. Perhaps the biggest of these is pregnancy and childbirth — the vastly greater cost that women bear in childbearing. However, it would be the naturalistic fallacy to claim that women should bear this cost (e.g. that the creation of artificial wombs would not be a moral improvement); and it would be rationalization of misogyny to claim that women should be treated as baby-makers.
(Tim Wise makes a related argument about why it’s silly for progressives to worry too much about race-IQ research: we don’t believe that smart people have more political rights than average people, so even if it were shown that one racial group were on average smarter than another, this wouldn’t change anyone’s commitments to political equality.)
I’m not sure what you mean by demean women. Do you mean that to even make truthful observations that could make a woman feel bad is wrong?
I’m not sure what treated as baby makers means. I think it entirely reasonable, in this universe without well functioning artificial wombs, to take as a default that women will bear children, even to have incentives towards such. I like humans existing.
I don’t know what your footnote references.
No.
Some descriptive claims associated with the PUA memeplex seem to come with an addendum that could be crudely rendered as ”… and therefore, women are your inferiors.” Women are manipulable; therefore, you have the right to manipulate them. Women desire approval, therefore, you should manipulate their desire for approval to get sex out of them that they may otherwise not want to have. And so on.
(To make a geek analogy: “Their server has a security vulnerability; therefore, they are morons and you should hack them and take all their stuff.”)
Perhaps I should have said “treated merely as baby-makers”; as opposed to thinkers, dreamers, desirers, planners, possessors of values and goals, colleagues, rivals, partners — you know, people.
Blaaah … that’s because I removed the sentence it was a footnote to, and didn’t remove the footnote. Edited.
I'm certainly not going to defend all PUA, or necessarily any particular PUA. I will defend women-as-baby-makers, because I think that's one of the most awesome things about them (especially rolling together all the child-raising instincts and so forth).
Yes! And one of the most awesome things about men is that they can produce sperm, also crucial for baby-making. Gotta love those sperm factories.
Absolutely, sexual reproduction is a wonder of nature, but there is an awesomeness differential there. Although of course awe is a subjective emotion, I could discuss it in more intellectual terms if you remain unconvinced.
Yeah, it’s not like the minimal obligatory parental investment from the father is, like, five orders of magnitude smaller than from the mother. At all.
Hmm, 6 orders of magnitude. If you are limiting fatherhood to conception, that would be about 5 minutes for the man; times 1,000,000 gives 5,000,000 minutes for the woman, which equals 3,472 days, or about 9.5 years. Not a bad approximation, except that that obviously isn't the absolute minimum the woman could invest, since she could give the baby up for adoption after birth: about 432,000 minutes, only 5 orders of magnitude larger than the father's.
Shouldn’t you be including recovery time in that minimum?
Also, why focus on the minimum rather than the typical investment in practice, or the father's investment most likely to lead to grandchildren?
Then you should also exclude the first few months, which (from what I’ve seen) aren’t that bad.
Because that’s what my awesomeness-o-meter (which is what started this subthread) seems to respond to, especially given that Pragmatist put it in terms of sperm.
Let’s call it 4*10^5 minutes, then. My response was to what is “minimal obligatory”, assuming the obligation is placed by biology rather than by law or honor or reason. Over a lifetime of care, I’d expect the differences to vary more by couple than by gender.
I had pulled a figure out of my ass, and then divided the normal duration of a pregnancy by it to see whether the result was reasonable, but I must have goofed the maths, because I had got 23 minutes.¹ (Fixed.)
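The sanity check being described can be sketched like this. Assuming a 40-week pregnancy and the five-orders-of-magnitude ratio mentioned upthread (both assumptions, since the comment doesn’t state its exact inputs):

```python
# Fermi-style sanity check: divide the length of a pregnancy by the
# assumed mother/father investment ratio and see if the quotient is
# a plausible minimal paternal investment.
pregnancy_minutes = 40 * 7 * 24 * 60   # ~403,200 minutes in a 40-week pregnancy
ratio = 10**5                          # the figure pulled out of thin air
father_minutes = pregnancy_minutes / ratio
print(round(father_minutes, 1))        # ~4.0 minutes: plausible, unlike 23
```

A result of a few minutes passes the smell test; 23 minutes would suggest an arithmetic slip somewhere.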
Yes, a man can take less time than that just to ejaculate, but then again it’s not like pregnancy completely prevents you from working throughout its duration.
I actually had no idea how it would end up before I did the math. You must have read the Fermi post*.
*(Not to imply you couldn’t have read about it beforehand; just a figure of speech)
I have, but IIRC I hadn’t when I wrote that comment.
Historically, maybe. In modern societies there are paternity testing and child-support laws.
Seems to me that it’s a pretty serious bug that currently the only way we have for making more human-level intelligences involves causing one heck of a lot of pain, risk, and general discomfort to already-existing human-level intelligences.
And therefore? Seems to me that all the potential ways of making more human-or-greater-level intelligences discussed on this site involve one heck of a lot of pain (in terms of man-hours of work), risk (of the existential kind), and general discomfort (in adjusting to AI technology) for already-existing human-level intelligences. And yet, when it happens, if we survive and all, it will be rather awesome.
I mean, is calling the Grand Canyon awesome minimizing the hazards flash-flooding rivers may pose? Is calling a skyscraper awesome callous to those who labored or died in its construction?
Can you point to any bug-free system?
I’ll make the obligatory confused-about-downvoting post here. My assumption is that it was downvoted because “Duh, of course baby-making is awesome, and saying you defend it implies other people don’t, which is stupid.” But there’s a chance my comment could be read as either pro- or anti-feminist, and downvoted accordingly; I’m just not sure which.
Sometimes, sure. For example, if there’s some task to be performed, and because of their biology X is capable of performing it and Y is not, it’s frequently absurd to behave as though X and Y were equally capable of performing it. Having a long “conversation” with a deaf person who is not looking at me can be absurd, for example, as can giving a pregnancy test to a man.