Which is, at best, true only in terms of inclusive fitness.
In fact… not even then, because you need that beta to help care for your offspring.
Yet it describes human behaviour accurately. People take a significant risk of losing decades of beta to get 5 minutes of alpha.
Remember, there is no need for the beta taking care of the child to be the sperm donor. Also, in tropical agricultural societies (like, say, West Africa) and in modern social democracies (like, say, Norway), women don’t need the help of their sexual partners to care for their offspring.
I hope you’re not assuming that all human behavior is rational...
I’m not assuming it is. The maxim does, however, encapsulate the revealed preferences of women. It would be irrational of men to pretend those preferences don’t exist.
Edit: I don’t agree with the statement below any more. It is a misuse of the word rational.
In any case I would argue that this behaviour happens to be rational when women don’t need men to provide materially for their offspring.
But if someone’s revealed preferences are irrational (as revealed human preferences often, nay typically are), then it doesn’t serve anyone to follow them. So contrary to your assertion, you are assuming that these preferences are rational, or else you wouldn’t be encouraging people to follow them.
So my question is this: Is a woman who has sex with Brad Pitt once and remains alone for the rest of her life actually happier than a woman who is comfortably married to an ordinary guy for several years?
If the answer is no—and I think it’s pretty obvious that the answer is, in fact, no—then your maxim fails, and any woman who follows it is being irrational and self-destructive. She’s following her genes right off a cliff.
You mean, if their revealed preferences are not their actual preferences, which is often the case, because of irrationality?
You make a compelling argument. I clearly misused the word rational when I was just looking at what the genes “want”. I thus retract that part of the statement.
I do wish to emphasise that “5 minutes of alpha is worth 5 years of beta”, while mostly hyperbole, is something people should keep in mind when trying to predict the sexual and romantic behaviour of women.
The utility function is not up for grabs.
Yes it is, if your “utility function” doesn’t obey the axioms of von Neumann-Morgenstern utility, which it doesn’t, if you are at all a normal human.
Prospect theory? Allais paradox?
Seriously, what are we even doing on Less Wrong, if you think that the decisions people make are automatically rational just because people made them?
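To spell out why the Allais paradox is a counterexample, here is a minimal sketch in Python (the payoffs and probabilities are the standard textbook ones, not anything specified in this thread) that searches for a utility assignment reproducing the usual pattern of choices and never finds one:

```python
# A minimal sketch using the standard textbook Allais payoffs of $0, $1M and $5M
# (purely illustrative; nothing here comes from the thread itself). The common
# pattern of choices -- the sure $1M in the first pair, but the 10%-of-$5M gamble
# in the second -- cannot be reproduced by ANY von Neumann-Morgenstern utility
# function, which is why it is evidence that normal human preferences violate
# the axioms.

import random

def expected_utility(lottery, u):
    """lottery is a list of (probability, outcome) pairs."""
    return sum(p * u[x] for p, x in lottery)

gamble_1A = [(1.00, 1)]                        # $1M for sure
gamble_1B = [(0.89, 1), (0.10, 5), (0.01, 0)]  # mostly $1M, small shot at $5M
gamble_2A = [(0.11, 1), (0.89, 0)]             # 11% chance of $1M
gamble_2B = [(0.10, 5), (0.90, 0)]             # 10% chance of $5M

# Brute-force search over random utility assignments with u(0) < u(1M) < u(5M).
found = False
for _ in range(100_000):
    low, mid, high = sorted(random.random() for _ in range(3))
    u = {0: low, 1: mid, 5: high}
    if (expected_utility(gamble_1A, u) > expected_utility(gamble_1B, u) and
            expected_utility(gamble_2B, u) > expected_utility(gamble_2A, u)):
        found = True
        break

print(found)  # False: no utility function matches both of the common choices
```

The algebra behind it: preferring 1A requires 0.11·u($1M) > 0.10·u($5M) + 0.01·u($0), while preferring 2B requires exactly the reverse inequality, so no single u can produce both choices.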
Actually, if your “utility function” doesn’t obey the axioms of von Neumann-Morgenstern utility, it’s not a utility function in the normal sense of the word.
I suppose that’s why pnrjulius put “utility function” in quotes.
Downvoted for trying to argue against a principle that is actually irrelevant to your claims. (“The utility function is not up for grabs” doesn’t mean that decisions are always rational, and is actually inapplicable here.)
I didn’t mean decisions are always rational. I meant that it makes no sense for preferences to be rational or irrational: they just are. Rationality is a property of decisions, not of preferences: if a decision maximizes the expectation of your preferences it’s rational, and if it doesn’t, it isn’t.
Preferences can, however, be inconsistent.
And rational decision-making across inconsistent preferences is sometimes difficult to distinguish from irrational decision-making.
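To make the “inconsistent preferences” point concrete, here is a toy money-pump sketch (the items, fee, and trade sequence are purely hypothetical illustrations, not anything from the discussion): each individual trade is rational given the agent’s pairwise preferences, yet the sequence as a whole just bleeds money.

```python
# Toy money pump: an agent with cyclic preferences A > B > C > A (hypothetical
# items, purely illustrative). Every single trade satisfies its preferences,
# but the decisions taken together are hard to distinguish from irrationality.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y

def will_trade(current, offered):
    # The agent pays a small fee whenever it strictly prefers the offered item.
    return (offered, current) in prefers

holding, wealth, fee = "A", 10.0, 1.0
for offered in ["C", "B", "A", "C", "B", "A"]:  # a dealer cycling through items
    if will_trade(holding, offered) and wealth >= fee:
        holding, wealth = offered, wealth - fee

print(holding, wealth)  # "A" 4.0 -- back where it started, six fees poorer
```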
In fact, it’s worse than that. Utility is still up for grabs, even if it does obey the axioms, because we will soon be in a position to modify our own utility functions! (If we aren’t already: addictive drugs alter your ability to experience non-drug pleasure; and could psychotherapy change my level of narcissism, or my level of empathy?)
Indeed, the entire project of Friendly AI can be taken to be the project of specifying the right utility function for a superintelligent AI. If any utility function that follows the axioms would qualify, then a paperclipper would be just fine.
So not only does “the utility function is not up for grabs” not work in this situation (because I’m saying precisely that women who behave this way are denying themselves happiness); I’m not sure it works in any situation. Even if you are sufficiently rational that you really do obey a consistent utility function in everything you do, that could still be a bad utility function (you could be a psychopath, or a paperclipper).