Yes it is, if your “utility function” doesn’t obey the axioms of von Neumann-Morgenstern utility (which it doesn’t, if you are at all a normal human).
Prospect theory? Allais paradox?
Seriously, what are we even doing on Less Wrong, if you think that the decisions people make are automatically rational just because people made them?
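To spell out the Allais point: no assignment of utilities, however risk-averse, reproduces the choice pattern most people actually report. Here is a minimal sketch, using the standard Allais payoffs; the particular utility functions tried are just illustrative assumptions of mine.

```python
import math

# Lotteries as (probability, payoff-in-millions) pairs -- the standard Allais setup.
g1a = [(1.00, 1)]                          # $1M for sure
g1b = [(0.89, 1), (0.10, 5), (0.01, 0)]    # mostly $1M, small shot at $5M
g2a = [(0.11, 1), (0.89, 0)]               # 11% chance of $1M
g2b = [(0.10, 5), (0.90, 0)]               # 10% chance of $5M

def eu(lottery, u):
    """Expected utility of a lottery under utility function u."""
    return sum(p * u(x) for p, x in lottery)

# A few illustrative utility functions (linear, concave, very risk-averse).
candidates = {
    "linear":      lambda x: x,
    "sqrt":        math.sqrt,
    "log(1+x)":    math.log1p,
    "1 - e^(-5x)": lambda x: 1 - math.exp(-5 * x),
}

for name, u in candidates.items():
    picks_1a = eu(g1a, u) > eu(g1b, u)
    picks_2a = eu(g2a, u) > eu(g2b, u)
    print(f"{name:12s} prefers 1A: {picks_1a}   prefers 2A: {picks_2a}")

# Whatever u you plug in, the two columns always match, because
# EU(1A) - EU(1B) and EU(2A) - EU(2B) are the *same* expression:
#   0.11*u(1) - 0.10*u(5) - 0.01*u(0).
# The modal human pattern (1A in the first pair, 2B in the second) is
# therefore ruled out for every possible utility function -- it violates
# the independence axiom, which is exactly Allais's point.
```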
Actually, if your “utility function” doesn’t obey the axioms of von Neumann-Morgenstern utility, it’s not a utility function in the normal sense of the word.
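For reference, the four axioms being invoked, stated from memory in standard notation (here $\succeq$ is weak preference, $\sim$ is indifference, and $pA + (1-p)C$ is the lottery mixing $A$ and $C$ with probability $p$):

```latex
\begin{align*}
\textbf{Completeness:} \quad & A \succeq B \ \text{ or } \ B \succeq A \\
\textbf{Transitivity:} \quad & A \succeq B \ \text{ and } \ B \succeq C
                               \ \Rightarrow\ A \succeq C \\
\textbf{Continuity:}   \quad & A \succeq B \succeq C \ \Rightarrow\
                               \exists\, p \in [0,1] :\ pA + (1-p)C \sim B \\
\textbf{Independence:} \quad & A \succeq B \ \Rightarrow\
                               pA + (1-p)C \ \succeq\ pB + (1-p)C
                               \quad \text{for all } C \text{ and } p \in (0,1]
\end{align*}
```

The representation theorem then says a preference relation satisfies all four exactly when it can be written as maximizing the expectation of some utility function u, unique up to positive affine transformation; that uniqueness is what licenses calling anything that flunks the axioms not a utility function in the usual sense.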
I suppose that’s why pnrjulius put “utility function” in quotes.
Downvoted for trying to argue against a principle that is actually irrelevant to your claims. (“The utility function is not up for grabs” doesn’t mean that decisions are always rational, and is actually inapplicable here.)
I didn’t mean decisions are always rational. I meant that it makes no sense for preferences to be rational or irrational: they just are. Rationality is a property of decisions, not of preferences: if a decision maximizes expected utility relative to your preferences, it’s rational; if it doesn’t, it isn’t.
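A toy sketch of that distinction, with made-up numbers: two agents face the same pair of lotteries, hold different (but internally consistent) preferences, and each one’s rational decision is the opposite of the other’s.

```python
import math

def eu(lottery, u):
    """Expected utility of a lottery given as (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in lottery)

options = {
    "safe":  [(1.0, 50)],             # $50 for certain
    "risky": [(0.5, 0), (0.5, 120)],  # coin flip between $0 and $120
}

agents = {
    "risk-averse (u = sqrt(x))": math.sqrt,
    "risk-neutral (u = x)":      lambda x: x,
}

for name, u in agents.items():
    best = max(options, key=lambda k: eu(options[k], u))
    print(f"{name} rationally chooses: {best}")

# sqrt agent:   eu(safe) ~= 7.07 > eu(risky) ~= 5.48  -> "safe"
# linear agent: eu(safe)  = 50   < eu(risky)  = 60    -> "risky"
# Opposite decisions, both rational: each maximizes expected utility
# relative to that agent's own preferences.
```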
Preferences can, however, be inconsistent.
And rational decision-making across inconsistent preferences is sometimes difficult to distinguish from irrational decision-making.
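The classic illustration of that difficulty is the money pump: an agent whose preferences cycle (hypothetical goods A, B, C with A preferred to B, B to C, and C to A) will pay a small fee for each “upgrade” and can be walked around the cycle forever. A toy sketch:

```python
# Cyclic (inconsistent) preferences: A > B, B > C, C > A.
# better[x] is the good this agent strictly prefers to x.
better = {"B": "A", "C": "B", "A": "C"}

holding, fees_paid = "A", 0
for _ in range(6):
    # The agent gladly pays a small fee to trade up to something it prefers.
    holding = better[holding]
    fees_paid += 1

print(f"After {fees_paid} 'improving' trades the agent holds {holding} again,")
print(f"having paid {fees_paid} fees for a net gain of nothing.")

# Each individual trade looks locally rational (it moves to a preferred good),
# which is why behavior driven by inconsistent preferences can be hard to
# tell apart from plain irrationality.
```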
In fact, it’s worse than that. Utility is still up for grabs even if it does obey the axioms, because we will soon be able to modify our own utility functions! (If we aren’t already: addictive drugs alter your ability to experience non-drug pleasure; and could psychotherapy change my level of narcissism, or my level of empathy?)
Indeed, the entire project of Friendly AI can be taken to be the project of specifying the right utility function for a superintelligent AI. If any utility function that obeys the axioms would qualify, then a paperclipper would be just fine.
So not only does “the utility function is not up for grabs” not work in this situation (because I’m saying precisely that women who behave this way are denying themselves happiness); I’m not sure it works in any situation. Even if you are sufficiently rational that you really do obey a consistent utility function in everything you do, that could still be a bad utility function (you could be a psychopath, or a paperclipper).