IAWY right up to the penultimate sentence. Humans continuously modify their utility functions to maintain a steady level of happiness. A change in your utility function’s input—like winning the lottery, or suffering a permanent injury—has only a temporary effect. The day you collect your winnings, you’re super-happy; a year later, you’re no happier than you were when you bought the ticket. If you’re considering picking up a crack habit, you had better realize that in a year your baseline happiness will be no higher than it is now, despite all the things you’ll sacrifice trying to be happy.
Supplying yourself with cocaine and money isn’t an effective way to achieve a goal of happiness, just like supplying a country with foreign aid isn’t an effective way to improve quality of life there. The rational thing to do is to grab the levers of your hedonic treadmill and set your baseline where you want it to be. But it’s risky to monkey around with such things—which is why I’m interested in That Which Must Not Be Named. I have no personal stake in Eliezer’s mission, but his methodical approach to studying utility functions suggests which parts of your utility function you can safely alter.
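The "temporary effect" claim above can be sketched as a toy set-point model (my own illustration, not anything from the thread; all names and numbers here are made up): a one-time shock moves happiness off a fixed baseline, and each time step it decays back toward that baseline.

```python
def simulate_happiness(set_point=5.0, shock=10.0, decay=0.5, steps=12):
    """Return the happiness trajectory after a one-time shock.

    Each step, the deviation from the set point shrinks by `decay`,
    so happiness drifts back to baseline regardless of the shock size.
    """
    happiness = set_point + shock  # the day you collect your winnings
    trajectory = [happiness]
    for _ in range(steps):
        happiness = set_point + (happiness - set_point) * (1 - decay)
        trajectory.append(happiness)
    return trajectory

traj = simulate_happiness()
# traj starts at 15.0; by the end, the deviation from 5.0 is tiny
```

With these (arbitrary) parameters, after a dozen steps the remaining deviation is `10 * 0.5**12`, under a hundredth of a point—the lottery jolt has effectively vanished.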
That does introduce another level of complication. Utility functions assume a static model. They are not happiness functions. We talk about maximizing utility all the time on LW, when really we want to maximize happiness.
Maximizing your happiness is a higher level of rationality than maximizing your utility. I think it’s still okay to sometimes define “rational” as maximizing expected utility.
(I don’t think foreign aid has anything to do with the delta-nature of happiness, btw.)
If you “really want to maximize” X, how is X not utility?
I think the point Phil is trying to make is the difference between “instantaneous utility”, which is a function of the state of things at some point in time (actually, in phase space), and “general utility”, which is a function that also takes time (or position in phase space) as an argument.
While not immediately obvious, I think his naming choice could be worse. According to my non-scientific poll of one (me), when seeing the word “happiness” people think of time as a parameter instinctively, but consider specific instants for “utility” unless there are other cues in the context.
A strict definition such as yours would require coining a few new words for the discussion. That’s not a bad thing per se, I just can’t think of any that have the advantage of being already used as such in general vocabulary.
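The instantaneous-vs-general distinction above can be made concrete with two function signatures (my own framing; the function names and the example weights are hypothetical, not anything the commenters proposed):

```python
def instantaneous_utility(state):
    # Scores the world at a single point in (phase) space.
    return state.get("wealth", 0) + 2 * state.get("health", 0)

def general_utility(trajectory):
    # Scores a whole path through time: a function that also takes
    # the time dimension into account. Here it simply sums the
    # instantaneous values over the sequence of states.
    return sum(instantaneous_utility(s) for s in trajectory)

path = [{"wealth": 1, "health": 1}, {"wealth": 3, "health": 0}]
# general_utility(path) aggregates the two instants: (1+2) + (3+0) = 6
```

Plain summation is only one possible aggregation; a general utility function could just as well discount later states or care about the ordering, which is exactly what makes it a different object from the instantaneous one.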
This is an area that is generally plagued with ambiguities and inconsistent usage—which makes it even more important to be clear about what we mean. I think this will usually require the use of adjectives/modifiers, rather than attempting to define already ambiguous words in our own idiosyncratically preferred ways.
Instantaneous vs. life-time (or smaller life-slice) utility seems to make a clear distinction; decision-utility (i.e. the utility embodied in whatever function describes our decisions) vs. experienced utility (e.g. happiness or other psychological states) seem to make clear-ish distinctions. (Though if we care about non-experienced things, then maybe we need to further distinguish either of these from true-utility.)
But using “utility” and “happiness” to distinguish between different degrees of time aggregation seems unnecessarily confusing to me.
Yes, thanks; that’s what I meant.
If we really wanted to maximize happiness, then we’d jump at the chance to wirehead ourselves. We don’t, because happiness is only an indicator of what we desire, not the thing we desire itself. Making yourself happier using drugs is like making yourself wealthier by telling your bank to lie to you on account statements.
It seems as though you’re equivocating over ‘happiness’. You suggest that happiness is just an indicator, not the thing we desire itself. Your analogy suggests otherwise. Having your bank lie to you on your statements does not actually make you wealthier. Similarly, using drugs to feel pleasure doesn’t actually make you happier.
I prefer the latter usage.
Actually, happiness is one of the things I desire; it’s just not the only thing I desire. And drug-induced happiness can be perfectly real, even if it’s not necessarily the optimal way for me to achieve a positive emotional state, all things considered.
Making myself happier using drugs doesn’t seem at all analogous to telling my bank to lie.
Countries that rely heavily on foreign aid risk becoming self-stabilizing systems, in which increasing foreign aid to Hypothetistan reduces the incentives for Hypothetistanis to be productive instead of providing the capital they need to act on those incentives. This is by no means a complete explanation—I’m just spelling out the analogy to self-stabilizing systems more explicitly.
The specs for happiness require it to be self-stabilizing. Poverty can be self-stabilizing, but doesn’t have to be.