Utilons vs. Hedons
Related to: Would Your Real Preferences Please Stand Up?
I have to admit, there are a lot of people I don’t care about. Comfortably over six billion, I would bet. It’s not that I’m a callous person; I simply don’t know that many people, and even if I did I hardly have time to process that much information. Every day hundreds of millions of incredibly wonderful and terrible things happen to people out there, and if they didn’t, I wouldn’t even know it.
On the other hand, my professional goals deal with economics, policy, and improving decision making for the purpose of making millions of people I’ll never meet happier. Their happiness does not affect my experience of life one bit, but I believe it’s a good thing and I plan to work hard to figure out how to create more happiness.
This underscores an essential distinction in understanding any utilitarian viewpoint: the difference between experience and values. One can value unweighted total utility. One cannot experience unweighted total utility. It will always hurt more if a friend or loved one dies than if someone you never knew in a place you never heard of dies. I would be truly amazed to meet someone who is an exception to this rule and is not an absolute stoic. Your experiential utility function may have coefficients for other people’s happiness (or at least your perception of such), but there’s no way it has an identical coefficient for everyone everywhere, unless that coefficient is zero. On the other hand, you probably care in an abstract way about whether people you don’t know die or are enslaved or imprisoned, and may even contribute some money or effort to prevent such from happening. I’m going to use “utilons” to refer to value utility units and “hedons” to refer to experiential utility units; I’ll demonstrate shortly that this is a meaningful distinction, and that the fact that we value utilons over hedons explains much of why our moral reasoning appears to fail.
Let’s try a hypothetical to illustrate the difference between experiential and value utility. An employee of Omega, LLC,[1] offers you a deal to absolutely double your hedons but kill five people in, say, rural China, then wipe your memory of the deal. Do you take it? What about five hundred? Five hundred thousand?
I can’t speak for you, so I’ll go through my evaluation of this deal and hope it generalizes reasonably well. I don’t take it at any of these values. There’s no clear hedonistic explanation for this—after all, I forget it happened. It would be absurd to say that the disutility I experience between entering the agreement and having my memory wiped is so tremendous as to outweigh everything I will experience for the rest of my life (particularly since I immediately forget this disutility), and this is the only way I can see my rejection could be explained with hedons. In fact, even if the memory wipe weren’t part of the deal, I doubt the act of having a few people killed would really cause me more displeasure than doubling my future hedons would yield; I’d bet more than five people have died in rural China as I’ve written this post, and it hasn’t upset me in the slightest.
The reason I don’t take the deal is my values; I believe it’s wrong to kill random people to improve my own happiness. If I knew that they were people who sincerely wanted to be dead or that they were, say, serial killers, my decision would be different, even though my hedonic experience would probably not be that different. If I knew that millions of people in China would be significantly happier as a result, as well, then there’s a good chance I’d take the deal even if it didn’t help me. I seem to be maximizing utilons and not hedons, and I think most people would do the same.
Also, as another example so obvious that I feel like it’s cheating, if most people read the headline “100 workers die in Beijing factory fire” or “1000 workers die in Beijing factory fire,” they will not feel ten times the hedonic blow, even if they live in Beijing. That it is ten times worse is measured in our values, not our experiences; these values are correct, since there are roughly ten times as many people who have seriously suffered from the fire, but if we’re talking about people’s hedons, no individual suffers ten times as much.
In general, people value utilons much more than hedons. The illegality of drugs is one illustration of this. Arguments for (and against) drug legalization are an even better illustration. Such arguments usually involve weakening organized crime, increasing safety, reducing criminal behaviour, reducing expenditures on prisons, improving treatment for addicts, and improving similar values. “Lots of people who want to will get really, really high” is only very rarely touted as a major argument, even though the net hedonic value of drug legalization would probably be massive (just as the hedonic cost of Prohibition in the 1920s was clearly massive).
As a practical matter, this is important because many people do things precisely because they are important in their abstract value system, even if they result in little or no hedonic payoff. This, I believe, is an excellent explanation of why success is no guarantee of happiness; success is conducive to getting hedons, but it also tends to cost a lot of hedons, so there is little guarantee that earned wealth will be a net positive (and, with anchoring, hedons will get a lot more expensive than they are for the less successful). On the other hand, earning wealth (or status) is a very common value, so people pursue it irrespective of its hedonistic payoff.
It may be convenient to argue that the hedonistic payoffs must balance out, but this does not make it the case in reality. Some people work hard on assignments that are practically meaningless to their long-term happiness because they believe they should, not because they have any delusions about their hedonistic payoff. To say, “If you did X instead of Y because you ‘value’ X, then the hedonistic cost of breaking your values must exceed Y-X,” is to win an argument by definition; you have to actually figure out the values and see if that’s true. If it’s not, then I’m not a hedon-maximizer. You can’t then assert that I’m an “irrational” hedon maximizer unless you can make some very clear distinction between “irrationally maximizing hedons” and “maximizing something other than hedons.”
This dichotomy also describes akrasia fairly well, though I’d hesitate to say it truly explains it. Akrasia is what happens when we maximize our hedons at the expense of our utilons. We play video games/watch TV/post on blogs because it feels good, and we feel bad about it because, first, “it feels good” is not recognized as a major positive value in most of our utilon-functions, and second, because doing our homework is recognized as a major positive value in our utilon functions. The experience makes us procrastinate and our values make us feel guilty about it. Just as we should not needlessly multiply causes, neither should we erroneously merge them.
Furthermore, this may cause our intuition to seriously interfere with utility-based hypotheticals, such as these. Basically, you choose to draw cards, one at a time, that have a 10% chance of killing you and a 90% chance of doubling your utility. Logically, if your current utility is positive and you assign a utility of zero[2] (or greater) to your death (which makes sense in hedons, but not necessarily in utilons), you should draw cards until you die. The problem, of course, is that if you draw a card a second, you will be dead within a minute with probability ~0.9982, and dead within an hour with probability ~1 − 1.88×10^-165.
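For concreteness, here is a quick numerical check of those figures (a minimal sketch in Python; the 10% per-card death chance and the one-card-per-second pace are the ones stated above):

```python
# Probability of having died somewhere in the first n draws,
# given an independent 10% chance of death on each card.
def p_dead_after(n_draws: int) -> float:
    return 1 - 0.9 ** n_draws

print(p_dead_after(60))    # one minute of drawing: ~0.9982
print(p_dead_after(3600))  # one hour: prints 1.0 in floating point; the true
                           # shortfall from certainty is ~1.88e-165
```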
There’s a bigger problem that causes our intuition to reject this hypothetical as “just wrong”: it leads to major errors in both utilons and hedons. The mind cannot comprehend unlimited doubling of hedons. I doubt you can imagine being 2^60 times as happy as you are now; indeed, I doubt it is meaningfully possible to be so happy. As for utilons, most people assign a much greater value to “not dying” than to having more hedons. Thus, a hedonic reading of the problem returns an error because repeated doubling feels meaningless, and a utilon reading (may) return an error if we assign a significant enough negative value to death. But if we look at it purely in terms of numbers, we end up very, very happy right up until we end up very, very dead.
Any useful utilitarian calculus needs to take into account that hedonic utility is, for most people, incomplete. Value utility is often a major motivating factor, and it need not translate perfectly into hedonic terms. Incorporating value utility seems necessary to have a map of human happiness that actually matches the territory. It also may be good that it can be easier to change values than it is to change hedonic experiences. But assuming people maximize hedons, and then assuming quantitative values that conform to this assumption, proves nothing about what actually motivates people and risks serious systematic error in furthering human happiness.
We know that our experiential utility cannot encompass all that really matters to us, so we have a value system that we place above it precisely to avoid risking destroying the whole world to make ourselves marginally happier, or to avoid pursuing any other means of gaining happiness that carries tremendous potential expense.
1- Apparently Omega has started a firm due to excessive demand for its services, or to avoid having to talk to me.
2- This assumption is rather problematic, though zero seems to be the only correct value of death in hedons. But imagine that you just won the lottery (without buying a ticket, presumably) and got selected as the most important, intelligent, attractive person in whatever field or social circle you care most about. How bad would it be to drop dead? Now, imagine you just got captured by some psychopath and are going to be tortured for years until you eventually die. How bad would it be to drop dead? Assigning zero (or the same value, period) to both outcomes seems wrong. I realize that you can say that death in one is negative and in the other is positive relative to expected utility, but still, the value of death does not seem identical, so I’m suspicious of assigning it the same value in both cases. I realize this is hand-wavy; I think I’d need a separate post to address this issue properly.
I’m sorry, but this cannot possibly explain the akrasia I have experienced. Living a purposefully hedonistic life is widely considered low-status, so most people do not admit to their consciously hedonistic goals. Thus, the goals we hear about akrasia preventing people from pursuing are all noble, selfless goals: “I would like to do this thing that provides me utility but not hedonistic pleasure, but that damned akrasia is stopping me.” With that as your only evidence, it is not unreasonable that you should conclude that akrasia occurs because of the divide between utilons and hedons.
Someone has to take the status hit and end this silence, and it might as well be me. I live my life mostly hedonically. I apologize to everyone who wanted me to optimize for their happiness, but that’s the truth. (I may write a top-level article eventually in defense of this position.) So, my utility and my hedonic pleasure are basically unified. But I still suffer akrasia! I will sometimes have an activity rich with hedons available to me, but I will instead watch TV and settle for the meager trickle of hedons it provides. I procrastinate in taking pleasure! It is a surprising result, one that a non-hedonist would likely not predict, but it’s true. This thing we call akrasia has deeper roots than just resistance against self-abnegation.
Is there a time-horizon aspect to this behavior? (That is, can it be explained by saying that highly enjoyable activities with some start-up time are deferred in favor of flopping on the couch and grabbing the remote control?)
Smiling is an example of hedonistic activity with no start-up time.
This discussion has made me feel I don’t understand what “utilon” really means. Hedons are easy: clearly happiness and pleasure exist, so we can try to measure them. But what are utilons?
“Whatever we maximize”? But we’re not rational, quite inefficient, and whatever we actually maximize as we are today probably includes a lot of pain and failures and isn’t something we consciously want.
“Whatever we self-report as maximizing”? Most of the time this is very different from what we actually try to maximize in practice, because self-reporting is signaling. And for a lot of people it includes plans or goals that, when achieved, are likely (or even intended) to change their top-level goals drastically.
“If we are asked to choose between two futures, and we prefer one, that one is said to be of higher utility.” That’s a definition, yes, but it doesn’t really prove that the collection-of-preferred-universes can be described any more easily than the real decision function of which utilons are supposed to be a simplification. For instance, what if by minor and apparently irrelevant changes in the present, I can heavily influence all of people’s preferences for the future?
Also a note on the post:
That definition feels too broad to me. Typically akrasia has two further attributes:
1. Improper time discounting: we don’t spend an hour a day exercising even though we believe it would make us lose weight, with a huge hedonic payoff if we maximize hedons over a time horizon of a year.
2. Feeling so bad about not doing the necessary task that we don’t really enjoy ourselves no matter what we do instead (frequently leading to doing nothing for long periods of time). Hedonically, even doing the homework usually feels a lot better (after the first ten minutes) than putting it off, and we know this from experience—but we just can’t get started!
I agree that the OP is somewhat ambiguous on this. For my own part, I distinguish between at least the following four categories of things-that-people-might-call-a-utility-function. Each involves a mapping from world histories into the reals according to:
1. how the history affects our mind/emotional states;
2. how we value the history from a self-regarding perspective (“for our own sake”);
3. how we value the history from an impartial (moral) perspective; or
4. the choices we would actually make between different world histories (or gambles over world histories).
Hedons are clearly the output of the first mapping. My best guess is that the OP is defining utilons as something like the output of 3, but it may be a broader definition that could also encompass the output of 2, or it could be 4 instead.
I guess that part of the point of rationality is to get the output of 4 to correspond more closely to the output of either 2 or 3 (or maybe something in between): that is to help us act in greater accordance with our values—in either the self-regarding or impartial sense of the term.
“Values” are still a bit of a black box here though, and it’s not entirely clear how to cash them out. I don’t think we want to reduce them either to actual choices or simply to stated values. Believed values might come closer, but I think we probably still want to allow that we could be mistaken about them.
What’s the difference between 1 and 2? If we’re being selfish then surely we just want to experience the most pleasurable emotional states. I would read “values” as an individual strategy for achieving this. Then, being unselfish is valuing the emotional states of everyone equally… so long as they are capable of experiencing equally pleasurable emotions, which may be untestable.
Note: just re-read OP, and I’m thinking about integrating over instantaneous hedons/utilons in time and then maximising the integral, which it didn’t seem like the OP did.
We can value more than just our emotional states. The experience machine is the classic thought experiment designed to demonstrate this. Another example that was discussed a lot here recently was the possibility that we could value not being deceived.
Which is why it’s pretty blatantly obvious that humans aren’t utility maximizers on our native hardware. We’re not even contextual utility maximizers; we’re state-dependent error minimizers, where what errors we’re trying to minimize are based heavily on short-term priming and longer-term time-decayed perceptual averages like “how much relaxation time I’ve had” or “how much I’ve gotten done lately”.
Consciously and rationally, we can argue we ought to maximize utility, but our behavior and emotions are still controlled by the error-minimizing hardware, to the extent that it motivates all sorts of bizarre rationalizations about utility, trying to force the consciously-appealing idea of utility maximization to contort itself enough to not too badly violate our error-minimizing intuitions. (That is, if we weren’t error-minimizers, we wouldn’t feel the need to reduce the difference between our intuitive notions of morality, etc. and our more “logical” inclinations.)
Then, can you tell me what utility is? What is it that I ought to maximize? (As I expanded on in my toplevel comment)
Something that people argue they ought to maximize, but have trouble precisely defining. ;-)
Has anybody ever proposed a way to value utilons?
It would be easier to discuss about them if we knew exactly what they can mean, that is, in a more precise way than just by the “unit of utility” definition. For example, how to handle them through time?
So why not define them with something like this:
Suppose we could precisely measure the level of instant happiness of a person on a linear scale from 1 to 10, with 1 being the worst pain imaginable and 10 the best of climaxes. This level is constantly varying, for everybody. In this context, one utilon could be the value of an action that raises the happiness of one person by one point, on this scale, for one hour.
Then, for example, if you help an old lady to cross the road, making her a bit happier during the next hour (let’s say she would have been around 6/10 happy, but thanks to you she will be 6.5/10 happy during this hour), then your action has a utility of half a utilon. You just created 0.5 utilons, and that’s a perfectly well-defined statement; isn’t that great?
Using that, a hedon is nothing more than a utilon that we create by raising our own happiness.
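If it helps to make the proposal concrete, here is a minimal sketch of that metric (Python; the 1-10 scale and the old-lady example come from the comment above, and the function name is purely illustrative):

```python
def utilons(delta_happiness: float, hours: float) -> float:
    """One utilon = raising one person's happiness by one point
    (on the proposed 1-10 scale) for one hour."""
    return delta_happiness * hours

# Helping the old lady across the road: 6/10 -> 6.5/10 for one hour.
print(utilons(0.5, 1))  # 0.5 utilons
```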
What you describe are hedons. It’s misleading to call them utilons. For rational (not human) agents, utilons are the value units of a utility function which they try to maximize. But humans don’t try to maximize hedons, so hedons are not human-utilons.
Then would you agree that any utility function should, in the end, maximize hedons (if we were rational agents, that is)? If yes, that would mean that hedons are the goal and utilons are a tool, a sub-goal, which doesn’t seem to be what OP was saying.
No, of course not. There’s nothing that a utility function should maximize, regardless of the agent’s rationality. Goal choice is arational; rationality has nothing to do with hedons. First you choose goals, which may or may not be hedons, and then you rationally pursue them.
This is best demonstrated by forcibly separating hedon-maximizing from most other goals. Take a wirehead (someone with a wire into their “pleasure center” controlled by a thumb switch). A wirehead is as happy as possible (barring changes to neurocognitive architecture), but they don’t seek any other goals, ever. They just sit there pressing the button until they die. (In experiments with mice, the mice wouldn’t take time off from pressing the button even to eat or drink, and died from thirst. IIRC this went on happening even when the system was turned off and the trigger no longer did anything.)
Short of the wireheading state, no one is truly hedon-maximizing. It wouldn’t make any sense to say that we “should” be.
Wireheads aren’t truly hedon-maximizing either. If they were, they’d eat and drink enough to live as long as possible and push the button a greater total number of times.
They are hedon-maximizing, but with a very short time horizon of a few seconds.
If we prefer time horizons as long as possible, then we can conclude that hedon-maximizing implies first researching the technology for medical immortality, then building an army of self-maintaining robot caretakers, and only then starting to hit the wirehead switch.
Of course this is all tongue in cheek. I realize that wireheads (at today’s level of technology) aren’t maximizing hedons; they’re broken minds. When the button stops working, they don’t stop pushing it. Adaptation executers in an induced failure mode.
It depends on your discount function: if its integral is finite over an infinite period of time (e.g. in case of exponential discount) then it will depend on the effort of reaching immortality whether you will go that route or just dedicate yourself to momentary bliss.
This example is hardly hypothetical. According to GiveWell, you can save the life of one African person for $200 - $1000.
Almost everyone has spent $5000 on things that they didn’t need—for example a new car as opposed to a second hand one, a refurbishment of a room in the house, a family holiday. $5000 comes nowhere close to “doubling your hedons”—in fact it probably hardly makes a dent. Furthermore, almost everyone is aware of this fact, but we conveniently don’t pay any attention to it, and our subconscious minds don’t remind us about it because the deaths in Africa are remote and impersonal.
Since I know of very few people who spend literally all their spare money on saving lives at $1000 per life, and almost everyone would honestly claim that they would pay $200 − 1000 to save someone from a painful death, it is fair to say that people pretty universally don’t maximize “utilons”.
This is intriguing, but what if the main indirect cause of death in Africa is overpopulation? Depending on the method by which the life is saved, you might not actually do much good by saving it. It’s been touted, for example, that food aid in Africa has been bad for its inhabitants in the long-term. If there is evidence that there are ways to permanently improve conditions to that extent for that cheap, then this would be very compelling.
I am not an expert on development in Africa, but my guess is that there is no single cause to the overall problem. Africa’s population density is 26 people per km^2 (source), whereas the EU’s population density is 114 people per km^2 (source). Thus it is probably the case that Africa could easily sustain its current population if it were more economically developed.
Reducing the population artificially, whether by force or by education wouldn’t make the problem magically go away, though it may help as part of an overall strategy.
If one is interested in charitable projects to improve overall African standards of living, take a look at the Copenhagen Consensus. Improvements in infrastructure, peacekeeping, health and women’s education are all needed.
I think the main reason food aid has been criticized is that it is often implemented in a way which a) empowers dictators or b) reduces profit opportunities for African farmers and food distributors, which reduces their incentive to invest in improving their farming or other businesses.
IOW, over-population is not the source of the negative externalities.
How reliable is this information?
I found a second source
According to Peter Unger, it is more like one dollar:
Even if this is true, I think it is still more important to spend money to reduce existential risks given that one of the factors is 6 billion + a much larger number for successive generations + humanity itself.
One dollar is the approximate cost if the right treatment is in the right place at the right time. How much does it cost to get the right treatment to the right place at the right time?
The price of the salt pill itself is only a few pennies. The one dollar figure was meant to include overhead. That said, the Copenhagen report mentioned above ($64 per death averted) looks more credible. But during a particular crisis the number could be less.
In the footnote, Unger quotes UNICEF’s 10 cents and makes up the 40 cents. UNICEF lied to him. Next time UNICEF tells you it can save a life for 10 cents, ask it what percentage of its $1 billion budget it’s spending on this particular project.
According to the Copenhagen Consensus cited by SforSingularity, the goal is to provide about 100 pills per childhood, and most children would have survived the diarrhea anyhow. (To get it as effective as $64/life, diarrhea has to be awfully fatal; more fatal than the article seems to say.) They put overhead at about the same as the cost of the pills, which I find hard to believe. But they’re not making it up out of thin air: they’re looking at actual clinics dispensing ORT and vitamin A. (Actually, they apply to zinc the overhead for vitamin A, which is distributed twice a year with 80% penetration, while zinc is distributed with ORT as needed at clinics, with much less penetration. I don’t know which is cheaper, but that’s sloppy.)
CC says that only 1/3 of bouts of diarrhea are reached by ORT, but the death rate has dropped by 2/3. That’s weird. My best guess is that multiple bouts cumulatively weaken the child, which suggests that increasing from 1/3 to 100% would have diminishing returns on diarrhea bouts but might have hard-to-account benefits in general mortality. (Actually, my best guess is that they cherry-picked numbers, but the positive theory is also plausible.)
ETA: there’s a simple explanation: since the parents seek treatment at the clinics, they can tell which bouts are bad. But I think my first two explanations play a role, too.
I’m very suspicious that all these numbers may be dramatic underestimates, ignoring costs like bribing the clinicians or dictators. (I haven’t looked at them carefully, so if they do produce numbers based on actual start-to-finish interventions, please tell me.) It would be interesting to know how much it cost outsiders to lean on India’s salt industry and get it to add iodine.
+1 for above.
As a separate question, what would you do if you lived in a world where Peter Unger was correct? And what if it was 1 penny instead of 1 dollar and giving the money wouldn’t cause other problems? Would you never have a burger for lunch instead of rice since it would mean 100 children would die who could otherwise be saved?
Salt as rehydration therapy?!
People lose electrolytes in their body fluids. If you rehydrate them without replacing the electrolytes, they get hyponatremia.
No; it’s fair to say that their utilons are not a linear function of human lives saved.
If you think there are too many people in the world, you might be willing to pay to prevent the saving of lives.
Funny thing is, the only people I know who don’t agree that there are too many people in the world are objectivists, libertarians, and extropians (there’s a high correlation between these categories), who are among the least likely to give money to save people in Africa.
Africa’s population density is 26 people per km^2 (source), whereas the EU’s population density is 114 people per km^2 (source). Thus it is probably the case that Africa could easily sustain its current population if it were more economically developed.
That’s a huge “if”.
Sending money there is not a way to get the local economy to develop. It’s been done for decades and the African economy is barely developed.
IMO, the main reason aid has been ineffective is the particular way it has been given. It often a) empowers dictators or b) reduces profit opportunities for African farmers and food distributors, which reduces their incentive to invest in improving their farming or other businesses.
In my opinion, it would be easy to make sending money somewhat helpful. But even if I’m right, somewhat helpful is far from maximally helpful.
Something like the Grameen Bank would probably be the best bet. If there’s room for economic growth but no capital to power it, then making microcredit available seems like the obvious choice.
I suspect we already indirectly, incrementally cause the death of unknown persons in order to accumulate personal wealth and pleasure. Consider goods produced in factories causing air and water contamination affecting incumbent farmers. While I’d like to punish those goods’ producers by buying alternatives, it’s apparently not worth my time*.
Probably, faced with the requirement to directly and completely cause a death, we would feel wrong enough about this (even with a promise of memory-wipe) to desist. But I find it difficult to consider such a situation honestly when I’m so strongly driven to signal pervasively (even to myself) that I am not an evil person. Perhaps a sufficiently anonymous poll could give us a better indication of what people would actually do.
There are certainly scenarios where under average utility maximization, you’d want to kill innocent people—draw lots if you like, but there’s only enough air for 3 of us to survive the return trip from Mars.
* And maybe the economic benefit to the producing region is greater than the harm to the backyarders, and they just need to spend more in compensating or protecting them. But I believe there are some unambiguous cases where I ought to avoid consuming said product at the very least.
In general, industrialized economies have better health, lifespan, standard of living, etc. You seem to be paying attention only to the negative side effects of your manufactured goods.
(That graph is not proof. Correlation is not causation. This is a short comment that makes a small point. Go easy on me.)
Yes, but I acknowledged that possibility in my asterisk turned bullet point (thanks, markup).
To get the asterisk back, use “\*” instead of “*”.
Nice post! This distinction should clear up several confusions. Incidentally, I don’t know if there’s a word for the opposite of a utilon, but the antonym of “hedon” is “dolor”.
disutilon?
2 utilons + 2 disutilons = 2 futilons
If we can split the futilon, we’ll double everyone’s utility function without needing Omega!
Sadly, the project was then used to bombard enemy countries with disutilons...
The card drawing paradox is isomorphic to the old paradox of the game where you double your money each time a coin comes up heads (the paradox being that simplistic theory assigns infinite value to both games). The solution is the same in each case: first, the entity underwriting the game cannot pay out infinite resources, and second, your utility function is not infinitely scalable in whatever resource is being paid.
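For reference, the divergence in the coin-doubling game, and the way a concave (or bounded) utility function tames it, can be checked with a short sketch (Python; the payoff scheme below is the standard St. Petersburg setup, not something specified in this thread):

```python
from math import log

def expected_utility(utility, max_rounds: int = 60) -> float:
    """E[u(payout)] for the coin-doubling game, truncated at max_rounds.
    Round k is reached with probability 1/2**k and pays 2**k."""
    return sum(0.5 ** k * utility(2 ** k) for k in range(1, max_rounds + 1))

print(expected_utility(lambda x: x))       # 60.0: grows without bound as max_rounds grows
print(expected_utility(lambda x: log(x)))  # ~1.386 (= 2 ln 2): converges for log utility
```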
I have the sense that much of this was written as a response to this paradox in which maximizing expected utility tells you to draw cards until you die.
Psychohistorian wrote:
The paradox is stated in utilons, not hedons. But if your hedons were measured properly, your inability to imagine them now is not an argument. This is Omega we’re talking about. Perhaps it will augment your mind to help you reach each doubling. Whatever. It’s stipulated in the problem that Omega will double whatever the proper metric is. Futurists should never accept “but I can’t imagine that” as an argument.
We need to look at it purely in terms of numbers if we are rationalists, or let us say “ratio-ists”. Is your argument really that numeric analysis is the wrong thing to do?
Changing the value you assign life vs. death doesn’t sidestep the paradox. We can rescale the problem by an affine transformation so that your present utility is 1 and the utility of death is 0. That will not change the results of expected utility maximization.
Let’s try a new card game. Losing isn’t death, it’s 50 years of torture, followed by death in the most horribly painful way imaginable, for you and everyone you know. We’ll say that utility is zero, your current utility is one, and a win doubles your current utility. Do you take the bet?
Or, losing isn’t death, it’s having to listen to a person scratch a chalkboard for 15 seconds. We’ll call that 0, your current situation 1, and a win 2. Do you take the bet?
This is the problem with such scaling. You’re defining “double your utility” as “the amount of utility that would make you indifferent to an even-odds bet between X and Y” and then proposing a bet between X and Y where the odds are better than even in your favor. No other definition will consistently yield the results you claim (or at least no other definition type—you could define it the same way but with a different odds threshold). It proves nothing useful.
The example may not prove anything useful, but it did something useful for me. It reminded me that 1) we don’t have a single perfect-for-all-situations definition of utility. and 2) our intuition often leads us astray.
We need to look at it purely in terms of numbers, only if we assume that we’re maximizing hedons (or whatever Omega will double). But why should we assume that?
Let’s go back to the beginning of this problem. Suppose for simplicity’s sake we choose only between playing once, and playing until we die (these two alternatives were the ones discussed the most). In the latter case we die with very high probability, quite soon. Now I, personally, prefer in such a case not to play at all. Why? Well, I just do—it’s fundamental to my desires not to want to die in an hour no matter what the gain in happiness during that hour.
This is how I’d actually behave, and I assume many other people as well. I don’t have to explain this fact by inventing a utility function that is maximized by not playing. Even if I don’t understand myself why I’d choose this, I’m very sure that I would.
Utilons and hedons are models that are supposed to help explain human behavior, but if they don’t fit it, it’s the models that are wrong. (This is related to the fact that I’m not sure anymore what utilons are exactly, as per my comment above.)
If we were designing a new system to achieve a goal, or even modifying humans towards a given goal, then it might be best to build maximizers of something. But if we’re analyzing actual human behavior, which is how the thread about Omega’s game got started, there’s no reason to assume that humans maximize anything. If we insist on defining human behavior as maximizing hedons (and/or utilons), it follows that hedons do not behave numerically, and so are quite confusing.
|there’s no reason to assume that humans maximize anything. If we insist on defining human behavior as maximizing hedons (and/or utilons), it follows that hedons do not behave numerically, and so are quite confusing.
In theory, any behavior can be described as a maximization of some function. The question is when this is useful and when it isn’t.
We’re modeling rational behavior, not human behavior.
It seems to me that we’re talking about both things in this thread. But I’m pretty sure this post is about analyzing human behavior… Why else does it give examples of human behavior as anecdotal proof of certain models?
I understand that utilons arise from discussions of rational goal-seeking behavior. I still think that they don’t necessarily apply to human (arational) behavior.
I think we’re doing both, and for good reason. Modeling rational behavior and actual behavior are both useful. You are right to point out that confusion about what we are modeling is rampant here though.
Assume you are indifferent towards buying Chocolate Bar A at $1 per bar. How much would you pay for a chocolate bar that is 3.25186 times as delicious? What about one that is 12.35 times as delicious? 2^60 times as delicious? What if you were really, really hungry, so much so that vaguely edible dirt would be delicious. Would that 3.25186 remain significant to 5 decimal places, or might it change slightly?
It is not an argument; it is evidence. I cannot measure how many hedons I am experiencing now. I can kind of compare it to how many hedons I’ve experienced at times in the past, but it would be difficult. I certainly couldn’t say I’m experiencing 10% less hedonic pleasure than my average day, 20% more than I did yesterday, and 45% less than my happiest day ever. The fact that hedons do not appear to yield to simple quantification is why I cannot imagine doubling my hedons. This fact also suggests that “double your hedons” is not a meaningful, or even possible operation, much as it seems meaningless to say that a chocolate bar is 3.873 times as tasty as another chocolate bar; at best I could say it’s better or worse.
Expecting a chocolate bar that is “twice as delicous” to be worth twice as many hedons, and then thinking that is a problem with hedons, is the same mistake as expecting 2X dollars to have twice the utility of X dollars. It is a common mistake; but it has been explained many times on LW lately. Hedons, like utilons, are defined in a way that accounts for scaling effects. If you are committed to expectation maximization, then utilons are defined such that you will prefer a 50% chance of 2X utilons + epsilon to X utilons.
EDIT: Folks, if this comment gets a −3, we have a serious problem. You can’t participate in a lot of the discussions on LW if you don’t understand this point. Apparently, most LW readers don’t understand this point. (Unless they are voting it down because they think I am misinterpreting Psychohistorian.)
Please explain your objections.
Wow. I never said this. Not even “I kind of said this, and you took it out of context.” I just plain never claimed anything about the hedonic value of deliciousness, and I never said anything about a doubly delicious chocolate bar being worth double hedons, double dollars, double utilons, or double anything. Moreover, this is unrelated to my point.
My point was that deliciousness isn’t properly quantifiable. You don’t know how many dollars you’d pay to double your experienced deliciousness, because you don’t even know what that would mean. Omega can tell me that a chocolate bar will be twice as delicious, but I can’t sample chocolate bars and tell myself which one, if any, was twice as delicious as the first. I have absolutely no way of estimating what it would be like to double the deliciousness of my experience, and if I did double the deliciousness of my experience, I wouldn’t know it unless Omega told me so.
This is a very, very big problem. That I have never experienced multiplying deliciousness by a scalar and cannot imagine experiencing such is evidence that “twice as delicious” cannot reasonably modify “chocolate bar,” or anything else for that matter. The same seems to be true of hedons; you’d need Omega to tell you precisely how many hedons you’ve gotten today as compared to yesterday. Obviously though, you don’t need Omega to tell you if you have 20% more dollars than you did yesterday.
Except immediately above, in the passage we are both talking about, when you said:
Either that was a statement implying that hedons are in invalid concept because it doesn’t make sense to talk about being “twice as delicious” without accounting for other factors; or else it had nothing to do with what followed.
Your point still makes the same mistake. You don’t have to presently know what twice as many hedons will feel like, or what twice as delicious will taste like. You know that some things are more pleasurable than others. The problem is defined so that Omega can be trusted to double your hedons, or utilons. So stop saying “I can’t imagine doubling my hedons” or anything like that. It doesn’t matter.
If you meant that you are cognitively incapable of experience twice the utility without losing your identity, that may be a valid objection. But AFAIK you’re not making that objection.
I seem to have missed some context for this, I understand that once you’ve gone down the road of drawing the cards, you have no decision-theoretic reason to stop, but why would I ever draw the first card?
A mere doubling of my current utilons measured against a 10% chance of eliminating all possible future utilons is a sucker’s bet. I haven’t even hit a third of my expected lifespan given current technology, and my rate of utilon acquisition has been accelerating. Quite aside from the fact that I’m certain my utility function includes terms regarding living a long time, and experiencing certain anticipated future events.
If you accept that you’re maximizing expected utility, then you should draw the first card, and all future cards. It doesn’t matter what terms your utility function includes. The logic for the first step is the same as for any other step.
If you don’t accept this, then what precisely do you mean when you talk about your utility function?
Actually, on rethinking, this depends entirely on what you mean by “utility”. Here’s a way of framing the problem such that the logic can change.
Assume that we have some function V(x) that maps world histories into (non-negative*) real-valued “valutilons”, and that, with no intervention from Omega, the world history that will play out is valued at V(status quo) = q.
Omega then turns up and offers you the card deal, with a deck as described above: 90% stars, 10% skulls. Stars give you double V(star)=2c, where c is the value of whatever history is currently slated to play out (so c=q when the deal is first offered, but could be higher than that if you’ve played and won before). Skulls give you death: V(skull)=d, and d < q.
If our choices obey the vNM axioms, there will be some function f(x), such that our choices correspond to maximising E[f(x)]. It seems reasonable to assume that f(x) must be (weakly) increasing in V(x). A few questions present themselves:
Is there a function, f(x), such that, for some values of q and d, we should take a card every time one is offered?
Yes. f(x)=V(x) gives this result for all d<q. This is the standard approach.
Is there a function, f(x), such that, for some values of q and d, we should never take a card?
Yes. Set d=0, q=1000, and f(x) = ln(V(x)+1). The card gives expected vNM utility of 0.9ln(2001)~6.8, which is less than ln(1001)~6.9.
Is there a function, f(x), such that, for some values of q and d, we should take some finite number of cards then stop?
Yes. Set d=0, q=1, and f(x) = ln(V(x)+1). The first time you get the offer, its expected vNM utility is 0.9ln(3)~1 which is greater than ln(2)~0.7. But at the 10th time you play (assuming you’re still alive), c=512, and the expected vNM utility of the offer is now 0.9ln(1025)~6.239, which is less than ln(513)~6.240.
So you take 9 cards, then stop. (You can verify for yourself that the 9th card is still a good bet; see the sketch below.)
* This is just to ensure that doubling your valutilons cannot make you worse off, as would happen if they were negative. It should be possible to reframe the problem to avoid this, but let’s stick with it for now.
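A quick numerical check of that last example (a sketch only; the 90/10 deck, f(x) = ln(V(x)+1), d = 0 and q = 1 are all taken from the comment above):

```python
from math import log

def should_draw(c: float, d: float = 0.0) -> bool:
    """Draw iff the gamble's expected vNM utility beats standing pat,
    using f(x) = ln(V(x) + 1)."""
    f = lambda v: log(v + 1)
    return 0.9 * f(2 * c) + 0.1 * f(d) > f(c)

c, cards_taken = 1.0, 0
while should_draw(c):
    cards_taken += 1
    c *= 2  # suppose we keep winning

print(cards_taken)  # 9: the 9th card is still a good bet, the 10th is not
```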
Redefining “utility” like this doesn’t help us with the actual problem at hand: what do we do if Omega offers to double the f(x) which we’re actually maximizing?
In your restatement of the problem, the only thing we assume about Omega’s offer is that it would change the universe in a desirable way (f is increasing in V(x)). Of course we can find an f such that a doubling in V translates to adding a constant to f, or if we like, even an infinitesimal increase in f. But all this means is that Omega is offering us the wrong thing, which we don’t really value.
It wasn’t intended to help with the the problem specified in terms of f(x). For the reasons set out in the thread beginning here, I don’t find the problem specified in terms of f(x) very interesting.
You’re assuming the output of V(x) is ordinal. It could be cardinal.
I’m afraid I don’t understand what you mean here. “Wrong” relative to what?
Eh? Valutilons were defined to be something we value (ETA: each of us individually, rather than collectively).
I guess what I’m suggesting, in part, is that the actual problem at hand isn’t well-defined, unless you specify what you mean by utility in advance.
You take cards every time, obviously. But then the result is tautologically true and pretty uninteresting, AFAICT. (The thread beginning here has more on this.) It’s also worth noting that there are vNM-rational preferences for which Omega could not possibly make this offer (f(x) bounded above and q greater than half the bound.)
That’s only true given a particular assumption about what the output of V(x) means. If I say that V(x) is, say, a cardinally measurable and interpersonally comparable measure of my well-being, then Omega’s offer to double means rather more than that.
“Wrong” relative to what? Omega offers whatever Omega offers. We can specify the thought experiment any way we like if it helps us answer questions we are interested in. My point is that you can’t learn anything interesting from the thought experiment if Omega is offering to double f(x), so we shouldn’t set it up that way.
Eh? “Valutilons” are specifically defined to be a measure of what we value.
Utility means “the function f, whose expectation I am in fact maximizing”. The discussion then indeed becomes whether f exists and whether it can be doubled.
That was the original point of the thread where the thought experiment was first discussed, though.
The interesting result is that if you’re maximizing something you may be vulnerable to a failure mode of taking risks that can be considered excessive. This is in view of the original goals you want to achieve, to which maximizing f is a proxy—whether a designed one (in AI) or an evolved strategy (in humans).
If “we” refers to humans, then “what we value” isn’t well defined.
There are many definitions of utility, of which that is one. Usage in general is pretty inconsistent. (Wasn’t that the point of this post?) Either way, definitional arguments aren’t very interesting. ;)
Your maximand already embodies a particular view as to what sorts of risk are excessive. I tend to the view that if you consider the risks demanded by your maximand excessive, then you should either change your maximand, or change your view of what constitutes excessive risk.
Yes, that was the point :-) On my reading of OP, this is the meaning of utility that was intended.
Yes. Here’s my current take:
The OP argument demonstrates the danger of using a function-maximizer as a proxy for some other goal. If there can always exist a chance to increase f by an amount proportional to its previous value (e.g. double it), then the maximizer will fall into the trap of taking ever-increasing risks for ever-increasing payoffs in the value of f, and will lose with probability approaching 1 in a finite (and short) timespan.
This qualifies as losing if the original goal (the goal of the AI’s designer, perhaps) does not itself have this quality. This can be the case when the designer sloppily specifies its goal (chooses f poorly), but perhaps more interesting/vivid examples can be found.
To expand on this slightly, it seems like it should be possible to separate goal achievement from risk preference (at least under certain conditions).
You first specify a goal function g(x) designating the degree to which your goals are met in a particular world history, x. You then specify another (monotonic) function, f(g) that embodies your risk-preference with respect to goal attainment (with concavity indicating risk-aversion, convexity risk-tolerance, and linearity risk-neutrality, in the usual way). Then you maximise E[f(g(x))].
If g(x) is only ordinal, this won’t be especially helpful, but if you had a reasonable way of establishing an origin and scale it would seem potentially useful. Note also that f could be unbounded even if g were bounded, and vice-versa. In theory, that seems to suggest that taking ever increasing risks to achieve a bounded goal could be rational, if one were sufficiently risk-loving (though it does seem unlikely that anyone would really be that “crazy”). Also, one could avoid ever taking such risks, even in the pursuit of an unbounded goal, if one were sufficiently risk-averse that one’s f function were bounded.
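As an illustration of that separation (a sketch; the gamble, the g values, and the concave f below are all made up for the example):

```python
from math import log

def expected_f_of_g(lottery, f):
    """lottery: list of (probability, g_value) pairs; returns E[f(g(x))]."""
    return sum(p * f(g) for p, g in lottery)

safe  = [(1.0, 50.0)]                # guaranteed moderate goal attainment
risky = [(0.5, 0.0), (0.5, 110.0)]   # higher expected g, but might achieve nothing

risk_neutral = lambda g: g           # linear f: just maximise expected goal attainment
risk_averse  = lambda g: log(g + 1)  # concave f: penalise variance in attainment

print(expected_f_of_g(risky, risk_neutral) > expected_f_of_g(safe, risk_neutral))  # True
print(expected_f_of_g(risky, risk_averse)  > expected_f_of_g(safe, risk_averse))   # False
```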
P.S.
You’re probably right.
Crap. Sorry about the delete. :(
Note however, that there is no particular reason that one needs to maximise expected utilons.
The standard axioms for choice under uncertainty imply only that consistent choices over gambles can be represented as maximizing the expectation of some function that maps world histories into the reals. This function is conventionally called a utility function. However, if (as here) you already have another function that maps world histories into the reals, and happen to have called this a utility function as well, this does not imply that your two utility functions (which you’ve derived in completely different ways and for completely different purposes) need to be the same function. In general (and as I’ve tried, with varying degrees of success, to point out elsewhere) the utility function describing your choices over gambles can be any positive monotonic transform of the latter, and you will still comply with the Savage-vNM-Marschak axioms.
All of which is to say that you don’t actually have to draw the first card if you are sufficiently risk averse over utilons (at least as I understand Psychohistorian to have defined the term).
Thanks! You’re the first person who’s started to explain to me what “utilons” are actually supposed to be under a rigorous definition and incidentally why people sometimes seem to be using slightly different definitions in these discussions.
How is consistency defined here?
You can learn more from e.g. the following lecture notes:
B. L. Slantchev (2008). “Game Theory: Preferences and Expected Utility”. (PDF)
Briefly, as requiring completeness, transitivity, continuity, and (more controversially) independence. Vladimir’s link looks good, so check that for the details.
I will when I have time tomorrow, thanks.
I see, I misparsed the terms of the argument, I thought it was doubling my current utilons, you’re positing I have a 90% chance of doubling my currently expected utility over my entire life.
The reason I bring up the terms in my utility function, is that they reference concrete objects, people, time passing, and so on. So, measuring expected utility, for me, involves projecting the course of the world, and my place in it.
So, assuming I follow the suggested course of action, and keep drawing cards until I die, to fulfill the terms, Omega must either give me all the utilons before I die, or somehow compress the things I value into something that can be achieved in between drawing cards as fast as I can. This either involves massive changes to reality, which I can verify instantly, or some sort of orthogonal life I get to lead while simultaneously drawing cards, so I guess that’s fine.
Otherwise, given the certainty that I will die essentially immediately, I certainly don’t recognize that I’m getting a 90% chance of doubled expected utility, as my expectations certainly include whether or not I will draw a card.
I don’t think “current utilons” makes that much sense. Utilons should be for a utility function, which is equivalent to a decision function, and the purpose of decisions is probably to influence the future. So utility has to be about the whole future course of the world. “Currently expected utilons” means what you expect to happen, averaged over your uncertainty and actual randomness, and this is what the dilemma should be about.
“Current hedons” certainly does make sense, at least because hedons haven’t been specified as well.
Like Douglas_Knight, I don’t think current utilons are a useful unit.
Suppose your utility function behaves as you describe. If you play once (and win, with 90% probability), Omega will modify the universe in a way that all the concrete things you derive utility from will bring you twice as much utility, over the course of the infinite future. You’ll live out your life with twice as much of all the things you value. So it makes sense to play this once, by the terms of your utility function.
You don’t know, when you play your first game, whether or not you’ll ever play again; your future includes both options. You can decide, for yourself, that you’ll play once but never again. It’s a free decision both now and later.
And now a second has passed and Omega is offering a second game. You remember your decision. But what place do decisions have in a utility function? You’re free to choose to play again if you wish, and the logic for playing is the same as the first time around...
Now, you could bind yourself to your promise (after the first game). Maybe you have a way to hardwire your own decision procedure to force something like this. But how do you decide (in advance) after how many games to stop? Why one and not, say, ten?
OTOH, if you decide not to play at all—would you really forgo a one-time 90% chance of doubling your lifelong future utility? How about a 99.999% chance? The probability of death in any one round of the game can be made as small as you like, as long as it’s finite and fixed for all future rounds. Is there no probability at which you’d take the risk for one round?
Why on earth wouldn’t I consider whether or not I would play again? Am I barred from doing so?
If I know that the card game will continue to be available, and that Omega can truly double my expected utility every draw, then either the doubling applies only to the relatively insignificant expected utility of the next few minutes it takes me to die, in which case drawing is a foolish bet compared to my expected utility over the decades I conservatively have left, or Omega can somehow change the whole world in the radical fashion needed for my expected utility over those few minutes to dwarf my expected utility right now.
This paradox seems to depend on the idea that the card game is somehow excepted from the 90% likely doubling of expected utility. As I mentioned before, my expected utility certainly includes the decisions I’m likely to make, and it’s easy to see that continuing to draw cards will result in my death. So, it depends on what you mean. If it’s just doubling expected utility over my expected life IF I don’t die in the card game, then it’s a foolish decision to draw the first or any number of cards. If it’s doubling expected utility in all cases, then I draw cards until I die, happily forcing Omega to make verifiable changes to the universe and myself.
Now, there are terms on which I would take the one round, in the version of the gamble where the doubling is conditional on surviving the card game, but it would probably depend on how it’s implemented. I don’t have a way of accessing my utility function directly, and my ability to appreciate maximizing it is indirect at best. So I would be very concerned about the way Omega plans to double my expected utility, and how I’m meant to experience it.
In practice, of course, any doubt about whether it’s really Omega offering this gamble far outweighs the possibility of such lofty returns, but the thought experiment has some interesting complexities.
This, again, depends on what you mean by “utility”. Here’s a way of framing the problem such that the logic can change.
Assume that we have some function V(x) that maps world histories into (non-negative*) real-valued “valutilons”, and that, with no intervention from Omega, the world history that will play out is valued at V(status quo) = q.
Then Omega turns up and offers you the card deal, with a deck as described above: 90% stars, 10% skulls. Stars double your value: V(star) = 2c, where c is the value of whatever history is currently slated to play out (so c = q when the deal is first offered, but could be higher than that if you’ve played and won before). Skulls give you death: V(skull) = d, with d < q.
If our choices obey the vNM axioms, there will be some function f(x), such that our choices correspond to maximising E[f(x)]. It seems reasonable to assume that f(x) must be (weakly) increasing in V(x). A few questions present themselves:
Is there a function, f(x), such that, for some values of q and d, we should take cards every time this bet is offered?
Yes. f(x) = V(x) gives this result for all d < q.
Is there a function, f(x), such that, for some values of q and d, we should never take the bet?
Yes. Set d = 0, q = 1000, and f(x) = ln(V(x) + 1). The offer gives vNM utility of 0.9 ln(2001) ≈ 6.8, which is less than ln(1001) ≈ 6.9.
Is there a function, f(x), such that, for some values of q and d, we should take cards for some finite number of offers, and then stop?
Yes. Set d = 0, q = 1, and f(x) = ln(V(x) + 1). The first time you get the offer, its vNM utility is 0.9 ln(3) ≈ 1, which is greater than ln(2) ≈ 0.7. But at the 10th time you play (assuming you’re still alive, having won nine times, so c = 2^9 = 512), the vNM utility of the offer is 0.9 ln(1025) ≈ 6.239, which is less than ln(513) ≈ 6.240. So you play up until the 10th offer, then stop.
* This is just to ensure that doubling your valutilons cannot make you worse off, as would happen if they were negative. It should be possible to reframe the problem to avoid this, but let’s stick with this for now.
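For what it’s worth, a quick sketch (Python, using the same numbers as the third example: q = 1, d = 0, f(x) = ln(V(x) + 1)) confirms the stopping point; the code itself is my own illustration, not part of the original setup.

```python
import math

# Keep drawing while the gamble beats standing pat, under f(x) = ln(V(x) + 1).
def should_draw(c, d=0.0, p_win=0.9):
    f = lambda v: math.log(v + 1)
    return p_win * f(2 * c) + (1 - p_win) * f(d) > f(c)

c, draws = 1.0, 0
while should_draw(c):
    c *= 2          # a star doubles the value of the slated history
    draws += 1
print(draws)        # 9: the 10th offer is the first one declined
```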
In ethical and axiological matters, it is an argument.
If Omega alters your mind so that you can experience “doubled utility”, and you choose not to identify with the resultant creature, then Omega has killed you.
I can’t imagine any situation in which “I can’t imagine that” is an acceptable argument. QED.
And thus, the alcoholic who wishes to sober up, but is unable, dies with every slug of cheap cider!
It’s not an argument at all. Otherwise the concept of utilons as a currency with any... currency is nonsense.
I don’t understand. Can you make this point clearer?
Somewhat off-topic, but: Many people do many things that they have previously wished not to do, through coercion or otherwise. And when asked ‘are you still you’ most would probably answer in the affirmative.
If Omega doubled your fun-points and asked you if you were still you, you would say yes. Why would you-now be right and you-altered be wrong?
The concept of a currency of utility is very counterintuitive. It’s not how we feel utility. However, if we’re to shut up and calculate (which we probably should) then ‘I can’t imagine twice the utility’ isn’t a smart response.
I don’t know. But I do know for sure that if Omega doubled them 60 times, the resultant being wouldn’t be me.
At which doubling would you cease being you? Or would it be an incremental process? What function links ‘number of doublings’ to ‘degree of me-ness’?
I don’t think we’re going anywhere useful with this. But I do know that if you insist too strictly on continuous personal identity and what it means, you start running into all sorts of paradoxes.
But that doesn’t mean that we should just give up on personal identity. The utility function is not up for grabs, as they say: if I consider it integral to my utility function that I don’t get significantly altered, then no amount of logical argument ought to persuade me otherwise.
I think you need a minus sign in there
It’s there—it’s the fifth character.
I was thinking of putting another one in, to change
10^165
into
10^-165
Right you are.
In public policy discussions, that’s true. In private conversations with individuals, I’ve heard that reason more than any other.
Depending on your purpose, I think it’s probably useful to distinguish between self-regarding and other-regarding utilons as well. A consequentialist moral theory may want to maximise the (weighted) sum of (some transform of) self-regarding utilons, but to exclude other-regarding utilons from the maximand (to avoid “double-counting”).
The other interesting question is: what does it actually mean to “value” something?
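To make the double-counting worry concrete, here is a toy sketch (Python; the agents, weights, and numbers are all made up for the illustration):

```python
# Hypothetical two-person illustration of double-counting other-regarding utilons.
self_u = {"alice": 10.0, "bob": 4.0}      # self-regarding utilons
care = {"alice": 0.5, "bob": 0.5}         # weight each puts on the other

def total_u(person, other):
    return self_u[person] + care[person] * self_u[other]

welfare_self_only = sum(self_u.values())                                # 14.0
welfare_with_other = total_u("alice", "bob") + total_u("bob", "alice")  # 21.0
# In the second sum, Bob's 4 self-regarding utilons are counted once directly
# and again (scaled by 0.5) inside Alice's other-regarding term, and likewise
# for Alice's 10; this is the double-counting a maximand might want to avoid.
print(welfare_self_only, welfare_with_other)
```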
In what way are hedons anything other than a subset of utilons? Please clarify.
Increasing happiness is a part of human utility, it just isn’t all of it. This post doesn’t really make sense because it is arguing Superset vs Subset.
Hedons won’t be a subset of utilons if we happen not to value all hedons. One might not value hedons that arise out of false beliefs, for example. (From memory, I think Lawrence Sumner is a proponent of a view something like this.)
NB: Even if hedons were simply a subset of utilons, I don’t quite see how that would mean that this post “doesn’t really make sense”.
Ah, I see! Thank you, that helps.
RE:NB Reading hedons as a subset of utilons, phrases like “maximize our hedons at the expense of our utilons” didn’t make sense to me.
The x that maximizes f(x) might not maximize f(x)+g(x).
One need not care about all hedons (or any), or care about them linearly.
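A minimal numerical illustration of the f(x) versus f(x) + g(x) point (the functions are arbitrary stand-ins, not anything from the post):

```python
# The x that maximizes the hedon term alone need not maximize the full sum.
f = lambda x: -(x - 1) ** 2        # stand-in for hedons, peaked at x = 1
g = lambda x: 4 * x                # stand-in for everything else valued
xs = [i / 100 for i in range(0, 501)]
best_f = max(xs, key=f)                            # 1.0
best_total = max(xs, key=lambda x: f(x) + g(x))    # 3.0
print(best_f, best_total)
```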
What sets? What subsets? You can’t throw around concepts like this without clarification and expect them to make sense.
Re: I’m going to use “utilons” to refer to value utility units and “hedons” to refer to experiential utility units.
This seems contrary to the usage of the LessWrong Wiki:
http://wiki.lesswrong.com/wiki/Utilon
http://wiki.lesswrong.com/wiki/Hedon
The Wiki has the better usage—much better usage.
To avoid confusion, I think I’m going to refer to Psychohistorian’s utilons as valutilons from now on.
Then what’s the difference between “pleasure unit” and “experiential utility unit”?
We can experience things other than pleasure.
Yeah, I’m pretty sure my usage is entirely consistent with the wiki usage, if not basically identical.
Interesting, I’d assumed your definitions of utilon were subtly different, but perhaps I was reading too much into your wording.
The wiki definition focuses on preference: utilons are the output of a set of vNM-consistent preferences over gambles.
Your definition focuses on “values”: utilons are a measure of the extent to which a given world history measures up according to your values.
These are not necessarily inconsistent, but I’d assumed (perhaps wrongly) that they differed in two respects.
Preferences are simply a binary relation, which does not allow degrees of intensity. (I can rank A > B, but I can’t say that I prefer A twice as much as B.) In contrast, the extent to which a world measures up to our values does seem to admit of degrees. (It could make sense for me to say that I value A twice as much as I value B.)
The preferences in question are over gambles over world histories, whereas I assumed that the values in question were over world histories directly.
I’ve started calling what-I-thought-you-meant “valutilons”, to avoid confusion between that concept and the definition of utilons that seems more common here (and which is reflected in the wiki). We’ll see how that goes.
Wiki says: hedons are “Utilons generated by fulfilling base desires”.
Article says: hedons are “experiential utility units”. Seems different to me.
If you are still talking about Hedons and Utilons—and if we go by the wiki, then no difference—since Hedons are a subset of Utilons, and are therefore measured in the same units.
Not true. Even according to the wiki’s usage.
What the Wiki says is: “Utilons generated by fulfilling base desires are hedons”. I think it follows from this that Utilons and Hedons are measured in the same units.
I don’t much like the Wiki on these issues—but I do think it a better take on the definitions than this post.
I was objecting to the subset claim, not the claim about unit equivalence. (Mainly because somebody else had just made the same incorrect claim elsewhere in the comments to this post.)
As it happens, I’m also happy to object to the claim about unit equivalence, whatever the wiki says. (On what seems to be the most common interpretation of utilons around these parts, they don’t even have a fixed origin or scale: the preference orderings they represent are invariant to positive affine transforms of the utilons.)
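As a small sketch of what that lack of a fixed origin or scale amounts to (the gambles and the utility function here are arbitrary examples of my own):

```python
# A positive affine transform of a utility function (here 3*u + 7, chosen
# arbitrarily) induces the same ranking over a few example gambles.
gambles = {
    "sure_thing": [(1.0, 5.0)],
    "coin_flip":  [(0.5, 0.0), (0.5, 12.0)],
    "long_shot":  [(0.9, 1.0), (0.1, 40.0)],
}

def expected(u, gamble):
    return sum(p * u(x) for p, x in gamble)

u = lambda x: x ** 0.5        # some arbitrary utility over outcomes
v = lambda x: 3 * u(x) + 7    # positive affine transform of u

rank_u = sorted(gambles, key=lambda g: expected(u, gambles[g]))
rank_v = sorted(gambles, key=lambda g: expected(v, gambles[g]))
print(rank_u == rank_v)       # True: same preference ordering
```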
My original claim was about what the Wiki says. Outside that context we would have to start by stating definitions of Hedons and Utilons before there could be much in the way of sensible conversation.
I’m not convinced by your examples that people generally value utilons over hedons.
For your first example, you feel like you (and others, by generalization) would reject Omega’s deal, but how much can you trust this self-prediction? Especially given that this situation will never occur, you don’t have much incentive to predict correctly if the answer isn’t flattering.
For the drug use example, I can think of many other possible reasons that people would oppose drugs other than valuing utilons over hedons. Society might be split into two groups: drug-lovers and non-drug-lovers. If non-drug-lovers have more power, then the individually-maximizing non-drug-lovers will make sure that drugs are illegal, even if the net hedonic benefit of legalizing drugs is positive.
That’s why my argument focuses on arguments surrounding legalization rather than on the law itself. There are many potential reasons why drugs remain illegal, from your argument to well-intentioned utilitarianism to big pharma. However, when you look at arguments for legalization, you seldom hear a public figure say, “But people really like getting high!” Similarly, if you’re hearing an argument for, say, abstinence-only sex ed, you never hear someone say, “But teenagers really like having sex!” Even with more “neutral” topics like a junk food tax, arguments like “I don’t want the government telling me what to eat” seem far more common than “But some people really like deep fried lard!” I am less sure of that example, though, and it is certainly less consistent than the other two. In general, though, you don’t see people arguing that hedons should be a meaningful factor in any policy, and I think this strongly indicates that our society does not assign a high value to the attaining of hedons in the way it assigns value to, say, being thin or being wealthy.
“Even with more ‘neutral’ topics like a junk food tax, arguments like ‘I don’t want the government telling me what to eat’ seem far more common than ‘But some people really like deep fried lard!’”
I think this is mostly rationalization:
In practice, we have a very strong drive toward pleasure and enjoyment, but our Judeo-Christian tradition (like most other religions, but let’s keep it simple) makes a sport of downplaying pleasure as a factor in human happiness, even making it into something dirty or at least suspicious.
Fortunately, when the Enlightenment came, it did not reestablish pleasure as a desirable goal, but it opened a great back door for rationalization: the very concept of freedom. The long ascetic tradition going back several thousand years had erected a strong barrier against publicly admitting this significant part of our driving force, so freedom was promoted instead. Of course, “freedom” is a very fuzzy word. It can refer to several more or less disconnected concepts, like independence from foreign powers, free practice of religion, personal liberties, and so on.
Still, “freedom” is also a wildcard for saying, “Don’t mess with my hedons!”
Of course, I won’t admit that I’m a softie who cares about all that nice, convenient, or exciting stuff, but don’t you dare dispute my freedom to do whatever I want! (Unless it harms someone else.)
So the concept of freedom is an ideal invention for our already irrational and hypocritical society: it allows public discussion to covertly recognize the value of individual pleasures by appealing to an established, noble, abstract concept that happens to be among the few keywords commanding immediate respect and unquestioned reverence.
I know I’ve read a number of economists doing utilitarian analyses of drug legalization that take into account the enjoyment people get from drugs. Jacob Sullum’s “Saying Yes” is basically a defense of drug use.
I argue in favor of keeping your damn dirty hands off my fatty food on the basis of my enjoyment of it. I also enjoy rock’n’roll, but don’t care much about sex’n’drugs (though I think those should be legal too).
How can you enjoy one without the others?
This is a good objection. I can see another reason why this is a poor example.
Our morals evolved in a society that (to begin with) had no Omegas. If you have an opportunity to hurt a lot of people and profit from it, it’s a very safe bet that someone will one day find out that you did it, and you will be punished proportionally. So our instincts (morals, whatever) tell us very strongly not to do this. The proposed secrecy is an added hint (to our subconscious thinking) that this action is not accepted by society, so it’s very dangerous.
Rejecting the proposal is unnecessary, excessive caution. If people were more rational, and more serious about maximizing hedons (rather than, say, concentrating on minimizing risk once a suitable lifelong level of hedons has been reached), then more people would accept Omega’s proposal!
“dead in an hour with P=~1-1.88*10^165” should probably have 10^(-165) so that P is just less than 1.
Why doesn’t this post show up under “new” anymore?
[And what possible reason did someone have for down-voting that question?]
It shows up for me...
If you downvoted the post, it wouldn’t show up for you, depending on your account settings.