Well, also part of it is that for most people, utility isn’t linear in money.
Imagine someone starving, on the verge of death. This offer is more or less their last chance, at this particular moment, to survive.
$500 with certainty means a high probability of immediate survival; $1 million at a 15% chance means roughly a 15% chance of survival.
The $500 can buy enough meals to buy enough time to get more help.
Again, not everyone is in this situation, obviously. But it is a simple construct to demonstrate that utility isn’t linear in money, and that picking the $500 can, at least in some cases, be rather more rational than the initial naive computation suggests. (Shut up and multiply UTILITIES by probabilities, rather than money by probability. :))
Having said all that, for most people in that study, picking the $500 was probably the wrong choice. :)
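The starving-person comparison can be sketched numerically. A minimal sketch, with a hypothetical all-or-nothing survival utility (the function name and the threshold are illustrative, not from any study):

```python
# Toy model of "multiply utilities, not dollars": utility here is the
# probability of survival, not the dollar amount.

def survival_utility(dollars):
    """Hypothetical utility: $500 is enough to survive now; money beyond that adds nothing."""
    return 1.0 if dollars >= 500 else 0.0

def expected_utility(outcomes):
    """outcomes: list of (probability, dollars) pairs."""
    return sum(p * survival_utility(x) for p, x in outcomes)

certain_500 = [(1.0, 500)]
lottery_1m  = [(0.15, 1_000_000), (0.85, 0)]

# Expected *money* strongly favors the lottery...
em_500 = sum(p * x for p, x in certain_500)   # 500
em_1m  = sum(p * x for p, x in lottery_1m)    # 150000

# ...but expected *utility* favors the certain $500.
assert expected_utility(certain_500) > expected_utility(lottery_1m)  # 1.0 > 0.15
```

Expected money says 150,000 vs. 500; expected utility says 1.0 vs. 0.15, which is the whole point of the example.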
Yeah. This assumption of linearity is annoyingly common; I wish more people were aware of its problems when constructing their various thought experiments. Not just with money, either.
I don’t think you can model my preferences with an expected-value computation based on a money → utility mapping.
E.g., I’d definitely prefer $100M@100% to any amount of money at less than 100% probability. Still, I’d prefer $101M@100% to $100M@100%.
I think my preference is quite defensible from a rational point of view, yet there is no real-valued money-to-utility mapping that could make it fit into an expected-utility-maximization framework.
Well, you can use money to do stuff that does have value to you. So while there isn’t a simple utility(money) computation, in principle one might have utility(money | current state of the world).
I.e., there’s a sufficiently broad set of things you can do with money that, over a very wide range of amounts, more money gives you more opportunity to bring reality into states higher in your preference ranking.
And wait… are you saying you’d prefer $100 million at probability = 1 to, say, $100 billion at probability = .99999?
Call me a chicken, but yes: I would not risk going away empty-handed even 1 time in 100,000 if I could have left with $100M.
This kind of super-cautious mindset can’t be modeled with any real-valued (money × current state of the world) → utility mapping.
The technical term is “risk-averse”, not “chicken”.
If you would trade a .99999 probability of $100M for a .99997 probability of $100B, then you’re correct—you have no consistent utility function, and hence you can be money-pumped by the Allais Paradox.
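The consistency claim here rests on the standard Allais gamble pairs. A sketch (amounts in $M; the gamble probabilities are the classic ones) showing that under any expected-utility representation, the preference within the first pair must agree with the preference within the second:

```python
# For ANY utility function u, preferring gamble 1A over 1B forces
# preferring 2A over 2B, because the EU differences are identical.

def eu(lottery, u):
    """Expected utility of a lottery given as (probability, amount) pairs."""
    return sum(p * u(x) for p, x in lottery)

g1a = [(1.00, 1)]                        # $1M for sure
g1b = [(0.89, 1), (0.10, 5), (0.01, 0)]  # mostly $1M, small shot at $5M
g2a = [(0.11, 1), (0.89, 0)]
g2b = [(0.10, 5), (0.90, 0)]

# Algebraically, eu(g1a,u) - eu(g1b,u) == eu(g2a,u) - eu(g2b,u)
# for every u, so the two preferences must point the same way.
# Spot-check with a few sample utility functions:
for u in (lambda x: x, lambda x: x ** 0.5, lambda x: min(x, 2)):
    assert abs((eu(g1a, u) - eu(g1b, u)) - (eu(g2a, u) - eu(g2b, u))) < 1e-9
```

Anyone who picks 1A over 1B but 2B over 2A therefore has no expected-utility representation, which is the opening the money-pump argument exploits.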
And as I’ve argued before, that only follows if a) the subject is given an arbitrarily large number of repeats of that choice, and b) their preference for one over the other is interpreted as their writing an arbitrarily large number of option contracts trading one for the other.
If—as is the case when people actually answer the Allais problem as presented—they merely show a one-shot preference for one over the other, it does not follow that they have an inconsistent utility function, or that they can be money-pumped. When you do the experiment again and again, you get the expected value. When you don’t, you don’t.
If making the “wrong” choice when presented with two high-probability, high-payoff lottery tickets is exploitation, I don’t want to be empowered. (You can quote me on that.)
This is what I’m thinking, too. Curious: since you say you’ve argued this before, did Eliezer ever address this argument anywhere?
Yes, but I can’t find it at the moment—it came up later, and apparently people do get money-pumped even on repeated versions. The point about what inferences you can draw from a one-shot stands though.
Not very quotable, but I may be tempted to do so anyway.
Aw, come on! Don’t you see? “If X is wrong, I don’t want to be right”, but then using exploitation and empowerment as the opposites instead?
Anyway, do you get the general point about how the money pump only manifests in multiple trials over the same person, which weren’t studied in the experiments, and how Eliezer_Yudkowsky’s argument subtly equates a one-time preference with a many-time preference for writing lots of option contracts?
Yep.
Rockin.
The above example admits no consistent (real-valued) utility function regardless of my 100M@.99999 vs. 100B@.99997 preference.
BTW, whatever that preference would be (I am a bit unsure, but I think I’d still take the $100M, as not doing so would triple my chances of losing it), I did not really get the conclusion of the essay. At least I could not follow why being money-pumped (according to that definition of “money-pumped”) is so undesirable from any rational point of view.
Yes it can: use the mapping U: money → utils such that U(x) is increasing for x < $100M (probably concave) and U(x) = C = const for x ≥ $100M. Then expected utility EU($100M@100%) = C·1 = C, and also EU($100B@90%) = C·0.9 < EU($100M@100%). But one consequence of the expected-utility representation is that now you must be indifferent between a 20% chance at $100M and a 20% chance at $100B.
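A minimal sketch of the capped mapping described above (the concave square-root shape below $100M is an illustrative assumption; any increasing concave function would do):

```python
# Capped utility: increasing (concave, via sqrt) below $100M,
# constant at C from $100M upward.

C = 100e6 ** 0.5  # utility ceiling, hit at $100M

def u(dollars):
    return dollars ** 0.5 if dollars < 100e6 else C

def eu(lottery):
    """Expected utility of (probability, dollars) pairs."""
    return sum(p * u(x) for p, x in lottery)

# The stated super-cautious preference IS representable:
assert eu([(1.0, 100e6)]) > eu([(0.90, 100e9), (0.10, 0)])  # C > 0.9*C

# ...but the representation then forces indifference between
# equal-probability shots at $100M and at $100B:
assert eu([(0.2, 100e6), (0.8, 0)]) == eu([(0.2, 100e9), (0.8, 0)])
```

So the capped U rescues the certainty preference at the cost of the indifference noted above.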
I also made the requirement that 101M@100% be preferred to 100M@100%.
Your utility function with U(x) = C for x ≥ $100M can’t satisfy that.
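The impossibility is worth spelling out. A sketch, assuming (as in the capped-utility comment) that the lottery’s other outcome is $0 with U($0) = 0:

```latex
% Preference 1: \$100M at certainty beats any amount at any probability p < 1:
\[
  U(\$100\mathrm{M}) > p \, U(x) \qquad \text{for all amounts } x \text{ and all } p < 1.
\]
% Setting x = \$101M and letting p \to 1 gives
\[
  U(\$100\mathrm{M}) \ge U(\$101\mathrm{M}).
\]
% Preference 2: \$101M at certainty beats \$100M at certainty:
\[
  U(\$101\mathrm{M}) > U(\$100\mathrm{M}).
\]
% The two inequalities contradict each other, so no real-valued U represents
% both preferences; equivalently, the continuity (Archimedean) axiom of
% expected-utility theory fails for them.
```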
As Vladimir Nesov pointed out, that is false—not the preference being expressed, of course, but the statement that the preference can’t be modeled with such a mapping.
Now first let me make it clear that I disapprove of the atmosphere you find in some academic science departments where making a false statement is taken to be a mortifying sin. That kind of attitude is a big barrier to teaching and to learning. Since teaching and learning is a big part of what we want to do here, we should not think poorly of a participant for making a false statement.
But I am a little worried that in 88 hours since the false statement was made, no one downvoted the false statement (or if they did, the vote was canceled out by an upvote). And I am a little worried that in the 81 hours since his reply, no one upvoted Nesov’s reply in which he explains why the statement is false. (I have just cast my votes on these 2 comments.)
It is good to have an atmosphere of respect for people even if they make a mistake, but it is bad IMHO when most readers ignore a false statement like the one we have here when there is no doubt about its falseness (it is not open to interpretation) and it involves knowledge central to the mission of the community (e.g., like the one we have here about the most elementary decision theory). Note that elementary decision theory is central to the rationality mission of Less Wrong and to the improve-the-global-situation-with-AI mission of Less Wrong.
Moreover, if you not only read a comment, but also decide to reply to it, well, then IMHO, you should take particular care to make sure you understand the comment, especially when the comment is as short and unnuanced as the one under discussion. But before Nesov’s reply, two people replied to the comment under discussion without showing any sign that they recognise that the one statement of fact made in the comment is false. One reply (upvoted 3 times) reads, ‘The technical term is “risk-averse”, not “chicken”’. The other introduces the Allais paradox, which is irrelevant to why the statement is false.
I do not mean to single out this comment and these 2 replies or the people who wrote them: the only reason I am drawing attention to them is to illustrate something that happens regularly. And I definitely realize that it probably happens a lot less here on Less Wrong than it does in any other conversation on the internet that ranges over as many subjects relevant to the human condition as the conversation on Less Wrong does. And a significant reason for that is the hard work Eliezer and others put into the development of the software behind the site.
But I suspect that one of the best opportunities for creating a conversation that is even better than the conversation we are all in right now is to make the response by the community to false statements (the kind not open to interpretation) more salient and more consistent. Wikipedia’s response to false statements gives me the impression of rising to the level of saliency and consistency I am talking about, but of course the software behind Wikipedia does not support conversation as well as the software behind Less Wrong does. (And more importantly but more subtly, Wikipedia is badly governed: much of the goodwill and reputation enjoyed by Wikipedia will probably be captured by the ideological and personal agendas of Wikipedia’s insiders.)
I disagree that false statements are the sorts of things that should be downvoted. I’m all about this being a place where people can happily be false and get corrected, and that means the “I want to see fewer comments like this” interpretation suggests that I should not downvote comments merely for containing falsehoods.
“I’m all about this being a place where people can happily be false and get corrected.”
I am, too, until the false statements start to drown out the relevant true information, so that the most rational readers decide to stop coming here, or until the volume of false statements overwhelms the community’s ability to respond to them. But, yeah, I am with you.
And you make me realize that downvoting is probably not the right response to a false statement. I just think that there should be a response that is not as demanding of the reader’s time and attention as reading the false statement, then reading the responses to the false statement. (Also, it would be nice to give a prospective responder a way to respond that is less demanding of their time than the only way currently available, namely, to compose a comment in reply to the false statement.)
My original statement was mathematically true. Maybe Vladimir was sloppy in reading it (his utility function satisfies only half of the requirements), but I would not downvote him for that.