The problem is that you’re losing money doing it once.
Again, if suddenly being offered the choice of 1A/1B then 2A/2B as described here, but being “inconsistent”, is what you call “losing money”, then I don’t want to gain money!
If they are willing to trade A for B in a one-shot game, they shouldn’t be willing to pay more for A than for B in a one-shot game.
But that’s not what’s happening in the paradox. They’re (doing something isomorphic to) preferring A to B once and then p*B to p*A once. At no point do they “pay” more for B than A while preferring A to B. At no point does anyone make or offer the money-pumping trades with the subjects, nor have they obligated themselves to do so!
Consider Eliezer’s final remarks in The Allais Paradox (I link purely for the convenience of those coming in in the middle):
Suppose that at 12:00PM I roll a hundred-sided die. If the die shows a number greater than 34, the game terminates. Otherwise, at 12:05PM I consult a switch with two settings, A and B. If the setting is A, I pay you $24,000. If the setting is B, I roll a 34-sided die and pay you $27,000 unless the die shows “34”, in which case I pay you nothing.
Let’s say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00PM, you pay me a penny to throw the switch to B. The die comes up 12. After 12:00PM and before 12:05PM, you pay me a penny to throw the switch to A.
I have taken your two cents on the subject.
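For concreteness, the arithmetic behind that scenario can be checked in a few lines (a sketch; the payoffs and probabilities are exactly the ones in the quote above):

```python
# Expected values in the switch scenario quoted above.
P_LIVE = 34 / 100        # the 100-sided die shows 34 or less, so the game continues
P_WIN_B = 33 / 34        # the 34-sided die avoids "34"

# Before 12:00PM, nothing is resolved and the two settings are the 2A/2B gambles:
ev_A_before = P_LIVE * 24_000             # 8160.0
ev_B_before = P_LIVE * P_WIN_B * 27_000   # 8910.0 -> the 2B > 2A preference pays a penny to switch to B

# After the die comes up 12 (the game survives), they become the 1A/1B gambles:
ev_A_after = 24_000                       # certain payout
ev_B_after = P_WIN_B * 27_000             # ~26205.88

print(ev_A_before, ev_B_before, ev_A_after, ev_B_after)
```

Nothing about the gambles changes between the two switch-throws except conditioning on the first die, which is exactly the move the Axiom of Independence says should not reverse a preference; an agent with (1A > 1B) and (2B > 2A) pays a penny each way and ends at setting A, two cents poorer.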
You’re right insofar as Eliezer invokes the Axiom of Independence when he resolves the Allais Paradox using expected value; I do not yet see any way in which Stuart_Armstrong’s criteria rule out the preferences (1A > 1B) ∧ (2A < 2B). However, in the scenario Eliezer describes, an agent with those preferences either loses one cent or two cents relative to the agent with (1A > 1B) ∧ (2A > 2B).
Your preferences between A and B might reasonably change if you actually receive the money from either gamble, so that you have more money in your bank account now than you did before. However, that’s not what’s happening; the experimenter can use you as a money pump without ever actually paying out on either gamble.
Yes, I know that a money pump doesn’t involve doing the gamble itself. You don’t have to repeat yourself, but apparently, I do have to repeat myself when I say:
The money pump does require that the experimenter make actual further trades with you, not just imagine hypothetical ones. The subjects didn’t make these trades, and if they saw many more lottery tickets potentially coming into play, so as to smooth out returns, they would quickly revert to standard EU maximization, as predicted by Armstrong’s derivation.
“Potentially coming into play, so as to smooth out returns” requires that there be the possibility of the subject actually taking more than one gamble, which never happens. If you mean that people might get suspicious after the tenth time the experimenter takes their money and gives them nothing in return, and thereafter stop doing it, I agree with you; however, all this proves is that making the original trade was stupid, and that people are able to learn to not make stupid decisions given sufficient repetition.
“Potentially coming into play, so as to smooth out returns” requires that there be the possibility of the subject actually taking more than one gamble, which never happens.
The possibility has to happen, if you’re cycling all these tickets through the subject’s hands. What, are they fake tickets that can’t actually be used now?
There are factors that come into play when you get to do lots of runs, but aren’t present with only one run. A subject’s choice in a one-shot scenario does not imply that they’ll make the money-losing trades you describe. They might, but you would have to actually test it out. They don’t become irrational until such a thing actually happens.
“What, are they fake tickets that can’t actually be used now?”
No, they’re just the same tickets. There’s only ever one of each. If I sell you a chocolate bar, trade the chocolate bar for a bag of Skittles, buy the bag of Skittles, and repeat ten thousand times, this does not mean I have ten thousand of each; I’m just re-using the same ones.
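For what it’s worth, the cycle that preference reversals enable can be sketched in a few lines; the dollar figures here are hypothetical, chosen only to show the shape of the pump with one re-used ticket of each bet:

```python
# Money-pump sketch with a single physical ticket of each bet, re-used every cycle.
# The subject chooses A over B in a direct choice, yet prices B above A --
# the preference-reversal pattern. All prices are hypothetical.
price_A = 4.00   # subject's stated cash value for ticket A
price_B = 4.50   # subject's stated cash value for ticket B

experimenter_profit = 0.0
subject_holds = "B"

for _ in range(3):
    # 1. Offer a straight swap: the subject prefers A to B, so they hand over B.
    subject_holds = "A"
    # 2. Buy A back at the subject's own price for A.
    experimenter_profit -= price_A
    # 3. Sell them B at the subject's own price for B -- the same ticket as step 1.
    experimenter_profit += price_B
    subject_holds = "B"

print(experimenter_profit)   # fifty cents per trip around the cycle
```

The point of the sketch is only that each step is one the subject’s stated preferences endorse, yet the subject ends each cycle holding exactly what they started with, minus the price gap.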
“They might, but you would have to actually test it out. They don’t become irrational until such a thing actually happens.”
We did test it out, and yes, people did act as money pumps. See The Construction of Preference by Sarah Lichtenstein and Paul Slovic.
You can also listen to an interview with one of Sarah Lichtenstein’s subjects who refused to make his preferences consistent even after the money-pump aspect was explained:
http://www.decisionresearch.org/publications/books/construction-preference/listen.html

That is an incredible interview.
Admitting that the set of preferences is inconsistent but refusing to fix it is not so bad a conclusion—maybe he’d just make it worse (e.g., by raising the bid on B to 550). At times he seems to admit that the overall pattern is irrational (“It shows my reasoning process isn’t too good”). At other times, he doesn’t admit the problem, but I think you’re too harsh on him in framing it as refusal.
I may be misunderstanding, but he seems to say that the game doesn’t allow him to bid higher than 400 on B. If he values B higher than 400 (yes, an absurd mistake), but sells it for 401, merely because he wasn’t allowed to value it higher, then that seems to me to be the biggest mistake. It fits the book’s title, though.
Maybe he just means that his sense of math is that the cap should be 400, which would be the lone example of math helping him. He seems torn between two authority figures: the “rationality” of non-circular preferences and the unnamed math of expected values. I’m somewhat surprised that he doesn’t see them as the same oracle. Maybe he was scarred by childhood math teachers, and a lone psychologist can’t match that intimidation?
That sounds to me as though he is using expected utility to come up with his numbers, but doesn’t understand expected utility, so when asked which he prefers he uses some other emotional system.