I wrote a critical response to this and posted it to my drafts: http://lesswrong.com/lw/1ea/money_pumping_averted/
Summary: you’re ignoring the fact that utilities can change; it’s easy to construct a case where both receiving item A tomorrow and receiving item B tomorrow are preferable to an equal chance of receiving either tomorrow. You’re also ignoring the fact that we’re not mathematical oracles, which irritates me but doesn’t assist my point.
Once I feel more on top of myself, I’ll probably revise it and post it.
I’m not ignoring these facts as in “I don’t know them”. I’m ignoring them as in “simplifying them away” so as to get rigorous results. Rigorous results that can then be built upon in more general cases.
The big question is, why do we prefer A and B to the chance of either? There are some objective reasons for this that do not violate any of the axioms: take A to be a huge cake and B to be a video game, while you lack both a fridge (F) and a games console (GC), and these can only be bought today. Under the 50/50 lottery you’d have to buy them both, so your choices are A∪F or B∪GC tomorrow versus (A+B)/2 ∪ F ∪ GC, where ∪ denotes union and (A+B)/2 is the equal-chance lottery over A and B. Other situations can be modelled similarly.
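A minimal numerical sketch of this point, with all figures invented for illustration: once the cost of buying the appliances today is folded into each outcome, both certain options can beat the 50/50 lottery without any axiom being violated, because the lottery outcome is simply a different (and worse) bundle.

```python
# Toy numbers (all assumed) for the cake/console example above.
FRIDGE_COST = 30    # F: needed to enjoy the cake A
CONSOLE_COST = 30   # GC: needed to enjoy the video game B
CAKE_VALUE = 100    # value of A, given you own the fridge
GAME_VALUE = 100    # value of B, given you own the console

# Certain A tomorrow: only the fridge needs buying today.
u_certain_a = CAKE_VALUE - FRIDGE_COST                      # 70
# Certain B tomorrow: only the console needs buying today.
u_certain_b = GAME_VALUE - CONSOLE_COST                     # 70
# 50/50 lottery: you must buy both appliances today to be covered either way.
u_lottery = 0.5 * CAKE_VALUE + 0.5 * GAME_VALUE - FRIDGE_COST - CONSOLE_COST  # 40

print(u_certain_a, u_certain_b, u_lottery)  # 70 70 40.0
```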
In fact, whether you are money pumpable or not really comes down to how you model the situation:
Maybe uncertainty makes you nervous, and you lose happiness over this. Then if I act on these preferences, either I’m weakly money pumping you, or I’m objectively granting you a service by removing your worry. Most people at the time feel that I’m granting them a service, but afterwards they feel I money pumped them, especially if I repeat it.
Which is a note of caution against blindly applying the results of my post directly to the real world.
But if your utilities are simply changing for no valid reason, then you are completely money pumpable.
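A minimal sketch of what that pumping looks like, with the items, fee and flipping schedule all made up: an agent whose preference between A and B reverses for no reason will pay a small fee for every swap, and can be drained indefinitely.

```python
def run_money_pump(rounds=6, swap_fee=1):
    """Drain an agent whose A/B preference flips every round for no reason."""
    holding, wallet = "A", 0
    for t in range(rounds):
        preferred = "B" if t % 2 == 0 else "A"   # groundless preference reversal
        if preferred != holding:
            wallet -= swap_fee                    # the agent pays to switch items
            holding = preferred
    return wallet

print(run_money_pump())  # -6: strictly poorer, and no better off in items
```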
First assume humans are perfect spheres. Then assume they’re mathematical oracles.
Get results in this case.
Add the complexities of reality afterwards.
Does this mean we should start treating certain types of money pumping as payment for a service rather than something rational agents always avoid?
When Less Wrongers say that expected utility is the sole fundamental decision-making method used by practical rational agents (as opposed to ones that require impossible computational abilities), are they blindly applying the results of your post directly to the real world, or is there more to it?
AIXI, Bayes, Solomonoff, CEV. I’m ready for my complexities now.
This is part of the process of adding the complexities. Now you know precisely in what ways you can be exploited when you abandon which axiom. Yet people do abandon these axioms, and are only moderately exploited. Thus we can assume that real decision theories, when iterated in the presence of people trying to money pump you, tend approximately to the expected utility hypothesis.
This dramatically reduces the number of reasonable/probable decision theories out there.
The name of the service is “insurance”. This is a business in which customers repeatedly make bets that they wish they hadn’t made in retrospect, but it still makes sense to make the bet ex ante.
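A minimal sketch of that ex ante/ex post asymmetry, with invented figures and log utility standing in for risk aversion: the premium exceeds the expected loss, so in money terms the bet looks bad and is usually regretted afterwards, yet it has the higher expected utility before the fact.

```python
from math import log

WEALTH = 100_000
LOSS = 90_000        # a loss you could not comfortably absorb
P_LOSS = 0.01        # so the actuarially fair premium would be 900
PREMIUM = 1_500      # loaded premium actually charged

def u(wealth):       # log utility: a simple stand-in for risk aversion
    return log(wealth)

eu_uninsured = P_LOSS * u(WEALTH - LOSS) + (1 - P_LOSS) * u(WEALTH)
eu_insured = u(WEALTH - PREMIUM)   # the loss is covered, so wealth is certain

print(eu_insured > eu_uninsured)   # True: a losing bet in money, a winning one in utility
```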
Please forgive the nitpicking, but as an actuary I do try to make this point whenever I feel it’s helpful to do so:
Insurance is not betting. Insurance is removing variation and chance from your life, not introducing variation and chance to your life. A bet introduces risk where there was none before. Insurance removes risk where it already exists.
End of nitpicking.
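A minimal sketch of that distinction, with invented numbers: starting from the same pre-existing exposure, insuring it drives the variance of your wealth to zero, while adding an independent side bet strictly increases it.

```python
P_FIRE, FIRE_LOSS = 0.01, 80_000   # a risk you already carry
PREMIUM = 1_000                     # price of insuring it away
STAKE = 1_000                       # an even-odds side bet: win or lose STAKE

def variance(outcomes):             # outcomes: list of (probability, change in wealth)
    mean = sum(p * v for p, v in outcomes)
    return sum(p * (v - mean) ** 2 for p, v in outcomes)

baseline = [(P_FIRE, -FIRE_LOSS), (1 - P_FIRE, 0.0)]
insured = [(1.0, -PREMIUM)]        # a certain small cost, no remaining chance
plus_bet = [(P_FIRE * 0.5, -FIRE_LOSS + STAKE), (P_FIRE * 0.5, -FIRE_LOSS - STAKE),
            ((1 - P_FIRE) * 0.5, STAKE), ((1 - P_FIRE) * 0.5, -STAKE)]

for name, dist in [("uninsured", baseline), ("insured", insured), ("uninsured + bet", plus_bet)]:
    print(name, round(variance(dist)))   # insurance removes the spread, the bet adds to it
```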
That’s exactly the same as hedging bets.
Which is why people who hedge understand hedging as insurance (unlike the bet they are trying to hedge).
A good point to remember, and I’d say the most useful way to think of it.
The problematic word seems to be “bet,” and while I agree that most bets do increase variation, I feel like Chris/Stuart take “bet” to mean “an amount of money that pays returns when one outcome happens and not when another does.” This adequately captures both traditional bets (bets that something will happen, made because one believes the probability of it happening is higher than one’s betting partner believes it is) and insurance or hedging bets.
Agreed.
I work on prediction markets, so I see it all as bets, and am used to thinking that both participants in a purely financial trade can gain from it, even though many people on the outside of the deal see it as zero sum. Sometimes you increase your variance because you think it’s worth it for the higher expected return; other times you reduce your variance.
Actually, for most kinds of insurance, it makes no sense to make the bet at any point. Aggregating the risk over your lifetime, you’re better off not paying for the insurance (this doesn’t apply to insurance against major disasters).
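A minimal sketch of the aggregation argument, with made-up figures for a small, frequent risk (think gadget breakage): over a lifetime the expected uninsured cost sits far below the loaded premiums and the spread is modest, which is exactly the step that fails for rare, ruinous losses.

```python
YEARS = 40
P_LOSS = 0.05        # yearly chance of losing LOSS
LOSS = 200
PREMIUM = 25         # loaded premium; the actuarially fair price is only 10

insured_cost = YEARS * PREMIUM                                       # 1000, with certainty
expected_uninsured = YEARS * P_LOSS * LOSS                           # 400
std_uninsured = (YEARS * P_LOSS * (1 - P_LOSS) * LOSS ** 2) ** 0.5   # ~276

print(insured_cost, expected_uninsured, round(std_uninsured))
# Even a lifetime one standard deviation worse than average (~676) beats paying the
# premiums; a single ruinous loss, by contrast, cannot be smoothed out by aggregation.
```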
Is my post not an example of someone abandoning the axiom of independence and not being exploited?
I assume you mean “tend to something which is approximately the expected utility hypothesis”.