Showing that it can’t be pumped just means that it’s consistent. It doesn’t mean it’s correct. Consistently wrong
choices cost utility, and are not rational.
To be clear: you mean that my choices somehow cost utility, even if they’re consistent?
I would greatly love an example that compares a plain Bayesian analysis with an interval analysis.
It’s a good idea. But at the moment I think more basic questions are in dispute.
Instead of using a prior probability for events, can we not use an interval of probabilities?
Intervals of probability seem to reduce to probability if you consider the origin of the interval. Suppose in the Ellsberg paradox that the proportion of blue and green balls was determined, initially, by a coin flip (or series of coin flips). In this view, there is no ambiguity at all, just classical probabilities—so you seem to posit some distinction based on how something was set up. Where do you draw the line; when does something become genuinely ambiguous?
The boots and the mother example can all be dealt with using standard Bayesian techniques (you take utility over worlds, and worlds with one boot are not very valuable, worlds with two are; and the memories of the kids are relevant to their happiness), and you can re-express what is intuitively an “interval of probability” as a Bayesian behaviour over multiple, non-independent bets.
To be clear: you mean that my choices somehow cost utility, even if they’re consistent?
You would pay to remove ambiguity. And ambiguity removal doesn’t increase expected utility, so Bayesian agents would outperform you in situations where some agents had ambiguity-reducing knowledge.
Suppose in the Ellsberg paradox that the proportion of blue and green balls was determined, initially, by a coin flip (or series of coin flips). In this view, there is no ambiguity at all, just classical probabilities
Correct.
Where do you draw the line
1) I have no reason to think A is more likely than B and I have no reason to think B is more likely than A
2) I have good reason to think A is as likely as B.
These are different of course. I argue the difference matters.
The boots and the mother example can all be dealt with using standard Bayesian techniques
Correct. See last paragraph of the post.
You would pay to remove ambiguity. And ambiguity removal doesn’t increase expected utility, so Bayesian agents would outperform you in situations where some agents had ambiguity-reducing knowledge.
If you mean something like: red has probability 1/3, and green has probability 1/3 “on average”, then I dispute “on average”—that is circular.
The advantage of a money pump or “Dutch book” argument is that you don’t need such assumptions to show that the behaviour in question is suboptimal. (Un)fortunately there is a gap between Bayesian reasoning and what money pump arguments can justify.
(Incidentally, if you determine the contents of the urn by flipping a coin for each of the 60 balls to determine whether it is green or blue, then this matters to the Bayesian too—this gives you the binomial prior, whereas I think most Bayesians would want to use the uniform prior by default. Doesn’t affect the first draw, but it would affect multiple draws.)
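To make the binomial-versus-uniform point concrete, here is a hypothetical Python sketch (the 60-ball urn and the two priors are as described above; the function name is mine): the first draw comes out at 1/2 under either prior, but the probability that a second draw is green, given a first green, differs.

```python
from fractions import Fraction
from math import comb

N = 60  # green-or-blue balls in the Ellsberg urn

def p_second_green_given_first_green(prior):
    """prior[g] = P(the urn contains exactly g green balls)."""
    p_first = sum(p * Fraction(g, N) for g, p in prior.items())
    p_both = sum(p * Fraction(g, N) * Fraction(g - 1, N - 1)
                 for g, p in prior.items())
    return p_both / p_first

# Uniform prior: every green-count 0..60 equally likely.
uniform = {g: Fraction(1, N + 1) for g in range(N + 1)}
# Binomial prior: each ball independently green on a fair coin flip.
binomial = {g: Fraction(comb(N, g), 2**N) for g in range(N + 1)}

print(p_second_green_given_first_green(uniform))   # 2/3
print(p_second_green_given_first_green(binomial))  # 1/2
```

Under the binomial prior the draws carry no information about each other (the conditional stays at 1/2, since each ball is coloured independently), whereas under the uniform prior a green first draw raises the probability of a second green to 2/3.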
Incidentally, if you determine the contents of the urn by flipping a coin for each of the 60 balls to determine whether it is green or blue, then this matters to the Bayesian too—this gives you the binomial prior, whereas I think most Bayesians would want to use the uniform prior by default. Doesn’t affect the first draw, but it would affect multiple draws
But it still remains that in many circumstances (such as single draws in this setup), there exists information that a Bayesian will find useless and an ambiguity-averter will find valuable. If agents have the opportunity to sell this information, the Bayesian will get a free bonus.
From a more financial perspective, the ambiguity-averter gives up the opportunity to be a market-maker: a Bayesian can quote a price and be willing to either buy or sell at that price (plus a small fee), whereas the ambiguity-averter’s required spread is pushed up by the ambiguity (so all other agents will shop with the Bayesian).
Also, the ambiguity-averter has to keep track of more connected trades than a Bayesian does. Yes, for shoes, whether other deals are offered becomes relevant; but trades that are truly independent of each other (in utility terms) can be treated so by a Bayesian but not by an ambiguity-averter.
But it still remains that in many circumstances (such as single draws in this setup), there exists information that a Bayesian will find useless and an ambiguity-averter will find valuable. If agents have the opportunity to sell this information, the Bayesian will get a free bonus.
How does this work, then? Can you justify that the bonus is free without circularity?
From a more financial perspective, the ambiguity-averter gives up the opportunity to be a market-maker: a Bayesian can quote a price and be willing to either buy or sell at that price (plus a small fee), whereas the ambiguity-averter’s required spread is pushed up by the ambiguity (so all other agents will shop with the Bayesian).
Sure. There may be circularity concerns here as well, though. Also, if one expects there to be a market for something, that should be accounted for. In the extreme case, I have no inherent use for cash; my utility consists entirely in the expected market.
Also, the ambiguity-averter has to keep track of more connected trades than a Bayesian does. Yes, for shoes, whether other deals are offered becomes relevant; but trades that are truly independent of each other (in utility terms) can be treated so by a Bayesian but not by an ambiguity-averter.
I also gave the example of risk-aversion though. If trades pay in cash, risk-averse Bayesians can’t totally separate them either. But generally I won’t dispute that the ideal use of this method is more complex than the ideal Bayesian reasoner.
I wonder if you can express your result in a simpler fashion… Model your agent as a combination of a buying agent and a selling agent. The buying agent will always pay less than a Bayesian, the selling agent will always sell for more. Hence (a bit of hand waving here) the combined agent will never lose money to a money pump. The problem is that it won’t pick up ‘free’ money.
How does this work, then? Can you justify that the bonus is free without circularity?
For two agents, I can.
Imagine a setup with two agents, otherwise identical, except that one owns a 1/2+-1/4 bet and the other owns 1/2. A government agency wishes to promote trade, and so will offer 0.1 to any agents that do trade (a one-off gift).
If the two agents are Bayesian, they will trade; if they are ambiguity averse, they won’t. So the final setup is strictly identical to the starting one (two identical agents, one owning 1/2+-1/4, one owning 1/2) except that the Bayesians are each 0.1 richer.
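A minimal sketch of this setup, under an assumption the discussion leaves open (ambiguity aversion modelled as the maxmin rule, valuing a bet whose probability lies anywhere in [1/4, 3/4] at its worst case; the function names are mine):

```python
SUBSIDY = 0.1  # the government's one-off gift for trading

def bayesian_value(lo, hi):
    # Symmetric prior over the interval: value at the midpoint.
    return (lo + hi) / 2

def maxmin_value(lo, hi):
    # Ambiguity aversion as maxmin: worst-case expectation.
    return lo

def trades(value):
    # Agent A holds the ambiguous 1/2+-1/4 bet, i.e. (1/4, 3/4);
    # agent B holds a plain 1/2 bet. They swap only if both sides
    # gain, counting the subsidy each receives.
    a_gains = value(1/2, 1/2) + SUBSIDY > value(1/4, 3/4)
    b_gains = value(1/4, 3/4) + SUBSIDY > value(1/2, 1/2)
    return a_gains and b_gains

print(trades(bayesian_value))  # True: both swap and are 0.1 richer
print(trades(maxmin_value))    # False: the holder of the plain 1/2 refuses
```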
Right, except this doesn’t seem to have anything to do with ambiguity aversion.
Imagine that one agent owns $100 and the other owns a rock. A government agency wishes to promote trade, and so will offer $10 to any agents that do trade (a one-off gift). If the two agents believe that a rock is worth more than $90, they will trade; if they don’t, they won’t, etc etc
But it has everything to do with ambiguity aversion: the trade only fails because of it. If we reach into the system, and remove ambiguity aversion for this one situation, then we end up unarguably better (because of the symmetry).
Yes, sometimes the subsidy will be so high that even the ambiguity averse will trade, or sometimes so low that even Bayesians won’t trade; but there will always be a middle ground where Bayesians win.
As I said elsewhere, ambiguity aversion seems like the combination of an agent who will always buy below the price a Bayesian would pay, and another who will always sell above the price a Bayesian would pay. Seen like that, your case that they cannot be arbitraged is plausible. But a rock cannot be arbitraged either, so that’s not sufficient.
This example hits the ambiguity averter exactly where it hurts, exploiting the fact that there are deals they will not undertake either as buyer or seller.
No, (un)fortunately it is not so.
I say this has nothing to do with ambiguity aversion, because we can replace (1/2, 1/2+-1/4, 1/10) with all sorts of things which don’t involve uncertainty. We can make anyone “leave money on the table”. In my previous message, using ($100, a rock, $10), I “proved” that a rock ought to be worth at least $90.
If this is still unclear, then I offer your example back to you with one minor change: the trading incentive is still 1/10, and one agent still has 1/2+-1/4, but instead the other agent has 1/4. The Bayesian agent holding 1/2+-1/4 thinks it’s worth more than 1/4 plus 1/10, so it refuses to trade. Whereas the ambiguity-averse agents are under no such illusion.
So, the boot’s on the other foot: we trade, and you don’t. If your example were correct, then mine would be too. But presumably you don’t agree that you are “leaving money on the table”.
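The mirrored case can be sketched the same way (same illustrative assumptions as the earlier sketch: maxmin valuation for the ambiguity-averse agents, and 1/2+-1/4 meaning a probability anywhere in [1/4, 3/4]); now it is the Bayesian pair that refuses.

```python
SUBSIDY = 0.1  # trading incentive, still 1/10

def bayesian_value(lo, hi):
    return (lo + hi) / 2  # midpoint of the interval

def maxmin_value(lo, hi):
    return lo             # worst-case expectation

def trades(value):
    # One agent holds the ambiguous 1/2+-1/4 bet, i.e. (1/4, 3/4);
    # the other holds a plain 1/4 bet. Swap only if both sides gain.
    ambiguous_holder_gains = value(1/4, 1/4) + SUBSIDY > value(1/4, 3/4)
    plain_holder_gains = value(1/4, 3/4) + SUBSIDY > value(1/4, 1/4)
    return ambiguous_holder_gains and plain_holder_gains

print(trades(bayesian_value))  # False: the Bayesian holding 1/2+-1/4 refuses
print(trades(maxmin_value))    # True: both worst-case values are 1/4
```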
Yes. I mean that, when your choice is different from what standard (or for some cases, timeless) decision theory calculates for the same prior beliefs and outcome->utility mapping, you’re losing utility. I can’t tell if you think that this theory does have different outcomes, or if you think that this is “just” a simplification that gives the same outcomes.
I replied to Manfred with the Ellsberg example having 31 instead of 30 red balls. Does that count as different? If so, do I lose utility?
From Manfred’s comments (with which I agree), it looks like yes, you lose utility by failing to buy a bet that has positive EV. You lose half as much if you flip a coin, because sometimes the coin is right...