Well, it would push me away from ambiguity aversion: I would become indifferent between a bet on red and a bet on green, etc.
To put it another way: a frequentist could say to you: “Your Bayesian behaviour is a perfect frequentist model of a situation where:
1. You choose a bet.
2. An urn is selected uniformly at random from the fictional population.
3. An outcome occurs.
It seems totally unreasonable to apply it in the Ellsberg situation or similar ones. For instance, you would then not react if you were in fact told the distribution.”
And actually, as it happens, this isn’t too far from the sort of thing you do hear in frequentist complaints about Bayesianism. You presumably reject this frequentist argument against you.
And I reject your Bayesian argument against me.
There seems to be an issue of magnitude here. There are 3 possible ways the urn can be filled:
1. It could be selected uniformly at random.
2. It could be selected through some unknown process: uniformly at random, biased against me, biased towards blue, biased towards green, always exactly 30⁄30, etc.
3. It could be selected so as to exactly minimize my profits.
2 seems a lot more like 1 than it does like 3. Even without using any Bayesian reasoning, a range is a lot more like the middle of the range than it is like one end of the range.
(This argument seems to suggest a “common-sense human” position between high ambiguity aversion and no ambiguity aversion, but most of us would find that untenable.)
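(As a minimal Monte Carlo sketch of that comparison, in Python. The urn composition, the particular mixture standing in for “some unknown process”, and the focus on a bet on green are all illustrative assumptions, not part of the argument itself:)

```python
import random

NON_RED = 60   # non-red balls, split between blue and green
TOTAL = 90     # 30 red + 60 blue/green
TRIALS = 100_000

def p_green(greens):
    """Chance of drawing green from an urn with `greens` green balls."""
    return greens / TOTAL

# 1. Uniform: the green count is drawn uniformly from 0..60.
uniform = sum(p_green(random.randint(0, NON_RED))
              for _ in range(TRIALS)) / TRIALS

# 2. "Some unknown process": a crude stand-in mixing uniform filling,
#    exactly-30/30 filling, and mildly biased filling.
unknown = sum(p_green(random.choice([random.randint(0, NON_RED), 30, 20, 40]))
              for _ in range(TRIALS)) / TRIALS

# 3. Adversarial: the filler removes every green ball.
adversarial = p_green(0)

print(f"1. uniform:     {uniform:.3f}")      # ~0.333
print(f"2. unknown:     {unknown:.3f}")      # also ~0.333
print(f"3. adversarial: {adversarial:.3f}")  # 0.000
```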
An alternative way of talking about it:
The point I am making is that it is much clearer which direction my new information is supposed to influence you than which direction your information is supposed to influence me. If a variable x is in the range [0,1], finding out that it is actually 0 is very strongly biasing information. For instance, almost every value x could have been before is strictly higher than the new known value. But finding out that it is 1⁄2 does not have a clear direction of bias. Maybe it should make you switch to more confidently betting x is high; maybe it should make you switch to more confidently betting x is low. I don’t know; it depends on details of the case, and is not very robust to slight changes in the situation.
Well then, P(green) = 1⁄3 ± 1⁄3 would be extreme ambiguity aversion (such as would match the adversary I think you are proposing), and P(green) = 1⁄3 exactly would be no ambiguity aversion, so something like P(green) = 1⁄3 ± 1⁄9 would be such a compromise, no? And why is that untenable?
To clarify: the adversary you have in mind, what powers does it have, exactly?
Generally speaking, an adversary would affect my behaviour, unless the loss of ambiguity aversion from the fact that all probabilities are known were exactly balanced by the gain in ambiguity aversion from the fact that said probabilities are under the control of a (limited) adversary.
(Which is similar to saying that finding out the true distribution from which the urn was drawn would indeed affect your behaviour, unless you happened to find that the distribution was the prior you had in mind anyway.)
I don’t get what this range signifies. There should be a data point about how ambiguous it is, which you could use or not use to influence actions. (For instance, if someone says they looked in the urn and it seemed about even, that reduces ambiguity.) But then you want to convert that into a range, which does not refer to the actual range of frequencies (which could be 1⁄3 ± 1⁄3) and is dependent on your degree of aversion, and then you want to convert that into a decision?
Well, in terms of decisions, P(green) = 1⁄3 ± 1⁄9 means that I’d buy a bet on green for the price of a true randomised bet with probability 2⁄9, and sell for the price of one with probability 4⁄9, with the caveats mentioned.
We might say that the price of a left boot is $15 ± $5 and the price of a right boot is $15 ∓ $5.
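(In code, a minimal sketch of that pricing rule; exact fractions just keep the arithmetic visible, and the function name is mine:)

```python
from fractions import Fraction

def bet_prices(p, a):
    """Buy/sell prices for a unit bet on an event with interval
    probability p ± a: buy at the pessimistic end, sell at the
    optimistic end."""
    return p - a, p + a

buy, sell = bet_prices(Fraction(1, 3), Fraction(1, 9))
print(buy, sell)          # 2/9 4/9

# The complementary bet (the "right boot" to this bet's "left boot")
# gets the flipped interval: buy at 1 - sell, sell at 1 - buy.
print(1 - sell, 1 - buy)  # 5/9 7/9
```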
Yes. So basically you are biting a certain bullet that most of us are unwilling to bite: not having a procedure to determine your decisions, and instead just kind of choosing a number in the middle of your range of choices that seems reasonable.
You’re also biting a bullet where you have a certain kind of discontinuity in your preferences with very small bets, I think.
I don’t understand what you mean in the first paragraph. I’ve given an exact procedure for my decisions.
What kind of discontinuities do you have in mind?
How do you choose the interval? I have not been able to see any method other than choosing something that sounds good (choosing the minimum and maximum conceivable would lead to silly Pascal’s-Wager-type things, and probably total paralysis).
The discontinuity: Suppose you are asked to put a fair price f(N) on a bet that returns N if A occurs and 1 if it does not. The function f will have a sharp bend at 1, equivalent to a discontinuity in the derivative.
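(A sketch of that bend, assuming the fair price is the worst case over the probability interval, with P(A) = 1⁄3 ± 1⁄9 as the running example:)

```python
def fair_price(N, p=1/3, a=1/9):
    """Worst-case price of a bet paying N if A occurs and 1 otherwise.
    The expectation is linear in q, so only the endpoints of the
    interval [p - a, p + a] can attain the minimum."""
    return min(q * N + (1 - q) for q in (p - a, p + a))

# Slope is p + a below N = 1 and p - a above it: a sharp bend at N = 1.
for N in (0.8, 0.9, 1.0, 1.1, 1.2):
    print(N, round(fair_price(N), 4))
```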
An alternative ambiguity aversion function, more complicated to define, would give a smooth bend.
Heh. I’m the one being accused of huffing priors? :-)
Okay, granted, there are methods like maximum entropy for Bayesian priors that can be applied in some situations, and the Ellsberg urn is such a situation.
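(A quick check, assuming the urn’s 60 non-red balls and treating the 61 possible green counts as the hypothesis space: the maximum-entropy prior over them is uniform, and it gives P(green) = 1⁄3 exactly:)

```python
# Uniform prior over green counts 0..60 is the max-entropy choice.
counts = range(61)
prior = 1 / 61
p_green = sum(prior * g / 90 for g in counts)
print(p_green)   # 0.333...
```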
Yes, you are correct about the discontinuity in the derivative.
Yes. Because you’re huffing priors. Twice as much, in fact: we have to make up one number; you have to make up two.