OK, now I understand why this is a necessary part of the framework.
I do think there is a problem with strictly choosing the lesser of the two utilities. For example, you would choose 1U with certainty over something like 10U ± 10U. You said that you would still make the ambiguity-averse choice if a few red balls were taken out, but what if almost all of them were removed?
On a more abstract note, your stated reasons for your decision seem to be that you actually care about what might have happened for reasons other than the possibility of it actually happening (does this make sense and accurately describe your position?). I don’t think humans actually care about such things. Probability is in the mind; a difference in what might have happened is a difference in states of knowledge about states of knowledge. A sentence like “I know now that my irresponsible actions could have resulted in injuries or deaths” isn’t actually true given determinism; it’s about what you now believe you should have known in the past. [1] [2]
Getting back to the topic, people’s desires about counterfactuals are desires about their own minds. What Irina and Joey’s mother wants is to not intend to favour either of her children. [3] In reality, the coin is just as deterministic as her decision. Her preference for randomness is about her mind, not reality.
[1] True randomness like that postulated by some interpretations of QM is different and I’m not saying that people absolutely couldn’t have preferences about truly random counterfactuals. Such a world would have to be pretty weird though. It would have to be timeful, for instance, since the randomness would have to be fundamentally indeterminate before it happens, rather than just not known yet, and timeful physics doesn’t even make sense to me.
[2] This is itself a counterfactual, but that’s irrelevant for this context.
[3] Well, my model of her prefers flipping a coin to drawing green or blue balls from an urn, but my model of her does not agree with me on a lot of things. If she were a Bayesian decision theorist, I would expect her to be indifferent between the coin and the urn, but prefer either to having to choose for herself.
For example, you would choose 1U with certainty over something like 10U ± 10U. You said that you would still make the ambiguity-averse choice if a few red balls were taken out, but what if almost all of them were removed?
If I had set P(green) = 1/3 ± 1/3, then yes. But in this case I’m not ambiguity averse to the extreme, like I mentioned. P(green) = 1/3 ± 1/9 was what I had, i.e. (1/2 ± 1/6)(2/3). The tie point would be 20 red balls, i.e. 1/4 exactly versus (1/2 ± 1/6)(3/4).
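If it helps, here is that tie-point arithmetic spelled out (a rough sketch of my own: it just compares the exact P(red) against the worst case of the green interval, which is all this particular comparison needs, and it ignores the virtual interval entirely):

```python
# Sketch of the tie-point arithmetic above.  Assumes the urn has r red balls
# plus the 60 green-or-blue balls, and that the green fraction of those 60
# carries the prior interval (1/2 ± 1/6).  Rule used here: compare the lower
# end of P(green) against the exact P(red).

def p_red(r):
    return r / (r + 60)

def p_green_interval(r):
    centre, width = 1/2, 1/6
    scale = 60 / (r + 60)            # share of the urn that is green-or-blue
    return (centre - width) * scale, (centre + width) * scale

for r in (30, 20, 10):
    lo, hi = p_green_interval(r)
    print(f"r={r}: P(red)={p_red(r):.3f}, P(green) in [{lo:.3f}, {hi:.3f}]")

# r=30: P(red)=0.333, P(green) in [0.222, 0.444]  -> red wins on the worst case
# r=20: P(red)=0.250, P(green) in [0.250, 0.500]  -> the tie point
# r=10: P(red)=0.143, P(green) in [0.286, 0.571]  -> green wins even in the worst case
```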
On a more abstract note, your stated reasons for your decision seem to be that you actually care about what might have happened for reasons other than the possibility of it actually happening (does this make sense and accurately describe your position?).
It makes sense, but I don’t feel this really describes me. I’m not sure how to clarify. Maybe an analogy:
What Irina and Joey’s mother wants is to not intend to favour either of her children.
Maybe. Though I put it to you that the mother wants nothing more than what is “best for her children”. Even if we did agree with her about what is best for each child separately, we might still disagree with her about what is “best for her children”.
Perhaps I just want the “best chance of winning”.
(ADDED:) If it helps, I don’t think the fact that it is she making the decision is the issue—she would wish the same thing to happen if her children were in someone else’s care.
If I had set P(green) = 1/3 ± 1/3, then yes. But in this case I’m not ambiguity averse to the extreme, like I mentioned. P(green) = 1/3 ± 1/9 was what I had, i.e. (1/2 ± 1/6)(2/3). The tie point would be 20 red balls, i.e. 1/4 exactly versus (1/2 ± 1/6)(3/4).
Well, utility is invariant under positive affine transformations, so you could have 30U ± 10U and shift the origin so you have 10U ± 10U. More intuitively, if you have 30U ± 10U, you can regard this as 20U + (20U,0U) and you would be willing to trade this for 21U, but you’re guaranteed the first 20U and you would think it’s excessive to trade (20U,0U) for just 1U.
Maybe. Though I put it to you that the mother wants nothing more than what is “best for her children”. Even if we did agree with her about what is best for each child separately, we might still disagree with her about what is “best for her children”.
Perhaps I just want the “best chance of winning”.
Interesting.
(ADDED:) If it helps, I don’t think the fact that it is she making the decision is the issue—she would wish the same thing to happen if her children were in someone else’s care.
What if they were in the care of her future self who already flipped the coin? Why is this different?
Bonus scenario: There are two standard Ellsberg-paradox urns, each paired with a coin. You are asked to pick one; you get a reward iff ((green and heads) or (blue and tails)). At first you are indifferent, as both are identical. However, before you make your selection, one of the coins is flipped. Are you still indifferent?
you would think it’s excessive to trade (20U,0U) for just 1U.
What bet did you have in mind that was worth (20U,0U)?
One of the simplest examples, if P(green) = 1/3 ± 1/9, would be 70U if green, −20U if not green. Does it still seem excessive to be neutral to that bet, and to trade it for a certain 1U (with the caveats mentioned)?
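For concreteness, a quick check (my arithmetic) that this bet comes out as the (20U,0U) one, just by evaluating the expected utility at the two ends of the interval:

```python
# Expected utility of "70U if green, -20U if not green" at the two ends of
# the interval P(green) = 1/3 ± 1/9.  Exact fractions, to avoid float noise.
from fractions import Fraction

p_lo = Fraction(1, 3) - Fraction(1, 9)
p_hi = Fraction(1, 3) + Fraction(1, 9)
win, lose = 70, -20

def eu(p):
    return p * win + (1 - p) * lose

print(eu(p_lo), eu(p_hi))   # 0 and 20, i.e. the (20U,0U) bet
```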
What if they were in the care of her future self who already flipped the coin? Why is this different?
This I don’t understand. She is her future self, isn’t she?
Bonus scenario:
Oh boy!
There are two standard Ellsberg-paradox urns, each paired with a coin. You are asked to pick one; you get a reward iff ((green and heads) or (blue and tails)). At first you are indifferent, as both are identical. However, before you make your selection, one of the coins is flipped. Are you still indifferent?
So there are two urns, and one coin is going to be flipped. No matter what, I’m offered a randomised bet on the second urn. If the coin comes up heads I’ll be offered a bet on green on the first urn; if the coin comes up tails I’ll be offered a bet on blue on the first urn. So it looks like my options are:
A) choose urn 1 either way
B) choose urn 1 (i.e. green) if the coin comes up heads, choose urn 2 if the coin comes up tails
C) choose urn 2 if the coin comes up heads, choose urn 1 (i.e. blue) if the coin comes up tails
D) choose urn 2 either way
And to be pedantic:
E) flip my own coin to randomise between options B and C.
I am indifferent between A, D, and E, which I prefer to B or C.
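In case the arithmetic behind that ordering isn’t obvious, here is a rough sketch (a simplification of my own: each strategy is scored by the range its ex-ante chance of winning can take as the unknown green fractions vary over my (1/2 ± 1/6) prior interval, and the virtual interval is ignored):

```python
# Scoring the strategies above.  A bet on urn 1 is on green after heads and
# on blue after tails; urn 2's own coin is flipped later, so a bet on urn 2
# is worth exactly 1/2 * 2/3 = 1/3 regardless of its composition.

LO, HI = 1/3 - 1/9, 1/3 + 1/9        # interval for P(green) in urn 1

def win_prob(strategy, p_green_1):
    """Ex-ante win probability, before either coin is flipped.
    strategy maps the first coin's outcome ('H'/'T') to an urn (1 or 2)."""
    value = {('H', 1): p_green_1, ('T', 1): 2/3 - p_green_1,
             ('H', 2): 1/3,       ('T', 2): 1/3}
    return 0.5 * value[('H', strategy['H'])] + 0.5 * value[('T', strategy['T'])]

strategies = {
    'A (urn 1 either way)':       {'H': 1, 'T': 1},
    'B (urn 1 on heads, else 2)': {'H': 1, 'T': 2},
    'C (urn 2 on heads, else 1)': {'H': 2, 'T': 1},
    'D (urn 2 either way)':       {'H': 2, 'T': 2},
}

for name, s in strategies.items():
    probs = [win_prob(s, p) for p in (LO, HI)]
    print(f"{name}: [{min(probs):.4f}, {max(probs):.4f}]")

# A and D (and E, being a 50/50 mix of B and C) come out at exactly 1/3;
# B and C span 1/3 ± 1/18, so the worst-case comparison prefers A, D, E.
```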
Generally, we seem to be really overanalysing the phrase “ought to flip a coin”.
Huh, my explanations in that last post were really bad. I may have used a level of detail calibrated for simpler points, or I may have just not given enough thought to my level of detail in the first place.
you would think it’s excessive to trade (20U,0U) for just 1U.
What bet did you have in mind that was worth (20U,0U)? One of the simplest examples, if P(green) = 1/3 ± 1/9, would be 70U if green, −20U if not green. Does it still seem excessive to be neutral to that bet, and to trade it for a certain 1U (with the caveats mentioned)?
What if I told you that the balls were either all green or all blue? Would you regard that as (20U,0U) (that was basically the bet I was imagining but, on reflection, it is not obvious that you would assign it that expected utility)? Would you think it equivalent to the (20U,0U) bet you mentioned and not preferable to 1U?
There are two standard Ellsberg-paradox urns, each paired with a coin. You are asked to pick one; you get a reward iff ((green and heads) or (blue and tails)). At first you are indifferent, as both are identical. However, before you make your selection, one of the coins is flipped. Are you still indifferent?
So it looks like my options are:
A) choose urn 1 either way
B) choose urn 1 (i.e. green) if the coin comes up heads, choose urn 2 if the coin comes up tails
C) choose urn 2 if the coin comes up heads, choose urn 1 (i.e. blue) if the coin comes up tails
D) choose urn 2 either way
And to be pedantic: E) flip my own coin to randomise between options B and C.
I am indifferent between A, D, and E, which I prefer to B or C.
So in the standard Ellsberg paradox, you wouldn’t act non-Bayesianly if you were told “The reason I’m asking you to choose between red and green rather than red and blue is because of a coin flip”, but you’d still prefer red if all three options were allowed? I guess that is at least consistent.
What if they were in the care of her future self who already flipped the coin? Why is this different?
This I don’t understand. She is her future self, isn’t she?
This is getting at a similar idea to the last one. What seems like the same option, like green or Irina, becomes more valuable when there is an interval due to a random event, even though the random event has already occurred and the result is now known with certainty. This seems to be going against the whole idea of probability being about mental states; even though the uncertainty has been resolved, its status as ‘random’ still matters.
What if I told you that the balls were either all green or all blue?
Hmm. Well, with the interval prior I had in mind (footnote 7), this would result in very high (but not complete) ambiguity. My guess is that’s a limitation of two dimensions—it’ll handle updating on draws from the urn but not “internals” like that. But I’m guessing. (1/2 ± 1/6) seems like a reasonable prior interval for a structureless event.
So in the standard Ellsberg paradox, you wouldn’t act non-Bayesianly if you were told “The reason I’m asking you to choose between red and green rather than red and blue is because of a coin flip”
If I take the statement at face value, sure.
but you’d still prefer red if all three options were allowed?
Yes, but again I could flip a coin to decide between green and blue then.
This seems to be going against the whole idea of probability being about mental states;
Well, okay. I don’t think this method has any metaphysical consequences, so I should be able to adopt your stance on probability. I’d say (for the sake of argument) that the probability intervals are still about the mental states that I think you mean. However, these mental states still leave the correct course of action underdetermined, and the virtual interval represents one degree of freedom. There is no rule for selecting the prior virtual interval. 0 is the obvious value, but any initial value is still dynamically consistent.
My guess is that’s a limitation of two dimensions—it’ll handle updating on draws from the urn but not “internals” like that. But I’m guessing. (1/2 ± 1/6) seems like a reasonable prior interval for a structureless event.
Would a single ball that is either green or blue work?
0 is the obvious value, but any initial value is still dynamically consistent.
I agree that your decision procedure is consistent, not susceptible to Dutch books, etc.
I don’t think this method has any metaphysical consequences, so I should be able to adopt your stance on probability. I’d say (for the sake of argument) that the probability intervals are still about the mental states that I think you mean.
I don’t think this is true. Whether or not you flip the coin, you have the same information about the number of green balls in the urn; the total information is different, but the part about the green balls is the same. In order to follow your decision algorithm while believing that probability is about incomplete information, you always have to use all of your knowledge in decisions, even knowledge that, like the coin flip, is ‘uncorrelated’ with what you are betting on (if I can use that word for something that isn’t being assigned a probability). This is consistent with the letter of what I wrote, but I think that a bet about whether a green ball will be drawn next should use your knowledge about the number of green balls in the urn, not your entire mental state.
Would a single ball that is either green or blue work?
That still seems like a structureless event. No abstract example comes to mind, but there must be concrete cases where Bayesians disagree wildly about the prior probability of an event (95%). Some of these cases should be candidates for very high (but not complete) ambiguity.
I think that a bet about whether a green ball will be drawn next should use your knowledge about the number of green balls in the urn, not your entire mental state.
I think you’re really saying two things: the correct decision is a function of (present, relevant) probability, and probability is in the mind. I’d say the former is your key proposition. It would be sufficient to rule out that an agent’s internal variables, like the virtual interval, could have any effect. I’d say the metaphysical status of probability is a red herring (but I would also accept a 50:50 green-blue herring).
Of course even for Bayesians there are equiprobable options, so decisions can’t be entirely a function of probability. More precisely, your key proposition is that probabilistic indecision has to be transitive. Ambiguity would be an example of intransitive indecision.
Would a single ball that is either green or blue work?
That still seems like a structureless event.
Okay.
I think you’re really saying two things: the correct decision is a function of (present, relevant) probability, and probability is in the mind.
Of course even for Bayesians there are equiprobable options, so decisions can’t be entirely a function of probability.
Well, once you assign probabilities to everything, you’re mostly a Bayesian already. I think the best summary would be that when one must make a decision under uncertainty, preference between actions should depend on and only on one’s knowledge about the possible outcomes.
More precisely, your key proposition is that probabilistic indecision has to be transitive. Ambiguity would be an example of intransitive indecision.
Aren’t you violating the axiom of independence but not the axiom of transitivity?
I’d say the former is your key proposition. It would be sufficient to rule out that an agent’s internal variables, like the virtual interval, could have any effect.
I’m not really sure what a lot of this means. The virtual interval seems to me to be subjectively objective in the same way probability is. Also, do you mean ‘could have any effect’ in the normative sense of an effect on what the right choice is?
I think the best summary would be that when one must make a decision under uncertainty, preference between actions should depend on and only on one’s knowledge about the possible outcomes.
To quote the article you linked: “Jaynes certainly believed very firmly that probability was in the mind … there was only one correct prior distribution to use, given your state of partial information at the start of the problem.”
I have not specified how prior intervals are chosen. I could (for the sake of argument) claim that there was only one correct prior probability interval to assign to any event, given the state of partial information.
At no time is the agent deciding based on anything other than the (present, relevant) probability intervals, plus an internal state variable (the virtual interval).
Aren’t you violating the axiom of independence but not the axiom of transitivity?
My decisions violate rule 2 but not rule 1. Unambiguous interval comparison violates rule 1 and not rule 2. My decisions are not totally determined by unambiguous interval comparisons.
Perhaps an example: there is an urn with 29 red balls, 2 orange balls, and 60 balls that are either green or blue. The choice between a bet on red and on green is ambiguous. The choice between a bet on (red or orange) and on green is ambiguous. But the choice between a bet on (red or orange) and on red is perfectly clear. Ambiguity is intransitive.
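A rough way to encode that example (my own illustration, using the same (1/2 ± 1/6) prior share for the 60 unknown balls; a comparison only counts as clear when one bet’s lower end beats the other bet’s upper end):

```python
# 91 balls: 29 red, 2 orange, 60 green-or-blue.  Probabilities as (lo, hi)
# intervals; a bet is clearly better only if its lower end beats the other
# bet's upper end.
TOTAL = 29 + 2 + 60

red        = (29 / TOTAL, 29 / TOTAL)              # known exactly
red_orange = (31 / TOTAL, 31 / TOTAL)              # known exactly
green      = ((1/2 - 1/6) * 60 / TOTAL,            # (1/2 ± 1/6) of the
              (1/2 + 1/6) * 60 / TOTAL)            # 60 unknown balls

def compare(a, b):
    if a[0] > b[1]:
        return "first is clearly better"
    if b[0] > a[1]:
        return "second is clearly better"
    return "ambiguous"

print(compare(red, green))          # ambiguous
print(compare(red_orange, green))   # ambiguous
print(compare(red_orange, red))     # first is clearly better
```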
Now it is true that I will still make a choice in ambiguous situations, but this choice depends on a “state variable”. In unambiguous situations the choice is “stateless”.
I’m not really sure what a lot of this means
Sorry about that. Maybe I’ve been clearer this time around?
To quote the article you linked: “Jaynes certainly believed very firmly that probability was in the mind … there was only one correct prior distribution to use, given your state of partial information at the start of the problem.”
I have not specified how prior intervals are chosen. I could (for the sake of argument) claim that there was only one correct prior probability interval to assign to any event, given the state of partial information.
At no time is the agent deciding based on anything other than the (present, relevant) probability intervals, plus an internal state variable (the virtual interval).
Well, you’d have to say how you choose the interval. Jaynes justified his prior distributions with symmetry principles and maximum entropy. So far, your proposals allow the interval to depend on a coin flip that has no effect on the utility or on the process that does determine the utility. That is not what predicting the results of actions looks like.
Now it is true that I will still make a choice in ambiguous situations, but this choice depends on a “state variable”. In unambiguous situations the choice is “stateless”.
Given an interval, your preferences obey transitivity even though ambiguity doesn’t, right? I don’t think that nontransitivity is the problem here; the thing I don’t like about your decision process is that it takes into account things that have nothing to do with the consequences of your actions.
I’m not really sure what a lot of this means
Sorry about that. Maybe I’ve been clearer this time around?
I only mean that middle paragraph, not the whole comment.
If there is nothing wrong with having a state variable, then sure, I can give a rule for initialising it, and call it “objective”. It is “objective” in that it looks like the sort of thing that Bayesians call “objective” priors.
E.g. you have an objective prior in mind for the Ellsberg urn, presumably uniform over the 61 configurations, perhaps based on max entropy. What if instead there had been one draw (with replacement) from the urn, and it had been green? You can’t apply max entropy now. That’s OK: apply max entropy “retroactively” and run the usual update process to get your initial probabilities.
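Concretely, the Bayesian half of that would look something like this (a sketch of my own; the virtual interval itself isn’t modelled here, just the retroactive prior and the update):

```python
# Retroactive max-entropy prior over the 61 configurations of the Ellsberg
# urn (g green balls out of 60, plus 30 red: 90 balls in all), updated on a
# single draw (with replacement) that came up green.
G = range(61)
prior = [1 / 61] * 61                        # uniform: max entropy, applied retroactively

likelihood = [g / 90 for g in G]             # chance of drawing green in configuration g
unnorm = [p * l for p, l in zip(prior, likelihood)]
Z = sum(unnorm)
posterior = [u / Z for u in unnorm]

# Updated probability that the next draw is green.
p_next_green = sum(post * g / 90 for post, g in zip(posterior, G))
print(p_next_green)                          # about 0.45, up from 1/3 before the draw
```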
So we could normally start the state variable at the “natural value” (virtual interval = 0; and yes, as it happens, this is also justified by symmetry in this case). But if there is information to consider, then we set it retroactively and run the decision method forward to get its starting value.
This has a similar claim to objectivity as the Bayesian process, so I still think the point of contention has to be in using stateful behaviour to resolve ambiguity.
E.g. you have an objective prior in mind for the Ellsberg urn, presumably uniform over the 61 configurations, perhaps based on max entropy.
Well, that would correspond to a complete absence of knowledge that would favour any configuration over any other, but I do endorse this basic framework for prior selection.
So we could normally start the state variable at the “natural value” (virtual interval = 0; and yes, as it happens, this is also justified by symmetry in this case).
Doesn’t an interval of 0 just recover Bayesian inference?