No, (un)fortunately it is not so.
I say this has nothing to do with ambiguity aversion, because we can replace (1/2, 1/2 ± 1/4, 1/10) with all sorts of things which don't involve uncertainty. We can make anyone "leave money on the table". In my previous message, using ($100, a rock, $10), I "proved" that a rock ought to be worth at least $90.
If this is still unclear, then I offer your example back to you with one minor change: the trading incentive is still 1/10, and one agent still has 1/2 ± 1/4, but the other agent now has 1/4. The Bayesian agent holding 1/2 ± 1/4 thinks it's worth more than 1/4 plus 1/10, so it refuses to trade, whereas the ambiguity-averse agents are under no such illusion.
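Here is a quick sketch of the arithmetic. The maxmin (worst-case) valuation I use for the ambiguity-averse agent is one standard way to model ambiguity aversion, not necessarily the exact rule under discussion, and the bet payoffs are normalized to 1:

```python
# Modified example: trading incentive 1/10, one agent holds the ambiguous bet
# "1/2 +- 1/4" (probability somewhere in [1/4, 3/4]), the other offers a bet
# worth 1/4. All bets pay 1 if they win, so values equal probabilities.

ambiguous_interval = (0.25, 0.75)   # probability interval for the ambiguous bet
offer = 0.25 + 0.10                 # unambiguous bet (1/4) plus the incentive (1/10)

# A Bayesian agent collapses the interval to its midpoint (by symmetry / max entropy):
bayesian_value = sum(ambiguous_interval) / 2              # 0.5
print("Bayesian trades:", offer > bayesian_value)         # False: 0.35 < 0.5, refuses

# A maxmin ambiguity-averse agent values the bet it would give up at the
# interval's lower end, so the offer looks strictly better:
maxmin_value = ambiguous_interval[0]                      # 0.25
print("Ambiguity-averse trades:", offer > maxmin_value)   # True: 0.35 > 0.25, trades
```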
So the boot's on the other foot: we trade, and you don't. If your example were correct, then mine would be too. But presumably you don't agree that you are "leaving money on the table".
If there is nothing wrong with having a state variable, then sure, I can give a rule for initialising it, and call it “objective”. It is “objective” in that it looks like the sort of thing that Bayesians call “objective” priors.
E.g. you have an objective prior in mind for the Ellsberg urn, presumably uniform over the 61 configurations, perhaps based on max entropy. What if instead there had been one draw (with replacement) from the urn, and it had been green? You can't apply max entropy now. That's OK: apply max entropy "retroactively" and run the usual update process to get your initial probabilities.
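For concreteness, a minimal sketch of that retroactive update, assuming the standard Ellsberg composition (90 balls, 30 red, 60 split between green and blue, hence the 61 configurations):

```python
# Uniform ("max entropy") prior over the 61 configurations, set retroactively,
# then the usual Bayesian update on one observed green draw (with replacement).

configs = range(61)                # g = number of green balls among the 60, 0..60
prior = [1 / 61] * 61              # uniform prior over configurations

# Likelihood of drawing green given g green balls in a 90-ball urn:
likelihood = [g / 90 for g in configs]

# Bayes: posterior proportional to prior * likelihood, then normalize.
unnorm = [p * l for p, l in zip(prior, likelihood)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

# Resulting initial probability of green on the next draw:
p_green = sum(pg * (g / 90) for pg, g in zip(posterior, configs))
print(round(p_green, 4))           # ~0.4481, up from 1/3 under the uniform prior
```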
So we could normally start the state variable at the "natural value" (virtual interval = 0; and yes, as it happens, this is also justified by symmetry in this case). But if there is information to consider, then we set it retroactively and run the decision method forward to get its starting value.
This has a claim to objectivity similar to the Bayesian process's, so I still think the point of contention has to be the use of stateful behaviour to resolve ambiguity.