It is the most commonly dropped axiom. Dropping it has the advantage of allowing you to use the framework to model a wider range of intelligent agents, increasing the scope of the model.
Recall that the way we rationalized away money risk aversion was to claim that money units become less useful as our wealth increases. Is there some rationalization which shows that utility units become less pleasing as happiness increases?
The independence axiom says “no”—I think—though it is “just” an axiom.
For the last question: if you drop axioms you are still usually left with expected utility maximisation, though it depends on exactly how much you drop at once. For example, maybe all that is left is plain utility maximisation.
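A worked version of that “no”, as I understand it (the 50/50 gamble and the 0 and 100 util figures below are my own illustration, not from this exchange). For a 50/50 gamble between outcomes $x$ and $y$, the vNM representation is

\[
U\big(\tfrac{1}{2}x \oplus \tfrac{1}{2}y\big) \;=\; \tfrac{1}{2}\,u(x) + \tfrac{1}{2}\,u(y),
\]

which is linear in the utilities themselves. So if $u(x) = 0$ and $u(y) = 100$, the gamble is worth exactly 50 utils, the same as a sure outcome worth 50 utils, and the agent must be indifferent between them. “Utility units become less pleasing as happiness increases” would mean strictly preferring the sure 50 utils to the gamble, and no expected utility maximiser can do that; the linearity in probabilities that blocks it is exactly what the independence axiom buys. (Applying a concave transform to $u$ only defines a different expected utility maximiser; it never yields risk aversion measured in that agent's own utility units.)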
That’s the issue of the usefulness of the Axiom of Independence—I believe.
You can drop that—though you are still usually left with expected utility maximisation.
Then you become a money pump.
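Here is a minimal Monte Carlo sketch of that money-pump claim in its usual dynamic form (entirely my own toy construction; the prizes A, B, C, the probability P, the FEE and the agent's hard-coded preferences are assumptions, not anything stated in this exchange). The agent prefers A to B, yet prefers the compound lottery built around B, violating independence, so it pays once to trade lotteries and then pays again to trade back when the lottery partly resolves:

```python
import random

# Toy money-pump sketch (illustrative assumption, not from the thread).
# Outcomes are cash prizes. The agent strictly prefers A to B, yet prefers the
# compound lottery L_B = P*B + (1-P)*C to L_A = P*A + (1-P)*C -- exactly the
# preference pattern the independence axiom forbids.

A, B, C = 100.0, 90.0, 0.0   # final prizes (assumed numbers)
P = 0.5                      # chance the A-or-B branch of the lottery is reached
FEE = 1.0                    # what the agent will pay for a trade it likes

def prefers_LB_to_LA() -> bool:
    # Hard-coded independence violation: with C in the background,
    # the agent ranks the B-lottery above the A-lottery.
    return True

def prefers_A_to_B() -> bool:
    return True

def run_once(exploited: bool) -> float:
    """Play out one round and return the agent's net payoff."""
    fees = 0.0
    holding = "A"                      # endowed with lottery L_A
    if exploited and prefers_LB_to_LA():
        fees += FEE                    # pays to trade L_A for L_B
        holding = "B"
    if random.random() >= P:           # the lottery resolves to the C branch
        return C - fees
    prize = A if holding == "A" else B
    if exploited and prize == B and prefers_A_to_B():
        fees += FEE                    # pays to trade B back for A
        prize = A
    return prize - fees

def average_payoff(n: int, exploited: bool) -> float:
    return sum(run_once(exploited) for _ in range(n)) / n

if __name__ == "__main__":
    random.seed(0)
    n = 200_000
    print("just holding L_A :", round(average_payoff(n, exploited=False), 2))
    print("pumped agent     :", round(average_payoff(n, exploited=True), 2))
    # Same prize distribution either way, but the pumped agent ends up about
    # FEE * (1 + P) poorer on average: it paid to finish where it started.
```

The point of the simulation is only that each trade is locally acceptable to this particular agent, and that the two trades together leave it with the same prize distribution minus the fees; whether an agent that merely drops the axiom must actually take both trades in sequence is a further question.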
It is the most commonly dropped axiom. Dropping it has the advantage of allowing you to use the framework to model a wider range of intelligent agents, increasing the scope of the model.
What is the issue? Where, in my account, does AoI come into play? And why do you suggest that AoI only sometimes makes a difference?
My comments about independence were triggered by:
The independence axiom says “no”—I think—though it is “just” an axiom.
For the last question: if you drop axioms you are still usually left with expected utility maximisation, though it depends on exactly how much you drop at once. For example, maybe all that is left is plain utility maximisation.