This is not a complete answer, it’s just a way of thinking about the matter that was helpful to me in the past, and so might be to you too:
Saying that you ought to maximise the expected value of a real-valued function of everything still leaves a huge amount of freedom; you can encode what you want by picking the right function over the right things.
So you can think of it as a language: a conventional way of expressing decision strategies. If you can write a decision strategy as $\arg\max_a \mathbb{E}[U \mid \operatorname{do}(A=a)]$, then you have written the problem in the language of utility.
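To make that template concrete, here is a minimal sketch in Python. The action set, the interventional outcome model, and the utility function are all made up for illustration, not taken from anything above; the point is only that once you have picked $P(\text{outcome} \mid \operatorname{do}(A=a))$ and $U$, "maximise expected utility" reduces to an argmax over a finite set.

```python
# Toy sketch of argmax_a E[U | do(A=a)].
# Everything below (actions, outcome model, utilities) is a made-up example.

actions = ["stay_home", "go_out"]

# P(outcome | do(A=a)): a hypothetical interventional distribution over outcomes.
outcome_dist = {
    "stay_home": {"bored": 0.7, "rested": 0.3},
    "go_out":    {"fun": 0.6, "caught_in_rain": 0.4},
}

# U(outcome): a hypothetical real-valued utility over outcomes.
utility = {"bored": -1.0, "rested": 2.0, "fun": 5.0, "caught_in_rain": -3.0}

def expected_utility(action):
    """E[U | do(A=action)] under the toy model above."""
    return sum(p * utility[o] for o, p in outcome_dist[action].items())

best_action = max(actions, key=expected_utility)
print(best_action, expected_utility(best_action))  # go_out 1.8
```

Note that the argmax itself is trivial; all the interesting work went into choosing the outcome model and the utility function, which is exactly the sense in which this is a language for encoding what you want.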
Like any sufficiently general language, this won't prevent you from expressing anything at all, but it will make some things easier to express than others. If you know at least two languages, you'll have run into short words in one that have no good single-word translation in the other.
Similarly, thinking that you ought to maximise expected utility, and then asking “what is my utility then?”, naturally suggests to your mind certain kinds of strategies rather than others.
Some decisions may need many epicycles to be cast as utility maximisation. Whether that indicates a problem with utility maximisation, with the specific decision, or with the utility function is left to your judgement.
There is currently no theory of decision-making that just works for everything, so there is no totally definitive argument for maximising expected utility. You'll have to learn, with further experience, when and how not to apply it.
Thank you for your insight. The problem with this view of utility “just as a language” is that sometimes I feel the conclusions of utility maximization are not “rational”, and I cannot figure out why they should in fact be rational if the language is not saying anything that is meaningful to my intuition.
if the language is not saying anything that is meaningful to my intuition.
When you learn a new language, you eventually form new intuitions. If you stick to your existing intuitions, you do not grow. Your current intuitions do not generalize to the full extent of your potential ability.
When I was a toddler, I never grew new concepts by rigorous construction; yet I ended up mostly knowing what was around me. Then, to go further, I employed abstract thought, and had to mold and hew my past intuitions. Some things I had intuitively perceived turned out to be likely false: hallucinations.
Later, when I was learning Serious Math, I forgot that learning does not work by a straight stream of logic and proofs, and instead demanded that what I was reading both match my intuitions, and be properly formal and justified. Quite the ask!
The problem with this view of utility “just as a language”
My opinion is that if you think the problem lies in seeing it as a language, a new lens on the world, specifically because the new language does not match your present intuition, then you are pointing at the wrong problem.
If instead you meant to prosaically plead for object-level explanations that would clarify, oh uhm sorry I don’t actually know, I’m an improvised teacher, I actually have no clue, byeeeeee