Real-life expected utility maximization [response to XiXiDu]

This was supposed to be a comment under XiXiDu's recent post, but it got a bit unwieldy, so I'm posting it top-level.

XiXiDu starts his post with:

I would like to ask for help on how to use expected utility maximization, in practice, to maximally achieve my goals.

I think the best single-sentence answer is: don’t.

The usual way of making decisions is to come up with intuitive evaluations of various options and go with the one that feels most attractive. Sometimes you will feel (intuitively) that it would be good to spend some more time thinking about the decision. So you'll put your initial intuitions into words (which are chosen by another intuitive black box), come up with a causal model of your situation (generated by yet another intuitive module), then experience intuitive feelings about those thoughts, maybe come up with alternative thoughts and compare them (intuitively), or maybe turn those feelings-about-thoughts into second-order thoughts and continue the process until you run out of time, get bored (intuitively), or deliberatively decide that you've analyzed enough (by having run another progression of interweaving thoughts and intuitions in parallel to the first one).

In a sense, all thinking is intuition. You don't get to jump out of the system. There's no choice between using intuition and using some completely different process called deliberative reasoning, but rather a choice between using a small amount of object-level intuition and using lots of intuition turned upon itself.

That doesn't mean we can't improve our thinking processes, just that we do it by gaining knowledge and experience, which then shape our intuitive thinking, rather than by somehow fundamentally altering their nature. An engineer and a composer both rely on intuition, but it's the engineer who will succeed in building an internal combustion engine and the composer who will succeed in designing an aesthetically pleasing progression of sounds.

Mathematics is often pointed to as the foremost example of strict, logical thinking. Yet, mathematicians rely on intuition too. Mathematical proofs are considered trustworthy because the rules of proof formation are sufficiently simple that humans can train themselves to reliably distinguish proofs from non-proofs. A mathematician looks at a line in a proof and asks herself ‘is that a correct application of logical inference rules?’ She either spots a violation or gets a feeling that it’s in fact correct. There’s a very high chance she got it right but no mystical state of pure logic that guarantees it. And of course, while proofs have to obey formal rules, the only rule for how you’re supposed to think when trying to come up with one is ‘anything goes’.
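As an aside the original discussion doesn't make, but which illustrates the point: proof checking is mechanical enough that software can do it. Here is a toy proof in Lean 4 (my example, purely illustrative); the checker's only job is to verify that each step is a legal application of an inference rule, which is exactly the kind of verification the mathematician above performs by trained intuition.

```lean
-- A toy machine-checkable proof: from evidence for p and evidence for q,
-- conjunction introduction yields evidence for p ∧ q. The checker verifies
-- each step mechanically; no intuition is involved at this stage.
example (p q : Prop) (hp : p) (hq : q) : p ∧ q :=
  And.intro hp hq
```

Coming up with the proof in the first place is, of course, still 'anything goes'.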

So how do you use the principle of expected utility maximization to maximally achieve your goals?

Sometimes, in very specific circumstances, you can use it directly, but that doesn't mean you turn into an idealized expected utility maximizer. You are applying your domain-specific skill of mathematics to a specific formalism, a formalism that seems useful to you because earlier you used another domain-specific skill: seeing useful connections between reality and mathematical models.
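For concreteness, here is a minimal sketch (in Python) of what such a direct application can look like. Every number in it is invented for illustration; the hard, intuitive work is coming up with the options, probabilities, and utilities in the first place, after which the formalism itself is a small, mechanical computation.

```python
# Minimal sketch of direct expected utility maximization.
# All options, probabilities, and utilities are invented for illustration.

options = {
    "take umbrella": [(0.3, 60), (0.7, 80)],   # (probability, utility): rain / no rain
    "leave umbrella": [(0.3, 0), (0.7, 100)],  # soaked if it rains, unencumbered if not
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities over mutually exclusive outcomes."""
    return sum(p * u for p, u in outcomes)

for name, outcomes in options.items():
    print(f"{name}: EU = {expected_utility(outcomes):.1f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print("choose:", best)  # take umbrella: 74.0 beats leave umbrella: 70.0
```

Notice that nothing in the computation tells you where the numbers come from; that part remains intuition all the way down, which is the point of this post.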

Or you can ignore it completely and focus on more practical-sounding advice based on the long list of biases catalogued by science. For example, you can learn the rule 'If I want to believe that someone has some persistent trait based on a single observation, that's highly suspicious (fundamental attribution error). Doubly so if that belief would make me feel smug.' It seems that this has nothing to do with any idealized formalism. But to declare something a bias, you need some standard against which you can compare observed behavior. If people had thought it pointless to come up with idealized models of correct belief formation or decision making because we can never completely avoid intuition, they might never have bothered researching cognitive biases. So in a way, expected utility maximization (or Bayesian induction) is a prerequisite idea for all those practically applicable results.
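To show the standard at work behind that particular rule, here is a hedged sketch of the comparison implicitly being made; the base rate and likelihoods are invented for illustration. Run through Bayes' theorem, a single observation typically licenses a much weaker trait-inference than the confident one we intuitively jump to, and that gap is what earns the intuition the label 'bias'.

```python
# Sketch of the idealized standard behind the fundamental attribution error.
# All numbers are invented for illustration.

prior_trait = 0.1           # base rate: fraction of people who are dispositionally rude
p_obs_given_trait = 0.9     # chance a dispositionally rude person snaps at you today
p_obs_given_no_trait = 0.2  # chance anyone else snaps at you (bad day, stress, ...)

# Bayes' theorem: P(trait | one rude act)
numerator = p_obs_given_trait * prior_trait
evidence = numerator + p_obs_given_no_trait * (1 - prior_trait)
posterior = numerator / evidence

print(f"P(persistent trait | one observation) = {posterior:.2f}")  # 0.33
# Even with generous likelihoods, one rude act leaves the trait hypothesis
# at about one in three -- far short of the near-certainty the intuitive
# attribution tends to carry.
```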

And in general, the more complete your knowledge of a body of ideas, the better you can apply them in real life. So knowing the general principle that binds the more practically oriented facts together can be helpful in ways that depend on the specific way you look at the world and think about things. This is, once again, the skill of seeing useful connections between mathematical models and reality. If you happen to identify a specific way in which your actions deviate from the model of expected utility maximization, fix it. If you don't, there's no point in worrying that you're not doing it right just because you can't account for all that goes on in your head in formal terms.