No, of course it’s not for “running your life”; that would be the approach of constructing a complete model (the right stance for FAI, the wrong one for human rationality). It’s for mending errors in the mind that runs your life.
The special place of expected utility maximization comes from the conjecture that any coherence constraint on thought can be restated in terms of expected utility maximization. My example can obviously be translated as well, by assigning utilities to outcomes given the possible states of the binary variables X and Y, and a probability to X. This form won’t be the most convenient (the original one may be better), but it’s still equivalent; the structure of what’s required of a coherent opinion is no stronger.
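To make the translation concrete, here is a minimal sketch (the credence values, world encoding, and grid step are my assumptions, not part of the original example). Reading the inconsistent beliefs as credences P(X) = 0, P(Y) = 1, P(X | Y) = 1, a search over joint distributions on the four (X, Y) worlds finds none satisfying all three, so the constraint survives the restatement regardless of what utilities you attach to the outcomes:

```python
from itertools import product

# Hypothetical encoding: a world is (x, y) with x, y in {0, 1}.
# The beliefs from the example, read as credences:
#   P(X) = 0            (believes not-X)
#   P(Y) = 1            (believes Y)
#   P(X and Y) = P(Y)   (believes Y implies X, i.e. P(X | Y) = 1)
# Coarse grid search over joint distributions; none satisfies all three.

STEP = 0.05
GRID = [i * STEP for i in range(int(1 / STEP) + 1)]
EPS = 1e-9

def coherent(p):
    """p maps each world (x, y) to its probability."""
    px = p[(1, 0)] + p[(1, 1)]   # P(X)
    py = p[(0, 1)] + p[(1, 1)]   # P(Y)
    pxy = p[(1, 1)]              # P(X and Y)
    return abs(px) < EPS and abs(py - 1) < EPS and abs(pxy - py) < EPS

found = any(
    coherent({(0, 0): a, (0, 1): b, (1, 0): c, (1, 1): 1 - a - b - c})
    for a, b, c in product(GRID, repeat=3)
    if 1 - a - b - c >= -EPS
)
print("coherent joint distribution exists:", found)  # prints False
```

(The infeasibility is exact, not an artifact of the grid: P(Y) = 1 and P(X | Y) = 1 force P(X ∧ Y) = 1, hence P(X) ≥ 1, contradicting P(X) = 0.)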
It doesn’t matter what utilities you assign to outcomes X and Y; what you have caught by saying
is an error of logic. The person here believes ¬X and X <== Y and Y; from Y and X <== Y, X follows, contradicting ¬X.
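The inconsistency can be checked mechanically; a minimal sketch in Lean (the hypothesis names are mine):

```lean
-- The belief set {¬X, X <== Y, Y} is contradictory:
-- from Y and Y → X we get X, against ¬X.
example (X Y : Prop) (hnx : ¬X) (hyx : Y → X) (hy : Y) : False :=
  hnx (hyx hy)
```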
As I said, it’s just a special case, and utility maximization is not the best form for thinking about it (as you noted, simple logic suffices here). The conjecture is that everything in decision-making is a special case of utility maximization.
Sure, just like every program can be written in Brainfuck. But you wouldn’t actually use that in real life, because it is not efficient.