this is a device for the sanity of mind, and it can be applied at any level of granularity, with concepts however fuzzy and imprecise.
This would be nice if it were true! I presume you mean utility function maximization by “this”?
The thing that “is a device for the sanity of mind, and it can be applied at any level of granularity, with concepts however fuzzy and imprecise” is rationality in the broad sense. The example you give:
If you believe that choosing Y requires X to be true, you don’t believe X to be true, but you choose Y,
is not an instance of maximizing a utility function.
I am not attacking rational thinking in general here, only the specific practice of trying to run your life by maximizing a utility function.
No, of course it’s not for “running your life”; that would be the approach of constructing a complete model (the right stance for FAI, the wrong one for human rationality). It’s for mending errors in the mind that runs your life.
The special place of expected utility maximization comes from the conjecture that any coherence constraint on thought can be restated in terms of expected utility maximization. My example can obviously be translated as well, by assigning a utility to each outcome given the possible states of binary X and Y, and a probability to X. This form won’t be the most convenient (the original one may be better), but it’s equivalent: the structure of what’s required of a coherent opinion is no stronger.
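Concretely, a toy sketch of that translation (the specific utilities and the probability here are illustrative assumptions, nothing more):

```python
# Toy restatement of the logical constraint as expected utility maximization.
# Illustrative assumptions: choosing Y pays off only when X is true, "not Y"
# is a neutral baseline, and believing "not X" means assigning X low probability.

def expected_utility(p_x, utility):
    """Expected utility of each choice, averaging over the binary state X."""
    return {
        choice: p_x * utility[(choice, True)] + (1 - p_x) * utility[(choice, False)]
        for choice in ("Y", "not Y")
    }

utility = {
    ("Y", True): 1.0,    # Y works out if X holds
    ("Y", False): -1.0,  # Y backfires if X fails
    ("not Y", True): 0.0,
    ("not Y", False): 0.0,
}

eu = expected_utility(p_x=0.05, utility=utility)  # "I don't believe X"
assert eu["Y"] < eu["not Y"]  # an expected utility maximizer wouldn't choose Y
```

In this form the incoherence shows up as choosing a dominated option.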
It doesn’t matter what utilities you assign to outcomes X and Y; what you have caught by saying
If you believe that choosing Y requires X to be true, you don’t believe X to be true, but you choose Y,
is an error of logic. The person here believes ¬X, X ⇐ Y, and Y.
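Spelled out as a mechanical check (a minimal sketch; the propositional encoding of the three beliefs is mine):

```python
from itertools import product

# The three beliefs from the example, encoded over binary X and Y.
beliefs = [
    lambda x, y: not x,         # believes "not X"
    lambda x, y: (not y) or x,  # believes "choosing Y requires X", i.e. Y -> X
    lambda x, y: y,             # chooses Y
]

# No truth-value assignment to (X, Y) satisfies all three beliefs at once,
# so the set is inconsistent as a matter of plain propositional logic.
assert not any(
    all(b(x, y) for b in beliefs)
    for x, y in product([False, True], repeat=2)
)
```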
As I said, it’s just a special case, with utility maximization not being the best form for thinking about it (as you noted, simple logic suffices here). The conjecture is that everything in decision-making is a special case of utility maximization.
Sure, just like every program can be written out in brainfuck. But you wouldn’t actually use that in real life, because it is not efficient.