By being internally inconsistent, and only saved by your mistakes in A.
For example, it can be argued that a proper D should treat the risk of dying in all possible ways the same way. If a person’s D considers dying of a shark attack worse than dying of an infection (given a similar level of suffering etc.), and their A has a completely wrong idea of how likely shark attacks and infections are, they might take precautions about sharks and infections that are exactly correct. If they then find out what the likelihoods really are, and start using the corrected A, their decisions suddenly become inconsistent.
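A minimal numerical sketch of how the two mistakes can cancel (all probabilities and disutility weights below are invented for illustration): the flawed D combined with the flawed A yields the same expected disutilities a proper D would compute from the true probabilities, and correcting A alone breaks the cancellation.

```python
# Illustrative numbers only. Expected disutility of a risk =
# A's probability of the outcome * D's disutility of the outcome.

disutility_flawed = {"shark": 1e4, "infection": 1.0}   # D treats shark death as 10,000x worse
disutility_proper = {"shark": 1.0, "infection": 1.0}   # a "proper" D: dying is dying

prob_flawed = {"shark": 1e-11, "infection": 1e-3}      # A wildly underestimates shark attacks
prob_true   = {"shark": 1e-7,  "infection": 1e-3}      # what A should believe

def expected_disutility(prob, disutility):
    return {k: prob[k] * disutility[k] for k in prob}

# The two errors cancel: behaviour matches a proper D using the true probabilities.
print(expected_disutility(prob_flawed, disutility_flawed))  # shark ~1e-07, infection ~1e-03
print(expected_disutility(prob_true, disutility_proper))    # shark ~1e-07, infection ~1e-03

# Correct A but keep the flawed D, and shark precautions suddenly look as urgent as infections.
print(expected_disutility(prob_true, disutility_flawed))    # shark ~1e-03, infection ~1e-03
```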
Of course you can argue from a fundamentalist position that a utility function is “never wrong”, but if you can be trivially Dutch booked, or have ridiculously inconsistent preferences between essentially equivalent outcomes (like dying), then it’s “wrong” as far as I’m concerned.
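As a toy illustration of the Dutch-book point (the preference cycle and prices are invented): an agent with cyclic preferences A over B, B over C, C over A will pay a small fee for each “upgrade” around the cycle and end up holding what it started with, strictly poorer.

```python
# Toy money pump against cyclic (intransitive) preferences; example invented.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # (x, y) means x is preferred to y
fee = 1.0

def offer_swap(holding, offered, money):
    """Agent pays a small fee to swap whenever it prefers the offered item."""
    if (offered, holding) in prefers:
        return offered, money - fee
    return holding, money

holding, money = "A", 10.0
for offered in ("C", "B", "A"):                  # walk once around the preference cycle
    holding, money = offer_swap(holding, offered, money)

print(holding, money)                            # "A" again, but 3.0 poorer
```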
It is not logically inconsistent to prefer dying in one way over another.
If you buy into fundamentalist interpretations of utility functions, then it’s not. If you don’t, then it is: to me there should be a difference in something “meaningful” for there to be a difference in preferences; otherwise it’s not a good utility function.
Even with the fundamentalist interpretation you still get the well-known inconsistencies involving probabilities, so it doesn’t save you.
I think that the strongest critique of D is that most people choose things they later honestly claim were not “what they actually wanted”, i.e. D acts something like a stable utility function D_u with a time- and mood-dependent error term D_error added to it. It causes many people much suffering that their own actions don’t live up to the standards of what they consider to be their true goals.
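A rough simulation of that picture (the utility values and the size of the error term are made up): each choice maximises D_u plus a transient D_error, and the printed rate is how often the chosen option differs from what D_u alone would pick, i.e. a crude proxy for later regret.

```python
import random

random.seed(0)

# Invented D_u values and noise scale, purely for illustration.
options = {"work_on_goals": 1.0, "procrastinate": 0.3}   # stable utility D_u
noise_scale = 0.6                                        # magnitude of D_error

def choose():
    """Pick the option maximising D_u plus a mood/time-dependent error term."""
    noisy = {o: u + random.gauss(0.0, noise_scale) for o, u in options.items()}
    return max(noisy, key=noisy.get)

best_by_d_u = max(options, key=options.get)
trials = 10_000
regretted = sum(choose() != best_by_d_u for _ in range(trials))
print(f"chose against D_u in {regretted / trials:.0%} of trials")   # roughly 20% here
```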
Probabilistic inconsistencies in action are probably less of a problem for humans, though not completely absent.
Even more to the point, imagine D split into two parts: a utility function and a goal-seeking function. Then even if the utility function is never “wrong” per se, the goal-seeking function could use A suboptimally in pursuing the goals. Our D-functions routinely make poor decisions of the second sort, e.g. akrasia.