I think I see another purpose to thinking that you have a numerically well-defined utility function. It’s a pet theory of mine, but here goes:
It pays off to do your reasoning in the “mathematical” mode. This is the mode that kicks in when I ask you what 67 + 49 is, or when I say “if x < y and y < z, is x < z?” Even casting your decision problem in a vague algebraic structure lets you reason comparatively about your options, even if you cannot for the life of you assign any concrete values.
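A minimal sketch of what I mean, where A, B and C are placeholders rather than anything concrete: suppose all you can honestly say is that you prefer option A to option B and option B to option C. Writing preference as $\succ$, transitivity alone gives you

$$A \succ B \;\text{ and }\; B \succ C \;\Rightarrow\; A \succ C,$$

so you can strike C off the list without ever putting a number on any of the three.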
This is probably doubly true with a good understanding of Bayesian probability: you can assign one vague feeling of probability a letter, another vague probability another letter, and then reason mathematically about what to do in order to fulfil your equally vague sense of a utility function.
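A toy version of that, with every symbol made up for illustration: let $p$ be your vague probability that a risky plan works, paying $g$ if it does and costing $\ell$ if it does not, against a safe plan that pays $s$ for sure. Expected utility says take the risk iff

$$p\,g + (1-p)(-\ell) > s \quad\Longleftrightarrow\quad p > \frac{s+\ell}{g+\ell} \quad (\text{assuming } g+\ell > 0),$$

so instead of pinning $p$ down exactly, you only have to judge whether your vague $p$ clears a single threshold.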
I think I might write some serious articles about mental models and mathematical reasoning sometime.