My claim is that expected utility theory is, and should be, silent on the design of human-appropriate utility functions, but that decision theory should include a component focused on that design.
What do you mean by “the design of human-appropriate utility functions”?
Can you give me two examples of useful results Peterson derives from the axioms? That’ll help me target my response.
Actually, let me show you a section of Peterson (2009), which is an updated and (I think) clearer presentation of his axiomatic ex ante approach. It is a bit informal, but mercifully succinct. (The longer, formal presentation is in Peterson 2008.) Here is a PDF I made of the relevant section of Peterson (2009). It’s a bit blurry, but it’s readable.
What do you mean by “the design of human-appropriate utility functions”?
A utility function that accurately reflects the beliefs and values of the human it’s designed for. Someone looking for guidance would get assistance in discovering what their beliefs and values about the situation actually are, rather than just math help and a consistency check. Similarly, someone could accidentally write a utility function that drowns them in vinegar, and it would be nice if the decision-making apparatus noticed and declined to carry it out.
That’s my interpretation of “he’s just saying that it would also be nice to have a decision theory that can tell you what you should choose given what you believe and what you value.”
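To make concrete the baseline I mean by “just math help and a consistency check,” here is a minimal sketch (my own illustration, not Peterson’s, and the function name is made up): given someone’s stated pairwise preferences, it verifies only that they contain no cycles — it offers no help in discovering what those preferences should be.

```python
# A minimal sketch (my own illustration, not Peterson's) of a bare "consistency
# check": given stated strict preferences as (better, worse) pairs, verify only
# that they contain no cycle, i.e. that the ranking could be represented by
# *some* utility function. It says nothing about which values to hold.

def is_consistent(prefers: set[tuple[str, str]]) -> bool:
    """Return True if the strict-preference relation `prefers` has no cycles."""

    def reachable(start: str, goal: str, seen: set[str]) -> bool:
        # Follow chains of preference from `start`; report whether `goal` is hit.
        for better, worse in prefers:
            if better == start and worse not in seen:
                if worse == goal or reachable(worse, goal, seen | {worse}):
                    return True
        return False

    items = {x for pair in prefers for x in pair}
    return not any(reachable(x, x, set()) for x in items)


# apple > banana > cherry is fine; adding cherry > apple makes it cyclic.
print(is_consistent({("apple", "banana"), ("banana", "cherry")}))   # True
print(is_consistent({("apple", "banana"), ("banana", "cherry"),
                     ("cherry", "apple")}))                          # False
```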
Actually, let me show you a section of Peterson (2009), which is an updated and (I think) clearer presentation of his axiomatic ex ante approach.
This looks like it boils down to “the utility of an act is the probability-weighted sum of the utilities of its consequences.” It’s not clear to me what good formulating it that way does, and I don’t like that axiom 4 from the 2009 version looks circular. (You’re allowed to adjust the utilities of different equiprobable outcomes so long as the total utility of the act is preserved. But, uh, aren’t we trying to prove that we can calculate the utility of an act with multiple possible outcome utilities, when so far we’ve only assumed that it works for acts with a single possible outcome utility?)
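To spell out the reading I’m reacting to, and why the axiom-4 step worries me (a sketch in my own notation, not Peterson’s):

```latex
% The ex ante reading I'm reacting to, in my own notation (not Peterson's):
% an act a with possible outcomes o_1, ..., o_n gets utility
\[
  u(a) \;=\; \sum_{i=1}^{n} p(o_i \mid a)\, u(o_i).
\]
% Axiom 4's move, for two equiprobable outcomes, lets you swap
% (u(o_1), u(o_2)) for any pair with the same total, e.g. the common value
% u' = (u(o_1) + u(o_2)) / 2:
\[
  \tfrac{1}{2}\, u(o_1) + \tfrac{1}{2}\, u(o_2)
    \;=\; \tfrac{1}{2}\, u' + \tfrac{1}{2}\, u'.
\]
% But saying the swap "preserves the total utility of the act" already treats
% u(a) as that weighted sum, which is what the axioms were supposed to establish.
```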
Was Thm 4.1 an example of a useful result?