He uses the axioms required to derive the useful results he aimed for, given his approach to formalizing decision problems, and no more than that.
Can you give me two examples of useful results he derives from the axioms? That’ll help me target my response. (I should note that the commentary in the grandparent is targeted at the 2004 paper in the context of the other things you’ve quoted on this page; if there’s relevant material in one of the other links I probably missed it.)
I don’t think Peterson denies the usefulness of traditional axiomatic decision theory for checking the consistency of one’s preferences; he’s just saying that it would also be nice to have a decision theory that can tell you what you should choose given what you believe and what you value.
Agreed. In this comment I want to differentiate between “decision theory” and a component of it, “expected utility theory” (I didn’t differentiate between them in the grandparent). The first studies how to make decisions, and the second studies a particular mathematical technique to isolate the highest-scoring of a set of alternative actions. My claim is that expected utility theory is and should be silent on the design of human-appropriate utility functions, but that decision theory should include a component focused on the design of human-appropriate utility functions. That component will be primarily researched by psychologists: what makes humans happy, what do humans want, how do we align those, what common mistakes do humans make, what intuitions do humans have and when are those useful, and so on.
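To make the division of labor concrete, here’s a toy sketch (the act names and numbers are mine, purely for illustration, not anything from Peterson) of the part expected utility theory covers: beliefs (probabilities) and values (utilities) come in as given inputs, and the math just scores each act and returns the best one. Everything about where those inputs come from is out of scope.

```python
# Toy sketch: expected utility theory takes beliefs (probabilities) and
# values (utilities) as *given* inputs, scores each act, and picks the best.

def expected_utility(act, probabilities, utilities):
    """Probability-weighted sum of outcome utilities for a single act."""
    return sum(p * utilities[(act, outcome)] for outcome, p in probabilities.items())

probabilities = {"rain": 0.3, "sun": 0.7}            # beliefs, assumed given
utilities = {                                         # values, assumed given
    ("take umbrella", "rain"): 5, ("take umbrella", "sun"): 3,
    ("leave umbrella", "rain"): 0, ("leave umbrella", "sun"): 6,
}
acts = ["take umbrella", "leave umbrella"]

best = max(acts, key=lambda a: expected_utility(a, probabilities, utilities))
print(best)  # "leave umbrella"; the math says nothing about how the inputs were obtained
```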
Peterson’s axioms look to me like an attempt to shoehorn human-appropriate utility functions into expected utility theory, which doesn’t seem to augment either the math of calculating expected utilities or the actual design of human-appropriate utility functions. As far as I can tell, that field is too young to profit from an axiomatic approach.
But I said “profit” from axioms and you said “justified” with axioms, and those are different things. It’s not clear to me that Peterson’s axioms are useful for justifying the use of expected utility theory, and my hesitance hinges on the phrase “given what you believe and what you value” from the parent. That means that Peterson’s decision theory takes your beliefs and values as inputs and outputs decisions, which is exactly what traditional decision theory does, and so they look the same to me (and if they’re different, I think it’s because Peterson made his worse, not better). The underlying problem as I see it is that beliefs and values are not given; they have to be extracted, and traditional decision theory underestimated the difficulty of that extraction.
(Side note: decision theory underestimating the difficulty and decision theorists underestimating the difficulty are very different things. Indeed, it’s likely that decision theorists realized the problem was very hard, and so left it to the reader so they wouldn’t have to do it!)
Then the question is how much Peterson 2004 helps its readers extract their beliefs and values. As far as I can tell, there’s very little normative or prescriptive content.
My claim is that expected utility theory is and should be silent on the design of human-appropriate utility functions, but that decision theory should include a component focused on the design of human-appropriate utility functions.
What do you mean by “the design of human-appropriate utility functions”?
Can you give me two examples of useful results he derives from the axioms? That’ll help me target my response.
Actually, let me show you a section of Peterson (2009), which is an updated and (I think) clearer presentation of his axiomatic ex ante approach. It is a bit informal, but is mercifully succinct. (The longer, formal presentation is in Peterson 2008). Here is a PDF I made of the relevant section of Peterson (2009). It’s a bit blurry, but it’s readable.
What do you mean by “the design of human-appropriate utility functions”?
A utility function that accurately reflects the beliefs and values of the human it’s designed for. Someone looking for guidance would get assistance in discovering what their beliefs and values about the situation are, rather than just math help and a consistency check. Similarly, someone could accidentally write a utility function that drowns them in vinegar, and it would be nice if the decision-making apparatus noticed and didn’t go through with it.
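To illustrate the vinegar point, here’s a toy sketch (entirely my own, not something Peterson proposes) of an apparatus that does more than math help and a consistency check: it refuses to hand back a maximizing act that trips a sanity constraint the user almost certainly holds, treating that as evidence the stated utilities were written wrong.

```python
# Toy sketch: flag the maximizing act if it violates a sanity check,
# on the theory that the utility function was probably mis-specified.

def recommend(utilities, sanity_checks):
    best = max(utilities, key=utilities.get)
    failed = [name for name, ok in sanity_checks.items() if not ok(best)]
    if failed:
        return None, f"'{best}' maximizes the stated utilities but fails: {failed}"
    return best, "ok"

# Hypothetical mis-specified utility function that ranks a disastrous act highest.
utilities = {"drink the tea": 3, "drown in vinegar": 7}
checks = {"not obviously self-destructive": lambda act: "vinegar" not in act}

print(recommend(utilities, checks))
# (None, "'drown in vinegar' maximizes the stated utilities but fails: ['not obviously self-destructive']")
```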
That’s my interpretation of “he’s just saying that it would also be nice to have a decision theory that can tell you what you should choose given what you believe and what you value.”
Actually, let me show you a section of Peterson (2009), which is an updated and (I think) clearer presentation of his axiomatic ex ante approach.
This looks like it boils down to “the utility of an act is the weighted sum of the utility of its consequences.” It’s not clear to me what good formulating it like that does, and I don’t like that axiom 4 from the 2009 version looks circular. (You’re allowed to adjust the utility of different equiprobable outcomes so long as the total utility of the act is preserved. But, uh, aren’t we trying to prove that we can calculate the utility of an act with multiple possible outcome utilities, and haven’t we only assumed that it works for acts with only one possible outcome utility?)
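Spelled out in standard notation (my own rendering, not Peterson’s), the claim and the adjustment move I’m describing look roughly like this:

```latex
% My rendering, not Peterson's notation. An act a has outcomes o_1,...,o_n
% occurring with probabilities p_1,...,p_n.
u(a) = \sum_{i=1}^{n} p_i \, u(o_i)
% The adjustment move for equiprobable outcomes (p_i = p_j): replace
% u(o_i), u(o_j) with u'(o_i), u'(o_j) provided
%   u'(o_i) + u'(o_j) = u(o_i) + u(o_j),
% which leaves u(a) unchanged; but that presupposes u(a) is already the
% weighted sum we were supposed to be deriving.
```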
Was Thm 4.1 an example of a useful result?