The poll question takes the Axiom to be a normative principle, not a day-to-day recipe for every decision. I agree that the case for it as a normative principle is stronger than the case for it as a prescription. I just don’t think it’s a completely convincing case.
I agree with Wei Dai’s remark that the Axiom of Independence implies dynamic consistency, but not vice versa. If we were to replace the Axiom of Independence with some sort of Axiom of Dynamic Consistency, we would no longer be able to derive expected utility theory. (Similarly with Dutch book/money pump arguments: there are many ways to avoid them besides being an expected utility maximizer.)
If a Dutchman throws a book at you—duck! You don’t need to be the sort of agent to whom expected utility theory applies.
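For concreteness, here is a standard textbook statement of Independence (my notation, not necessarily the exact formulation the poll used): for all lotteries $A$, $B$, $C$ and all $p \in (0,1]$,

$$A \succeq B \iff pA + (1-p)C \succeq pB + (1-p)C.$$

Roughly, dynamic consistency only requires that the plan you’d choose ex ante agrees with what you actually do ex post; Independence additionally forces preferences to be linear in the mixing probability $p$, and it’s that linearity the expected-utility representation needs.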
The deep reason utility theory fails to be required by rationality is that there is no general separability between the decision process itself and the “outcomes” that agents care about. I’m putting “outcomes” in scare quotes because the term strongly suggests that what matters is the destination, not the journey (where the journey includes the decision process and features of it such as risk).
There are many particular occasions, at least for many agents (including me), on which there is such separability. That’s why I find expected utility theory useful. But rationally required? Not so much.
Here’s a toy version of the journey/destination problem. (I think I’m borrowing from Kaj Sotala, who probably said it better, but I can’t find the original.) Suppose I sell my convertible on Monday for $5000 and buy an SUV for $5010. On Tuesday I sell the SUV for $5000 and buy a Harley for $5010. On Wednesday I sell the Harley for $5000 and buy the original convertible back for $5010. Oh no, I’ve been money-pumped! Except, wait: I got to drive a different vehicle each day, something I enjoy. I’m out $30, but that might be a small price to pay for the privilege. This example doesn’t involve risk per se, but it does illustrate the care needed to define “outcomes” without begging the question against an agent’s values.
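To make the arithmetic explicit, here is a minimal Python sketch of the example (the prices come from the story; `enjoyment_per_day` is a hypothetical number I’m inventing for illustration): the agent is “pumped” for $30 in cash, but comes out ahead whenever the journey is worth more than $10 a day to them.

```python
# Minimal sketch of the vehicle-swap example above. The dollar figures come
# from the example; enjoyment_per_day is a made-up placeholder for the value
# of the journey itself (driving something different each day).

trades = [
    ("Monday", "convertible", "SUV"),
    ("Tuesday", "SUV", "Harley"),
    ("Wednesday", "Harley", "convertible"),
]

SALE_PRICE, PURCHASE_PRICE = 5000, 5010  # same prices each day in the example

cash = 0
for day, sold, bought in trades:
    cash += SALE_PRICE - PURCHASE_PRICE  # -$10 per swap
    print(f"{day}: sold the {sold}, bought the {bought}, running total ${cash}")

enjoyment_per_day = 20  # hypothetical: what a day in a different vehicle is worth

net_value = cash + enjoyment_per_day * len(trades)
print(f"net cash: ${cash}")        # -$30: the classic money-pump loss
print(f"net value: ${net_value}")  # positive whenever enjoyment exceeds $10/day
```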
Thanks for all of this; I wasn’t aware of any of these things.
The poll question takes the Axiom to be a normative principle, not a day to day recipe for every decision.
This may sound nitpicky, but poll questions don’t take anything to be anything; people do. I wonder whether your results will be skewed by people who actually make the mistake you didn’t make but that I thought you had made, or whether the poll will be ignored by people like me, who think they know more and that the question is silly, but who actually know less and don’t understand the question. I almost skipped the poll entirely, and would never have read your wonderful comment. Maybe you could add some elaboration in the OP, or suggest that voters read this thread? Not sure.
Sure, if more people answered the poll, there would probably be some who took the Axiom of Independence, and/or expected utility theory, in the way you worried about. It’s a fair point. But so far I’m the only skeptical vote.