prefers more X to less X. Instead of having a utility function they have a system of comparing quantities of X, Y, and Z.
Looks like an example might help you to connect this to what I was talking about.
Imagine sqrt(X). Normally people just pick the positive square root or the negative square root—but imagine the whole thing, the parabola-turned-sideways, the thing that isn’t a function.
Now. Is it a valid question to ask whether sqrt(5) is greater or less than sqrt(6)?
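To make that concrete, here is a small sketch (Python, the names are my own) of sqrt treated as a relation rather than a function. Whether "sqrt(5) < sqrt(6)" comes out true or false depends entirely on which branch you silently picked:

```python
def both_roots(x):
    """Return both square roots of x: the sideways parabola, not a function."""
    r = x ** 0.5
    return (r, -r)

# Is sqrt(5) less than sqrt(6)? Check every pairing of branches.
outcomes = {r5 < r6 for r5 in both_roots(5) for r6 in both_roots(6)}
print(outcomes)  # contains both True and False
```

The comparison has no branch-free answer, which is the sense in which the question is invalid.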
-
What a decision-maker with circular preferences can have is local preferences—they can make a single decision at any one moment. But they can’t know how they’d feel about some hypothetical situation unless they also knew how they’d get there. Which sounds strange, I know, because it seems like they should feel sad about coming back to the same place over and over, with slowly diminishing amounts of X. But that’s anthropomorphism talking.
Why would they make that single decision to trade X for Y based on the comparison between X and Y instead of the comparison between X and the less X they know they’ll get? I’m saying that at that moment they want Y more than X, but they also want more X more than less X. And they know what kind of decisions they will make in the future.
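The "same place with slowly diminishing X" picture is a toy money pump, which can be sketched like this (the trade sizes and the fee in X are my own assumptions, not from the text):

```python
# A hypothetical agent with circular local preferences: at any moment it
# prefers Y to X, Z to Y, and X to Z, so it happily accepts each trade below.
# Every trade charges a small fee in X.

def trade(holdings, give, get, fee_x=1.0):
    """Swap one unit of `give` for one unit of `get`, paying a fee in X."""
    holdings[give] -= 1
    holdings[get] += 1
    holdings["X"] -= fee_x
    return holdings

h = {"X": 100.0, "Y": 1, "Z": 1}
for _ in range(10):          # ten trips around the cycle
    trade(h, "X", "Y")       # locally prefers Y to X
    trade(h, "Y", "Z")       # locally prefers Z to Y
    trade(h, "Z", "X")       # locally prefers X to Z
print(h)  # same Y and Z as before, but 30 units of X poorer
```

Each individual trade is locally preferred, yet ten cycles later the agent holds exactly what it started with minus some X, which is what the anthropomorphic "they should feel sad" intuition latches onto.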
Now I actually think both positions are answers to an invalid question.
Did you know that the concept of utility was originally used to describe preferences over choices, not states of the world? Might be worth reading up on. One important idea is that you can write preferences entirely in a language of what decisions you make, not how much you value different amounts of X. This is the language that circular preferences can be written in. Pop quiz: if your utility was the angle of a wheel (so, if it turns twice, your utility is 4 pi), could you prove that writing utility in terms of the physical state of the wheel breaks down, but that you can still always have preferences over choices?
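A sketch of the pop quiz (the framing below is mine): two histories can leave the wheel in the exact same physical state while the total angle turned differs, so "utility of the physical state" is not well-defined; but a preference over choices never breaks, because at any state turning forward beats standing still:

```python
import math

# Utility = total angle turned. One full turn and two full turns leave the
# wheel in the same physical state (angle mod 2*pi) but differ in utility.
history_a = 2 * math.pi   # one full turn
history_b = 4 * math.pi   # two full turns

same_state = math.isclose(history_a % (2 * math.pi),
                          history_b % (2 * math.pi), abs_tol=1e-9)
print(same_state, history_a == history_b)  # same state, different utility

# A preference written over *choices* survives: prefer whichever choice
# adds more angle, regardless of the wheel's history.
def prefers(delta_a, delta_b):
    """True if choice A (adding delta_a radians) beats choice B."""
    return delta_a > delta_b

print(prefers(0.1, 0.0))  # always prefer turning forward
```

So writing utility as a function of the wheel's physical state breaks down, while the choice-based description goes through unchanged.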