With respect, that doesn’t seem to meet my request. Like Cyan, I’m tempted to conclude that you are using a non-standard definition of “utility function”.
ETA: Oh, wait… perhaps I’ve misunderstood you. Are you trying to say that you can represent these preferences with a function that assigns: u(A:B)>u(x:B) for x in {B,C}; u(B:C)>u(x:C) for x in {A,C} etc? If so, then you’re right that you can encode these preferences into a utility function; but you’ve done so by redefining things such that the preferences no longer violate transitivity; so Cyan’s original point stands.
Cyan claimed some agent’s behaviour corresponded to intransitive preferences. My example is the one that is most frequently given as an example of circular preferences. If this doesn’t qualify, then what behaviour are we talking about?
What is this behaviour pattern that supposedly can’t be represented by a utility function due to intransitive preferences?
Suppose I am in Alaska. If told I can either stay or go to Boston, I choose to stay. If told I can either stay or go to California, I choose California. If told I must leave for either Boston or California, I choose Boston. These preferences are intransitive, and AFAICT, cannot be represented by a utility function. To do so would require u(A:A)>u(B:A)>u(C:A)>u(A:A).
More generally, it is true that one can often redefine states of the world such that apparently intransitive preferences can be rendered transitive, and thus amenable to a utility representation. Whether it’s wise or useful to do so will depend on the context.
You are not getting this :-( You have just given me a description of the agent's preferences. From there you are not far from an algorithm that describes them.
Your agent just chooses differently depending on the options it is presented with. Obviously, the sense data relating to what it was told about its options is one of the inputs to its utility function—something like this:
If O=(A,C) then u(C)=1; else if O=(B,C) then u(B)=1.
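To make that concrete, here is a minimal Python sketch of the idea (purely illustrative; the function names and encoding are invented here, not quoted from anyone above). It treats the offered choice set as an input to the utility function and reproduces the stay/Boston/California behaviour for the pairwise choice sets:

    # Illustrative sketch: a "utility function" whose input includes the
    # offered choice set, reproducing the behaviour described above.
    # A = stay in Alaska, B = go to Boston, C = go to California.

    def utility(option, choice_set):
        """Utility of an option, conditional on the set of options offered."""
        preferred = {
            frozenset({"A", "B"}): "A",  # offered Alaska or Boston -> stay
            frozenset({"A", "C"}): "C",  # offered Alaska or California -> California
            frozenset({"B", "C"}): "B",  # offered Boston or California -> Boston
        }
        return 1 if preferred.get(frozenset(choice_set)) == option else 0

    def choose(choice_set):
        """Pick the highest-utility option given this choice set."""
        return max(choice_set, key=lambda o: utility(o, choice_set))

    assert choose({"A", "B"}) == "A"
    assert choose({"A", "C"}) == "C"
    assert choose({"B", "C"}) == "B"

As written it covers only the three pairwise sets; anything larger would need further entries in the table.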
Sure, you can do that (though you’ll also need to specify what happens when O=(A,B,C) or any larger set of options, which will probably get pretty cumbersome pretty quickly). But the resulting algorithm doesn’t fall within the standard definition of a utility function, the whole point of which is to enable us to describe preferences without needing to refer to a specific choice set.
If you want to use a different definition of “utility function” that’s fine. But you should probably (a) be aware that you’re departing from the standard technical usage, and (b) avoid disputing claims put forward by others that are perfectly valid on the basis of that standard technical usage.
P.S. Just because someone disagrees with you, doesn’t mean they don’t get it. ;)
A utility function just maps states down to a one-dimensional spectrum of utility.
That is a simple-enough concept, and I doubt it is the source of disagreement.
The difference boils down to what the utility function is applied to. If the inputs to the utility function are “Alaska”, “Boston” and “California”, then a utilitarian representation of circular driving behaviour is impossible.
However, in practice, agents know more than just what they want. They know what they have got. Also, they know how bored they are. So, expanding the set of inputs to the utility function to include other aspects of the agent’s state provides a utilitarian resolution. This does not represent a non-standard definition or theory—it is just including more of the agent’s state in the inputs to the utility function.
I agree with the substance of everything you have just said, and maintain that the only real point on which we disagree is whether the standard technical usage of “utility function” allows the choice set to be considered as part of the state description.
Anything else you want to include, go for it. But I maintain that, while it is clearly formally possible to include the choice set in the state description, this is not part of standard usage, and therefore, your objection to Cyan’s original comment (which is a well-established result based on the standard usage) was misplaced.
I have no substantive problem in principle with including choice sets in the state description; maybe the broader definition of “utility function” that encompasses this is even a “better” definition.
ETA: The last sentence of this comment previously said something like “but I’m not sure what you gain by doing so”. I thought I had managed to edit it before anyone would have seen it, but it looks like Tim’s response below was to that earlier version.
ETA2: On further reflection, I think it’s the standard definition of transitive in this context that excludes the choice set from the state description, not the definition of utility function. Which I think basically gets me to where Cyan was some time ago.
You get to model humans with a utility function, for one thing. Modelling human behaviour is a big part of the point of utilitarian models—and human decisions really do depend on the range of choices they are given in a weird way that can’t be captured without this information.
Also, the formulation is neater. You get to write u(state), instead of u(state minus a bunch of things which are to be ignored).
Fair enough. Unfortunately you also gain confusion from people using terms in different ways, but we seem to have made it to roughly the same place in the end.
This is a quibble, and I guess it kind of depends what you mean by neater, but this claim strikes me as odd. Any actual description of (state including choice set) is going to be more complicated than the corresponding description of (state excluding choice set). Indeed, I took that to be part of your original point: you can represent almost anything if you’re willing to complicate the state descriptions sufficiently.
I mean you can say that the agent’s utility function takes as its input its entire state—not some subset of it. The description of the entire state is longer, but the specification of what is included is shorter.
So your position isn’t so much “intransitive preferences are representable in utility functions” as it is “all preferences are transitive because we can always make them contingent on the choice offered”.
I think the point is that any decision algorithm, even one which has intransitive preferences over world-states, can be described as optimization of a utility function. However, the objects to which utility is assigned may be ridiculously complicated constructs rather than the things we think should determine our actions.
To show this is trivially true, take your decision algorithm and consider the utility function “1 for acting in accordance with this algorithm, 0 for not doing so”. Tim is giving an example where it doesn’t have to be this ridiculous, but still has to be meta compared to object-level preferences.
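A minimal sketch of that trivial construction (purely illustrative; the names are invented):

    # Illustrative sketch: wrap any decision algorithm in a "utility function"
    # that scores 1 for whatever action the algorithm would take in a given
    # situation, and 0 for anything else. Maximising this utility just
    # reproduces the original algorithm, which is why the construction is trivial.

    def make_trivial_utility(decision_algorithm):
        def utility(action, situation):
            return 1 if action == decision_algorithm(situation) else 0
        return utility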
Still (I say), if it’s less complicated to describe the full range of human behavior by an algorithm that doesn’t break down into utility function plus optimizer, then we’re better off doing so (as a descriptive strategy).
I think “circular preferences” is a useful concept—but I deny that it means that a utilitarian explanation is impossible. See my A, B, C example of what are conventionally referred to as being circular preferences—and then see how that can still be represented within a utilitarian framework.
This really is the conventional example of circular preferences—e.g. see:
“If you drive from San Jose to San Francisco to Oakland to San Jose, over and over again, you may have fun driving, but you aren’t going anywhere.”
http://lesswrong.com/lw/n3/circular_altruism/
“This almost inevitably leads to circular preferences wherein you prefer Spain to Greece, Greece to Turkey but Turkey to Spain.”—http://www.cparish.co.uk/cpapriover.html
Circular preferences in agents are often cited as something utilitarianism can’t deal with—but that claim is simply a fallacy.
I think there are two [ETA: three] distinct claims about apparently circular preferences that need to be (but are perhaps not always) adequately distinguished.
One is that apparently circular preferences are not able to be represented by a utility function. As Tim rightly points out, much of the time this isn’t really true: if you extend your state-descriptions sufficiently, they usually can be.
A different claim is that, even if they can be represented by a utility function, such preferences are irrational. Usually, the (implicit or explicit) argument here is that, while you could augment your state description to make the resulting preferences transitive, you shouldn’t do so, because the additional factors are irrelevant to the decision. Whether this is a reasonable argument or not depends on the context.
ETA:
Yet another claim is that circular preferences prevent you from building, out of a set of binary preferences, a utility function that could be expected to predict choice in non-binary contexts. If you prefer Spain from the set {Spain,Greece}, Greece from the set {Greece,Turkey}, and Turkey from the set {Turkey,Spain}, then there’s no telling what you’ll do if presented with the choice set {Spain,Greece,Turkey}. If you instead preferred Spain from the final set {Spain,Turkey} (while maintaining your other preferences), then it’s a pretty good shot you’ll also prefer Spain from {Spain,Greece,Turkey}.
Which pretty much mauls the definition of transitive beyond recognition.
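To put the Spain/Greece/Turkey point above in concrete terms (a purely illustrative sketch; the function name is invented): with the cyclic pairwise preferences there is no single ranking, and hence no fixed u(Spain), u(Greece), u(Turkey), consistent with all three binary choices, whereas breaking the cycle leaves exactly one.

    # Illustrative sketch: enumerate orderings (best first) consistent with a
    # set of pairwise preferences. A cycle leaves no consistent ordering, so
    # no choice-set-free utility assignment can reproduce the binary choices.

    from itertools import permutations

    def consistent_rankings(options, pairwise_prefs):
        """All orderings of options that respect every (better, worse) pair."""
        return [order for order in permutations(options)
                if all(order.index(better) < order.index(worse)
                       for better, worse in pairwise_prefs)]

    destinations = ["Spain", "Greece", "Turkey"]
    cyclic = [("Spain", "Greece"), ("Greece", "Turkey"), ("Turkey", "Spain")]
    acyclic = [("Spain", "Greece"), ("Greece", "Turkey"), ("Spain", "Turkey")]

    print(consistent_rankings(destinations, cyclic))   # [] -- nothing fits
    print(consistent_rankings(destinations, acyclic))  # [('Spain', 'Greece', 'Turkey')]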