So your position isn’t so much “intransitive preferences are representable in utility functions” as it is “all preferences are transitive because we can always make them contingent on the choice offered”.
I think the point is that any decision algorithm, even one with intransitive preferences over world-states, can be described as optimization of a utility function. However, the objects to which utilities are assigned may be ridiculously complicated constructs rather than the things we think should determine our actions.
To show this is trivially true, take your decision algorithm and consider the utility function “1 for acting in accordance with this algorithm, 0 for not doing so”. Tim is giving an example where the construction doesn’t have to be this ridiculous, but it still has to be meta relative to object-level preferences.
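The trivial construction above can be sketched in a few lines. This is a purely illustrative toy (the function names and the cyclic preference table are my own assumptions, not anything from the discussion): wrap an arbitrary, possibly intransitive decision rule in an indicator utility, and maximizing that utility reproduces the rule exactly.

```python
def decision_algorithm(choice_set):
    """An arbitrary (and intransitive) decision rule, for illustration:
    the cyclic pattern Spain > Greece > Turkey > Spain."""
    beats = {("Spain", "Greece"), ("Greece", "Turkey"), ("Turkey", "Spain")}
    for option in choice_set:
        # Pick the option that beats every other option on offer.
        if all((option, other) in beats for other in choice_set if other != option):
            return option
    return min(choice_set)  # arbitrary tie-break when no option dominates


def indicator_utility(action, choice_set):
    """1 for acting in accordance with the algorithm, 0 for not doing so."""
    return 1 if action == decision_algorithm(choice_set) else 0


# Maximizing this utility reproduces the algorithm's choice exactly,
# even though the underlying rule is intransitive over destinations.
menu = ["Spain", "Greece"]
best = max(menu, key=lambda a: indicator_utility(a, menu))
assert best == decision_algorithm(menu)
```

The price of this move is visible in the code: the “utility function” is defined over actions-in-context and simply defers to the algorithm, rather than scoring the destinations themselves.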
Still (I say), if it’s less complicated to describe the full range of human behavior by an algorithm that doesn’t break down into utility function plus optimizer, then we’re better off doing so (as a descriptive strategy).
I think “circular preferences” is a useful concept, but I deny that it makes a utilitarian explanation impossible. See my A, B, C example of what are conventionally referred to as circular preferences, and then see how they can still be represented within a utilitarian framework.
This really is the conventional example of circular preferences—e.g. see:
“If you drive from San Jose to San Francisco to Oakland to San Jose, over and over again, you may have fun driving, but you aren’t going anywhere.”
http://lesswrong.com/lw/n3/circular_altruism/
“This almost inevitably leads to circular preferences wherein you prefer Spain to Greece, Greece to Turkey but Turkey to Spain.”—http://www.cparish.co.uk/cpapriover.html
Circular preferences in agents are often cited as something utilitarianism can’t deal with, but that claim is simply a fallacy.
I think there are two [ETA: three] distinct claims about apparently circular preferences that need to be (but are perhaps not always) adequately distinguished.
One is that apparently circular preferences are not able to be represented by a utility function. As Tim rightly points out, much of the time this isn’t really true: if you extend your state-descriptions sufficiently, they usually can be.
A different claim is that, even if they can be represented by a utility function, such preferences are irrational. Usually, the (implicit or explicit) argument here is that, while you could augment your state description to make the resulting preferences transitive, you shouldn’t do so, because the additional factors are irrelevant to the decision. Whether this is a reasonable argument or not depends on the context.
ETA:
Yet another claim is that circular preferences prevent you from building, out of a set of binary preferences, a utility function that could be expected to predict choice in non-binary contexts. If you prefer Spain from the set {Spain,Greece}, Greece from the set {Greece,Turkey}, and Turkey from the set {Turkey,Spain}, then there’s no telling what you’ll do if presented with the choice set {Spain,Greece,Turkey}. If you instead preferred Spain from the final set {Spain,Turkey} (while maintaining your other preferences), then it’s a pretty good bet that you’ll also prefer Spain from {Spain,Greece,Turkey}.
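The point above can be made concrete with a small sketch (an illustrative construction of my own, not anything proposed in the thread): if outcomes are indexed by the menu offered, the cyclic binary choices are representable by a utility function, yet that function is silent about the three-way menu.

```python
# Observed binary choices, as (chosen, rejected) pairs: the cycle above.
choices = {("Spain", "Greece"), ("Greece", "Turkey"), ("Turkey", "Spain")}

# No utility over destinations alone can represent this cycle, but a
# utility over (destination, menu) pairs can: the chosen option gets
# utility 1 in its menu, the rejected option gets utility 0.
utility = {}
for chosen, rejected in choices:
    menu = frozenset({chosen, rejected})
    utility[(chosen, menu)] = 1
    utility[(rejected, menu)] = 0

# Maximizing this menu-indexed utility reproduces every binary choice...
for chosen, rejected in choices:
    menu = frozenset({chosen, rejected})
    assert max(menu, key=lambda x: utility[(x, menu)]) == chosen

# ...but it predicts nothing about the menu {Spain, Greece, Turkey}:
# no entry for that menu appears in the table at all.
assert ("Spain", frozenset({"Spain", "Greece", "Turkey"})) not in utility
```

This is exactly the trade-off under discussion: menu-indexing rescues the utility representation for the binary data, at the cost of any predictive reach into non-binary choice sets.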
Which pretty much mauls the definition of transitive beyond recognition.