Wikipedia:
However, it is possible for preferences not to be representable by a utility function. An example is lexicographic preferences which are not continuous and cannot be represented by a continuous utility function.
Has it been shown that this is not the case for dust specks and torture?
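For concreteness, here is a minimal sketch of the textbook example behind that claim (the lexicographic order on pairs of reals), with the standard non-representability argument in the comments; the notation is mine:

```latex
% Lexicographic order on $\mathbb{R}^2$: the first coordinate always dominates;
% the second only breaks exact ties.
(x_1, y_1) \succ (x_2, y_2)
  \iff x_1 > x_2 \ \text{or}\ \bigl(x_1 = x_2 \ \text{and}\ y_1 > y_2\bigr)

% Sketch of why no utility function (continuous or otherwise) represents this:
% if $u$ represented $\succ$, then for each $x$ the interval
% $\bigl(u(x,0),\, u(x,1)\bigr)$ would be non-empty, and these intervals would
% be pairwise disjoint for distinct $x$. Choosing a rational $q(x)$ in each
% interval gives an injection from $\mathbb{R}$ into $\mathbb{Q}$, which is a
% contradiction.
```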
In the real world, if you had lexicographic preferences you effectively wouldn’t care about the bottom level at all. You would always reject a chance to optimise for it, instead chasing the tiniest epsilon chance of affecting the top level. Lexicographic preferences are sometimes useful in abstract mathematical contexts where they can clean up technicalities, but would be meaningless in the fuzzy, messy actual world where there’s always a chance of affecting something.
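A toy sketch of that point in code (the numbers and option names are assumptions, picked only to make the comparison vivid):

```python
# Toy illustration of why lexicographic preferences ignore the bottom level
# whenever there is any chance at all of affecting the top level.
# Each option has a probability of improving the top-level goal and an
# expected amount of the bottom-level goal.

def lexicographic_choice(options):
    """Pick the option with the best chance of affecting the top level;
    the bottom level only breaks exact ties."""
    return max(options, key=lambda o: (o["p_top"], o["bottom"]))

options = [
    {"name": "tiny epsilon chance at the top level", "p_top": 1e-12, "bottom": 0.0},
    {"name": "huge, certain bottom-level gain", "p_top": 0.0, "bottom": 1e9},
]

print(lexicographic_choice(options)["name"])
# -> "tiny epsilon chance at the top level": any epsilon > 0 on the top level
#    beats any amount of the bottom level, so the bottom level never matters.
```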
I’ve always thought the problem with the real world is that we cannot really optimize for anything in it, exactly because it is so messy and entangled.
I seem to have lexicographic preferences for quite a lot of things that cannot be sold, bought, or exchanged. For example, I would always prefer having one true friend to any number of moderately intelligent ardent followers. And I would always prefer an FAI to any number of human-level friends. It is not a difference in some abstract “quantity of happiness” that produces such preferences; those are qualitatively different life experiences.
Since I do not really know how to optimize for any of this, I’m not willing to reject human-level friends and even moderately intelligent ardent followers that come my way. But if I’m given a choice, it’s quite clear what my choice will be.
I don’t want to be rude, but your first example in particular looks like somewhere where it’s beneficial to signal lexicographic preferences.
Since I do not really know how to optimize for any of this
What do you mean you don’t know how to optimise for this! If you want an FAI, then donating to SIAI almost certainly does more good than nothing (even if they aren’t as effective as they could be, they almost certainly don’t have zero effectiveness; if you think they have negative effectiveness, then you should be persuading others not to donate). If your preferences are truly lexicographic, any time spent acquiring or spending time with true friends would be better spent earning money to donate (or persuading others not to). This is what I mean when I say that in the real world, lexicographic preferences just cash out as not caring about the bottom level at all.
You’ve also confused the issue by talking about personal preferences, which tend to be non-linear, rather than interpersonal ones. It may well be that the value of both ardent followers and true friends suffers diminishing returns as you get more of them, and probably tends towards an asymptote. The real question is not “do I prefer an FAI to any number of true friends” but “do I prefer a single true friend to any chance of an FAI, however small”, and there the answer, for me at least, seems to be no.
I’m not sure how one could show such a thing in a way that can plausibly be applied to the Vast scale differences posited in the DSvT thought experiment.
When I try to come up with real-world examples of lexicographic preferences, it’s pretty clear to me that I’m rounding… that is, X is so much more important than Y that I can in effect neglect Y in any decision that involves a difference in X, no matter how much Y there is relative to X, for any values of X and Y worth considering.
But if someone seriously invites me to consider ludicrous values of Y (e.g., 3^^^3 dust specks), that strategy is no longer useful.
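A toy illustration of that rounding strategy and where it breaks (the disutility numbers below are pure assumptions, chosen only to show the shape of the argument):

```python
# If a speck is worth any finite fraction of a torture, however small, then
# some finite number of specks outweighs the torture. "Rounding specks to
# zero" works only as long as the number of specks stays below that threshold.

TORTURE_DISUTILITY = 1.0
SPECK_DISUTILITY = 1e-30      # "so much less important that I can neglect it"

def specks_outweigh_torture(n_specks):
    return n_specks * SPECK_DISUTILITY > TORTURE_DISUTILITY

print(specks_outweigh_torture(10**9))   # False: rounding works fine here
print(specks_outweigh_torture(10**40))  # True: for ludicrous N it fails
# 3^^^3 is unimaginably larger than 10**40, so no finite ratio survives it.
```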
I’m quite sure I’m not rounding when I prefer hearing a Wagner opera to hearing any number of folk dance tunes, and when I prefer reading a Vernor Vinge novel to hearing any number of Wagner operas. See also this comment for another example.
It seems that lexicographic preferences arise when one has a choice between qualitatively different experiences. In such cases, any differences in quantity, however vast, are just irrelevant. An experience of long, unbearable torture cannot be quantified in terms of minor discomforts.
It seems our introspective accounts of our mental processes are qualitatively different, then.
I’m willing to take your word for it that your experience of long unbearable torture cannot be “quantified” in terms of minor discomforts. If you wish to argue that mine can’t either, I’m willing to listen.
It is not a trivial task to define a utility function that could compare such incomparable qualia.