I’ve always thought the problem with the real world is that we cannot really optimize for anything in it, exactly because it is so messy and entangled.
I seem to have lexicographic preferences for quite a lot of things that cannot be sold, bought, or exchanged. For example, I would always prefer having one true friend to any number of moderately intelligent ardent followers. And I would always prefer an FAI to any number of human-level friends. It is not a difference in some abstract “quantity of happiness” that produces such preferences; those are qualitatively different life experiences.
Since I do not really know how to optimize for any of this, I’m not willing to reject human-level friends and even moderately intelligent ardent followers that come my way. But if I’m given a choice, it’s quite clear what my choice will be.
I don’t want to be rude, but your first example in particular looks like a case where it’s beneficial to signal lexicographic preferences.
Since I do not really know how to optimize for any of this
What do you mean, you don’t know how to optimise for this! If you want an FAI, then donating to SIAI almost certainly does more good than doing nothing (even if they aren’t as effective as they could be, they almost certainly don’t have zero effectiveness; if you think they have negative effectiveness, then you should be persuading others not to donate). Any time spent acquiring or spending time with true friends would be better spent on earning money to donate (or on persuading others not to donate), if your preferences are truly lexicographic. This is what I mean when I say that in the real world, lexicographic preferences just cash out as not caring about the bottom at all.
You’ve also confused the issue by talking about personal preferences, which tend to be non-linear, rather than interpersonal ones. It may well be that the value of both ardent followers and true friends suffers diminishing returns as you get more of them, and probably tends towards an asymptote. The real question is not “do I prefer an FAI to any number of true friends” but “do I prefer a single true friend to any chance of an FAI, however small”, to which the answer, for me at least, seems to be no.
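To make the disagreement concrete, here is a minimal sketch in Python with purely made-up utility numbers (nothing here reflects anyone’s actual values): under lexicographic comparison, any nonzero chance of the top-ranked outcome beats everything ranked below it, whereas under an ordinary utility with diminishing returns a small enough chance of FAI loses to a single true friend.

```python
import math

# Outcomes are (p_fai, n_friends, n_followers) tuples, ranked in that order.
# Python's tuple comparison is already lexicographic: any difference in an
# earlier coordinate dominates all later ones.
def lexicographic_prefers(a, b):
    return a > b

# A toy alternative: every term counts, but each saturates toward an asymptote.
# The weights (1000, 10, 1) are arbitrary illustrative numbers.
def diminishing_utility(p_fai, n_friends, n_followers):
    return (1000 * p_fai
            + 10 * (1 - math.exp(-n_friends))
            + 1 * (1 - math.exp(-n_followers / 10)))

# Lexicographic: a one-in-a-billion chance of FAI beats a million true friends.
print(lexicographic_prefers((1e-9, 0, 0), (0, 10**6, 10**6)))          # True

# Diminishing returns: the same tiny chance loses to a single true friend.
print(diminishing_utility(1e-9, 0, 0) < diminishing_utility(0, 1, 0))  # True
```

On this toy model, “cash out as not caring about the bottom at all” is just the first print statement: the lower coordinates never get a chance to matter.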