> and also other values for which I don’t care about how much they’d happen to drift.
Hum. In what way can you be said to have these values then? Maybe these are un-endorsed preferences? Do you have a specific example?
Off the top of my head, right now I value things such as nature, literature, sex, democracy, the rule of law, the human species and so on, but if my descendants had none of those things and had replaced them with something totally different and utterly incomprehensible, that’d be fine with me as long as they were happy and didn’t suffer much.
If I said that some of these were instrumental preferences, and some of these were weak preferences, would that cover it all?
Some are instrumental yes, though I guess that for “weak preferences”, it would be more accurate to say that I value some things for my own sake rather than for their sake. That is, I want to be able to experience them myself, but if others find them uninteresting and they vanish entirely after I’m gone, that’s cool.
(There has to be some existing standard term for this.)
That doesn’t sound complicated or mysterious at all—you value these for yourself, but not necessarily for everyone. So if other people lack these values, then that’s not far from your initial values, but if you lack them, then it is far.
This seems to remove the point of your initial answer?
Well, that depends on how you choose the similarity metric. Like, if you define “the distance between Kaj’s values and Stuart’s values” as the Jaccard distance between the two sets of values, then you could push that distance toward its maximum just by adding values I have but you don’t, or vice versa. So if you happened to lack a lot of my values, then our values would count as far apart.
Jaccard distance probably isn’t a great choice of metric for this purpose, but I don’t know what a good one would be.
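To make that concrete, here’s a minimal sketch (with made-up value sets, not anyone’s actual values) of how padding one side’s set with values the other side lacks drives the Jaccard distance toward its maximum of 1, even though the shared values are untouched:

```python
def jaccard_distance(a: set, b: set) -> float:
    """Jaccard distance: 1 - |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# Hypothetical value sets, purely for illustration.
kaj = {"nature", "literature", "democracy", "rule of law"}
stuart = {"nature", "democracy"}

print(jaccard_distance(kaj, stuart))  # 1 - 2/4 = 0.5

# Add values that only one party holds, and the distance creeps toward 1
# without anything about the shared values changing.
kaj |= {f"idiosyncratic value {i}" for i in range(100)}
print(jaccard_distance(kaj, stuart))  # 1 - 2/104 ≈ 0.98
```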
If we make the (false) assumption that we both have utility/reward functions, and define E_U(V) as the expected value of utility V when a U-maximiser is in charge (i.e. optimising U), then we can measure the distance from utility U to utility V as d(U,V) = E_U(U) - E_V(U).
This is non-symmetric and doesn’t obey the triangle inequality, but it is a very natural measure—it represents the cost to U to replace a U-maximiser with a V-maximiser.
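As a toy illustration (with invented outcomes and payoffs, and deterministic choices so the expectations collapse to plain maximisation over a finite outcome set):

```python
outcomes = ["status quo", "more nature", "more literature"]

# Hypothetical utility functions over the outcomes, for illustration only.
U = {"status quo": 0.0, "more nature": 1.0, "more literature": 0.2}
V = {"status quo": 0.0, "more nature": 0.1, "more literature": 1.0}

def best_outcome_for(utility):
    """The outcome a maximiser of this utility function would pick."""
    return max(outcomes, key=lambda o: utility[o])

def d(u, v):
    """Cost to u of replacing a u-maximiser with a v-maximiser."""
    return u[best_outcome_for(u)] - u[best_outcome_for(v)]

print(d(U, V))  # 1.0 - 0.2 = 0.8: U loses a lot under a V-maximiser
print(d(V, U))  # 1.0 - 0.1 = 0.9: and vice versa, but not by the same amount
```

Note that d(U,V) and d(V,U) differ in general, which is the non-symmetry mentioned above.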