Some are instrumental yes, though I guess that for “weak preferences”, it would be more accurate to say that I value some things for my own sake rather than for their sake. That is, I want to be able to experience them myself, but if others find them uninteresting and they vanish entirely after I’m gone, that’s cool.
(There has to be some existing standard term for this.)
That doesn’t sound complicated or mysterious at all—you value these for yourself, but not necessarily for everyone. So if other people lack these values, then that’s not far from your initial values, but if you lack them, then it is far.
This seems to remove the point of your initial answer?
So if other people lack these values, then that’s not far from your initial values, but if you lack them, then it is far.
Well, that depends on how you choose the similarity metric. Like, if you code “the distance between Kaj’s values and Stuart’s values” as the Jaccard distance between them, then you could push the distance between our values arbitrarily close to its maximum by just adding values I have but you don’t, or vice versa. So if you happened to lack a lot of my values, then our values would be far.
Jaccard distance probably isn’t a great choice of metric for this purpose, but I don’t know what a good one would be.
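To make the inflation effect concrete, here’s a small sketch (my own illustration, with made-up value labels, not anything from the thread): if values are modelled as sets, padding one side with values the other lacks drives the Jaccard distance toward its maximum of 1 even though the shared values never change.

```python
def jaccard_distance(a: set, b: set) -> float:
    """1 - |A ∩ B| / |A ∪ B|: 0 for identical sets, 1 for disjoint ones."""
    union = a | b
    if not union:
        return 0.0  # convention: two empty value sets are identical
    return 1 - len(a & b) / len(union)

# Hypothetical value sets for illustration only.
kaj = {"friendship", "art", "exploration"}
stuart = {"friendship", "art"}
print(jaccard_distance(kaj, stuart))  # ≈ 0.33

# Padding one side with 100 weak preferences the other lacks
# pushes the distance toward 1, without touching the shared values.
kaj_padded = kaj | {f"weak_pref_{i}" for i in range(100)}
print(jaccard_distance(kaj_padded, stuart))  # ≈ 0.98
```

This is exactly why it seems like a poor metric here: a pile of weak, dispensable preferences counts for as much as a deep disagreement.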
If we make the (false) assumption that we both have utility/reward functions, and define E_U(V) as the expected value of utility function V when a U-maximiser is in charge (i.e. the agent maximises U, but we score the outcome by V), then we can measure the distance between utility U and V as d(U,V)=E_U(U)-E_V(U).
This is non-symmetric and doesn’t obey the triangle inequality, but it is a very natural measure—it represents the cost to U to replace a U-maximiser with a V-maximiser.
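A toy sketch of this measure (my own illustration, assuming deterministic maximisers over a small finite outcome set, so each expectation collapses to “the maximiser’s favourite outcome, scored by the other utility”):

```python
# Utilities are dicts mapping outcomes to values; a deterministic
# maximiser just picks its highest-valued outcome.

def best_outcome(utility: dict) -> str:
    return max(utility, key=utility.get)

def E(evaluated: dict, maximiser: dict) -> float:
    """Value of `evaluated` at the outcome a `maximiser`-agent chooses."""
    return evaluated[best_outcome(maximiser)]

def d(U: dict, V: dict) -> float:
    """Cost to U of replacing a U-maximiser with a V-maximiser."""
    return E(U, U) - E(U, V)

# Hypothetical outcomes for illustration only.
U = {"paint": 10, "mine": 9}   # U's favourite is "paint", but "mine" is nearly as good
V = {"paint": 0, "mine": 10}   # V's favourite is "mine"; "paint" is worthless to V

print(d(U, V))  # 1: U barely minds handing control to a V-maximiser
print(d(V, U))  # 10: but V loses a lot under a U-maximiser
```

The example also shows the non-symmetry directly: d(U,V)=1 while d(V,U)=10, because U happens to like V’s favourite outcome but not vice versa.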