So if other people lack these values, then that’s not far from your initial values, but if you lack them, then it is far.
Well, that depends on how you choose the similarity metric. Like, if you code “the distance between Kaj’s values and Stuart’s values” as the Jaccard distance between them, then you could push the distance between our values arbitrarily close to its maximum just by adding values I have but you don’t, or vice versa. So if you happened to lack a lot of my values, then our values would be far apart.
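To make the padding problem concrete, here is a minimal sketch of that failure mode. The value names and the two sets are purely illustrative assumptions, not anything from the discussion:

```python
# Jaccard distance between two finite sets of "values":
# 1 - |A ∩ B| / |A ∪ B|, which lies in [0, 1].
def jaccard_distance(a: set, b: set) -> float:
    union = a | b
    return 1 - len(a & b) / len(union) if union else 0.0

# Hypothetical value sets, chosen only for illustration.
kaj = {"honesty", "curiosity", "kindness"}
stuart = {"honesty", "rigour"}

d0 = jaccard_distance(kaj, stuart)  # 1 shared value, union of 4 -> 0.75

# Padding one side with values the other lacks drives the distance
# toward its maximum of 1, without touching any shared values.
kaj_padded = kaj | {f"extra_value_{i}" for i in range(100)}
d1 = jaccard_distance(kaj_padded, stuart)  # union of 104 -> ~0.99
```

Note the distance is bounded by 1, so the padding trick saturates the metric rather than growing it without limit; the underlying complaint stands either way.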
Jaccard distance probably isn’t a great choice of metric for this purpose, but I don’t know what a good one would be.
If we make the (false) assumption that we both have utility/reward functions, and let E_U(V) denote the expected value of utility function V when a U-maximiser is acting, then we can measure the distance from utility U to utility V as d(U,V) = E_U(U) − E_V(U).
This is non-symmetric and doesn’t obey the triangle inequality, but it is a very natural measure: it represents the cost to U of replacing a U-maximiser with a V-maximiser.
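The measure can be illustrated on a toy deterministic world, where a maximiser of X simply picks the outcome with the highest X-value. The outcome labels and utility numbers below are made up for illustration:

```python
# Sketch of d(U,V) = E_U(U) - E_V(U) on a toy deterministic world.
# Each utility function is a dict mapping outcomes to values; a
# "maximiser" of X picks the outcome with the highest X-value.
def d(U: dict, V: dict) -> float:
    """Cost to U of replacing a U-maximiser with a V-maximiser."""
    best_for_U = max(U, key=U.get)   # what a U-maximiser picks
    best_for_V = max(V, key=V.get)   # what a V-maximiser picks
    # Both terms are measured in U's units.
    return U[best_for_U] - U[best_for_V]

# Illustrative utilities over two outcomes.
U = {"a": 10, "b": 0}
V = {"a": 9, "b": 10}

d(U, V)  # V-maximiser picks "b", costing U: 10 - 0 = 10
d(V, U)  # U-maximiser picks "a", costing V: 10 - 9 = 1
```

The asymmetry shows up directly: replacing a U-maximiser with a V-maximiser costs U a lot here, while the reverse swap costs V almost nothing, and d(U,U) is always zero.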