Nevertheless, it seems possible to say that some values are further from this undefined starting point than others (paperclippers are very far, money-maximisers quite far, situations where recognisably human beings do recognisably human stuff are much closer).
Whether a value system recommends creating humans doing human stuff depends not just on the value system but also on the relative costs of creating humans doing human stuff versus creating other good things. So it seems like defining value distance requires either making some assumptions about the underlying universe, or somehow measuring the distance between utility functions and not just the distance between recommendations. Maybe you’d end up with something like “if a hedon is a thousand times cheaper than a unit of eudaimonia and the values recommend using the universe’s resources for hedons, that means the values are very distant from ours, but if a hedon is a million times cheaper than a unit of eudaimonia and the values recommend using the universe’s resources for hedons, the values could still be very close to ours”.
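To make the cost-dependence concrete, here's a minimal sketch with purely illustrative, assumed numbers: a value system that prices one unit of eudaimonia at 10,000 hedons (i.e. values fairly close to ours) ends up recommending eudaimonia at a 1,000:1 cost ratio but hedons at a 1,000,000:1 cost ratio, even though the utility function itself hasn't changed.

```python
# Illustrative sketch only: the same linear utility function can recommend
# "spend everything on hedons" or "spend everything on eudaimonia" depending
# solely on the relative cost of producing each. All numbers are assumed.

def best_use_of_resources(eudaimonia_value_in_hedons, hedon_cost, eudaimonia_cost):
    """Which good does a simple linear utility function buy with its resources?"""
    hedons_per_unit_resource = 1 / hedon_cost
    eudaimonia_utility_per_unit_resource = eudaimonia_value_in_hedons / eudaimonia_cost
    return ("eudaimonia"
            if eudaimonia_utility_per_unit_resource >= hedons_per_unit_resource
            else "hedons")

# Assumed value system: one unit of eudaimonia is worth 10,000 hedons.
# Hedons a thousand times cheaper -> still buys eudaimonia.
print(best_use_of_resources(10_000, hedon_cost=1, eudaimonia_cost=1_000))      # eudaimonia
# Hedons a million times cheaper -> the very same values now buy hedons.
print(best_use_of_resources(10_000, hedon_cost=1, eudaimonia_cost=1_000_000))  # hedons
```

So seeing only the recommendation ("the universe gets used for hedons") doesn't tell you how far the underlying values are from ours unless you also know the cost ratio.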