I think I figured out a way to incorporate value drift into your framework: “I value the kind of future that is well-liked by the people actually living in it, as long as they arrived at their likes and dislikes by honest value drift starting from us”. Do you think anyone making such a statement is wrong?
“I value the kind of future that is well-liked by the people actually living in it...”
If what you value happens to coincide with what future people value, then the future people are in effect stipulated to have the same values as you do. You don’t need the disclaimer about “honest value drift”, and there is actually no value drift.
If there is genuine value drift, then after long enough you won’t like the same situations the future people like. And if you postulate that you only care about the pattern of future people liking their situation, not about any other properties of that situation, you are embracing a fake simplified preference, much like people who claim that they only value happiness or lack of suffering.
What is “honest value drift”, and what’s good about it? Normally “value drift” makes me think of our axiology randomly losing information over time. The kind of future-with-different-values I’d value is one that has different instrumental values from mine (because I’m not omniscient and my terminal values may not be completely consistent) but is better optimized according to my actual terminal values, presumably because that future society will have gotten better at unmuddling its terminal values, will know enough to agree more on instrumental values, and will have negotiated any remaining disagreements about terminal values.
It’s a start. But note that your criterion is trivially satisfied by a world that has no one living in it.