We can’t even specify most of our top-node values with any kind of precision or accuracy—why should we care if (a) they change or (b) a world that we personally do not live in becomes optimized for other values?
Where you don’t have any preference, you have indifference, and you are not indifferent all around. There is plenty of content to your values. Uncertainty and indifference are no foes to accuracy; they can be captured as precisely as any other concept.
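To make the precision point concrete, here is a minimal sketch of one way to formalize it, assuming a toy expected-utility model; the outcomes, utilities, and probabilities below are invented for illustration and are not drawn from this discussion:

```python
# Toy expected-utility model (illustrative names and numbers only).
# Indifference and uncertainty both have exact representations.
outcomes = ["A", "B", "C"]

# Indifference between A and B is captured exactly by equal utilities.
utility = {"A": 1.0, "B": 1.0, "C": 0.0}

# Uncertainty about which outcome obtains is captured exactly by a
# probability distribution over the outcomes.
belief = {"A": 0.5, "B": 0.3, "C": 0.2}

# Expected utility combines both without any loss of precision in stating them.
expected_utility = sum(belief[o] * utility[o] for o in outcomes)
print(expected_utility)  # -> 0.8
```

Nothing about being indifferent (equal utilities) or uncertain (a spread-out distribution) prevents the preference from being written down exactly.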
Whether “you don’t personally live” in the future is one property of the future to consider: would you like that property to hold? An uncaring future won’t have you living in it, but a future that holds your values will try to arrange something at least as good, or rather much better.
Also see Belief in the Implied Invisible. What you can’t observe is still there, and still has moral weight.
As Poincaré said, “Every definition implies an axiom, since it asserts the existence of the object defined.” You can call a value a “single criterion that doesn’t tolerate exceptions and status quo assumptions”—but it’s not clear to me that I even have values, in that sense.
Of course, I will believe in the invisible, provided that it is implied. But why is it, in this case?
You also speak of the irrelevance (in this context) of the fact that these values might not even be feasibly computable. Or, even if we can identify them, there may be no feasible way to preserve them. But you’re talking about moral significance. Maybe we differ, but to me there is no moral significance attached to the destruction of an uncomputable preference by a course of events that I can’t control.
It might be sad/horrible to live to see such days (if only by definition: as above, if one can’t compute one’s top-node values, then one may not be able to compute how horrible it would be), as you say. It also might not. Although I can’t speak personally for the values of a Stoic, they might be happy to… well, be happy.