I, for one, have “terminal value” for traveling back in time and riding a dinosaur, in the sense that worlds consistent with that event are ranked above most others. Now, of course, the realization of that particular goal is impossible, but possibility is orthogonal to preference.
The fact is, most things are impossible, but there’s nothing wrong with having a general preference ordering over a superset of the set of physically possible worlds. Likewise, my probability distributions are over a superset of the actually physically possible outcomes.
When all the impossible things get eliminated and we move on like good rationalists, there are still choices to be made, and some things are still better than others. If I have to choose between a universe containing a billion paperclips and a universe containing a single frozen dinosaur, my preference for ice cream over dirt is irrelevant, but I can still make a choice, and can still have a preference for the dinosaur (or the paperclips, whatever I happen to think is best).
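To put the point slightly more formally, here is a toy sketch in Python; the world names and utility numbers are made up for illustration, not anything from the thread. Eliminating the impossible just restricts the same ordering to a smaller set, and the restriction still picks a winner.

```python
# Toy illustration with made-up world names and utilities: a preference
# ordering defined over a superset of worlds, some physically impossible.
preferences = {
    "riding a dinosaur in the past": 100,  # impossible, but still ranked
    "a single frozen dinosaur": 10,
    "a billion paperclips": 5,
    "ice cream": 3,
    "dirt": 1,
}

# Only some of those worlds survive once the impossible is eliminated.
physically_possible = {"a single frozen dinosaur", "a billion paperclips"}

def choose(preferences, feasible):
    """Restrict the ordering to the feasible worlds and pick the top one."""
    return max(feasible, key=preferences.get)

print(choose(preferences, physically_possible))
# -> 'a single frozen dinosaur': the ordering didn't dissolve, it just got restricted.
```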
I actually don’t know what you even mean by my values dissolving, though. Sometimes I learn things that change how I would make choices. Maybe some day I will learn something that turns me into a nihilist such that I would prefer to wail about the meaninglessness of all my desires, but it seems unlikely.
When all the impossible things get eliminated and we move on like good rationalists, there are still choices to be made, and some things are still better than others. If I have to choose between a universe containing a billion paperclips and a universe containing a single frozen dinosaur, my preference for ice cream over dirt is irrelevant, but I can still make a choice, and can still have a preference for the dinosaur (or the paperclips, whatever I happen to think is best).
In contrast to this comment’s sister comment, I don’t think this addresses the question. Instead, it describes what things are like when the premise of the question doesn’t hold.
Actually, the converse of the answer provides some suggestion as to what it would be like if all of our values were found to be nonsensical...
It would mean finding ourselves indifferent to all choices: with the impossible eliminated, we would be indifferent among all the choices that remain possible.
We might find that we keep on making meaningless choices out of something a bit stronger than ‘habit’ (which is how I judge the universe we’re in), or we might have the ability to rationally update our instrumental values in light of our voided terminal values (for example, if we were able to edit our own programs), so that we would not bother to make any choices after all.
This is really not so far-fetched, and it is not too difficult to come up with examples. Suppose a person had a terminal goal of eating healthily. Each morning they make choices between eggs and oatmeal, etc. Then they discover they are actually a robot that draws energy from the environment automatically, so eating is not necessary after all. If all they cared about was eating healthily, that is, optimizing their physical well-being, and they then discovered there was no connection between eating and health, they should lose all interest in any choices about food. They would have no preference to eat, or not to eat, or about what they ate. (Unless you appeal to another, new terminal value.)
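A toy sketch of the same point (again with made-up foods and numbers, purely illustrative): if the only thing being scored is health, and eating no longer affects health, every option gets the same score, so there is nothing left to prefer.

```python
# Made-up sketch of the robot example: the agent only scores health effects.
def best_options(health_effect_of):
    """Return the set of food choices with maximal (believed) health effect."""
    top = max(health_effect_of.values())
    return {food for food, effect in health_effect_of.items() if effect == top}

# Before the discovery: foods are believed to differ in their health effects.
believed = {"eggs": 2, "oatmeal": 3, "skip breakfast": 0}
print(best_options(believed))   # -> {'oatmeal'}: a genuine preference

# After discovering that eating has no connection to health, every option
# has the same effect, so the scoring no longer distinguishes between them.
actual = {"eggs": 0, "oatmeal": 0, "skip breakfast": 0}
print(best_options(actual))     # -> all three options tie: indifference
```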
Another example: a person cares very much about their family and must decide between spending money on an operation for their child or on food for the whole family. Then the person wakes up and finds that the entire scenario was just a dream; they don’t have a family. Even if, while awake, they think about it a little longer and can decide what would have been the best action to take, they no longer have much preference (if any) about what action they chose in the dream. In fact, any remaining preference would stem from lingering feelings that the dream was real, or mattered in some respect, which only shows the limitations of this example.
When all the impossible things get eliminated and we move on like good rationalists, there are still choices to be made, and some things are still better than others. If I have to choose between a universe containing a billion paperclips and a universe containing a single frozen dinosaur, my preference for ice cream over dirt is irrelevant, but I can still make a choice, and can still have a preference for the dinosaur (or the paperclips, whatever I happen to think is best).
I agree and think that this part sums up a good response to the above question.