If the actual preference is neither acted upon nor believed in, how is it a preference?
It is something you won’t regret giving as a goal to an obsessive world-rewriting robot that takes what you state its goals to be very seriously and very literally, with no way for you to make corrections later. Most revealed preferences you will regret, precisely for the reasons they differ from your actual preferences: on reflection, you’ll find that you’d rather go with something different.
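Put roughly in symbols (just a sketch; "opt" and "Regret" here are informal placeholders, not a worked-out formalism):

\[
P \text{ is an actual preference} \iff \neg\,\mathrm{Regret}\big(\mathrm{opt}_P(\text{world})\big) \text{ on reflection},
\]

whereas a merely revealed preference \(P'\) is typically regretted precisely where \(\mathrm{opt}_{P'}\) and \(\mathrm{opt}_{P}\) come apart.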
See also this thread.
That definition may be problematic with respect to life-and-death decisions such as cryonics: once I am dead, I am not around to regret any decision, so any choice that leads to my death could not be considered bad.
For instance, I will never regret not having signed up for cryonics. I may, however, regret doing it if I am awakened in the future and my quality of life is too low. On the other hand, I am considering it out of sheer curiosity about the future. Thus, signing up would simply increase my current utility by giving me hope of more future utility. I just noticed that this makes the decision accessible to your definition of preference again, by posing the question to myself: “If I signed up for cryonics today, would I regret the [cost of the] decision tomorrow?”
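In rough expected-utility terms (an illustrative sketch only, with made-up symbols rather than real numbers):

\[
EU(\text{sign up}) \approx u_{\text{hope now}} + p_{\text{revival}} \cdot \mathbb{E}\!\left[u_{\text{future life}}\right] - c_{\text{signup}}, \qquad EU(\text{don't sign up}) \approx 0,
\]

where the regret question above only tests whether, tomorrow, \(u_{\text{hope now}}\) still looks worth \(c_{\text{signup}}\); the \(p_{\text{revival}} \cdot \mathbb{E}[u_{\text{future life}}]\) term is exactly the part I would never be around to regret if revival does not happen.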
This, however, is not the usual meaning of the term “preference.” In standard usage, the word refers to one’s favored option in a given set of available alternatives, not to the hypothetical most favorable physically possible state of the world (which, as you correctly note, is unlikely to be readily imaginable). If you insist on using the term with this meaning, fair enough; it’s just that your claims sound confusing when you don’t include an explanation of your non-standard usage.
That said, one problem I see with your concept of preference is that, presumably, the actions of the “obsessive world-rewriting robot” are supposed to modify the world around you to make it consistent with your preferences, not to modify your mind to make your preferences consistent with the world. However, it is not at all clear to me whether a meaningful boundary between these two sorts of actions can be drawn.
Preference in this sense is a rigid designator, defined over the world but not determined by anything in the world, so modifying my mind couldn’t make my preference consistent with the world; a robot implementing my preference would have to understand this.