But also, what is the criterion by which you would change your (extrapolated) preferences?
It would probably be a higher-order preference, like being more fair, more consistent, etc.
Which tells you that under “normal” circumstances you won’t prefer to change your preferences.
That would require a lot of supplementary assumptions. For instance, if I didn’t care about consistency, I wouldn’t revise my preferences to be more consistent. I might also “stick” if I cared about consistency and knew myself to be consistent. But how often does that happen?
My intuition is that if you have preferences over (the space of possible preferences over states of the world), that implicitly determines preferences over states of the world; call these “implicit preferences”. This is much like how a probability distribution over (the set of probability distributions over X) determines a probability distribution over X (though this might require X to be finite, or perhaps something weaker).
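To make the probability half of the analogy concrete (this is just the standard mixture identity, stated for finitely many candidate distributions to sidestep measure-theoretic caveats): if $\mu$ assigns probability $\mu(p_i)$ to each of the candidate distributions $p_1, \ldots, p_n$ over $X$, then the induced distribution over $X$ is

$$\bar{p}(x) = \sum_{i=1}^{n} \mu(p_i)\, p_i(x),$$

i.e., the weighted average of the candidates. Whether preference orderings admit an analogous averaging operation is presumably where the finiteness caveat above comes in.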
So when I say “your preferences” or “your extrapolated preferences” I’m referring to your implicit preferences. In other words, “your preferences” refers to what your 1st-order preferences over the state of the world would look like if you took into account all of your n-th-order preferences, not the 1st-order preferences with which you are currently operating.
Edit: Which is just another way of saying “what wedrifid said.”
One interpretation of CEV is that it’s supposed to find these implicit preferences, assuming that everyone has the same, or “similar enough”, implicit preferences.
One interpretation of CEV is that it’s supposed to find these implicit preferences, assuming that everyone has the same, or “similar enough”, implicit preferences.
Where does the “everyone” come in? Your initial statement of EY’s metaethics is that it is about my preferences, however implicit or extrapolated. Are individuals’ extrapolated preferences supposed to converge or not? That’s a very important issue. If they do converge, then why the emphasis on the difference between should_Peter and should_Matt? If they don’t converge, how do you avoid Prudent Predation? The whole thing’s as clear as mud.
One part of EY’s theory is that all humans have similar enough implicit preferences that you can talk about implicit human preferences. CEV is supposed to find implicit human preferences.
Others have noted that there’s no reason why you can’t run CEV on other groups, or a single person, or perhaps only part of a single person. In which case, you can think of CEV(X) as a function that returns the implicit preferences of X, if they exist. This probably accounts for the ambiguity.
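A minimal type sketch of that “function” reading (purely illustrative; the names and the Optional-return convention are mine, not anything from the CEV writeup):

```python
from typing import FrozenSet, Optional

Agent = str          # placeholder: however you identify an agent
Preferences = dict   # placeholder: some encoding of a preference ordering

def cev(group: FrozenSet[Agent]) -> Optional[Preferences]:
    """Hypothetical signature only: return the implicit preferences of
    `group` if its members' extrapolations converge, else None.

    Nothing here specifies *how* extrapolation or convergence checking
    would work; the point is just that CEV(X) reads naturally as a
    partial function of the group X it is run on.
    """
    raise NotImplementedError  # the hard part, by construction
```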
there’s no reason why you can’t run CEV on other groups, or a single person, or perhaps only part of a single person
There’s no reason you can’t as an exercise in bean counting or logic chopping, but there is a question as to what that would add up to metaethically. If individual extrapolations converge, all is good. If not, then CEV is a form of ethical subjectivism, and if that is wrong, then CEV doesn’t work. Traditional philosophical concerns have not been entirely sidestepped.