Your example is an epistemic truth statement. Changing “I am good at mathematics” to “I am not good at mathematics” or vice versa does not change your utility function.
Just like saying “I am overweight” does not imply that you value being overweight, or that you don’t.
I understand your point that simply saying “I value X deeply” does not override all your previous utility assessments of X. However, I disagree on how to resolve that contradiction. You want to guard against it; you’d say “it’s wrong”. I’d embrace it as the more important utility function: that of your conscious mind.
You take the position that “what I consciously want to want does not matter; it only matters what I actually want, which may well be entirely different”.
My question is: what elevates those subconscious, harder-to-access stored terminal values above the ones you consciously want to value?
Should it not be the opposite, since you typically have more control over (and can exert more rationality on) your conscious mind than over your unconscious wants and needs?
Rephrase: when there is a clear conflict between what your conscious mind wants to want and what you subconsciously want, why should that contradiction not be resolved in favor of your consciously expressed needs, letting them guide your actions and making them your actual utility function?
Wanting to want X is again distinct from believing that you want X. Perhaps you believe that you want to want X, but you don’t actually want to want X; you want to want Y instead, while currently you want Z and believe that you want W. (This is not about conscious vs. subconscious; it is about not confusing epistemic estimates of values with the values themselves, whatever nature each of these has.)
(See also An Epistemological Nightmare; I’m not joking though.)
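To make the distinction concrete, here is a minimal sketch (the class and field names are hypothetical, purely for illustration) of an agent whose first-order wants, its beliefs about them, its second-order wants, and its beliefs about those all come apart:

```python
from dataclasses import dataclass

# Hypothetical sketch, not anyone's actual model: it only illustrates that the
# four attitudes discussed above can all point at different things.

@dataclass
class Agent:
    wants: str                      # what the agent currently wants
    believes_it_wants: str          # its epistemic estimate of what it wants
    wants_to_want: str              # its actual second-order want
    believes_it_wants_to_want: str  # its estimate of that second-order want

# The situation described above: believes it wants to want X, actually wants
# to want Y, currently wants Z, and believes it wants W.
agent = Agent(
    wants="Z",
    believes_it_wants="W",
    wants_to_want="Y",
    believes_it_wants_to_want="X",
)

# All four slots differ, so a sincere report like "I want to want X" can be an
# honest but mistaken epistemic estimate rather than the value itself.
assert len({agent.wants, agent.believes_it_wants,
            agent.wants_to_want, agent.believes_it_wants_to_want}) == 4
```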
Good link. I agree with guarding against wrong epistemic estimates of values (good wording).
Our disagreement comes down to this (I think): is “I want to want X”
a) an epistemic estimate of a value, or
b) a value in itself, pattern-matching “I want Y”, with Y being “to want X”?
Consider an LW reader saying “I want to be a more rational reasoning agent” when previously she did not want that (this does not fit the “want to want” pattern, but it also states a potentially new element of a utility function, one possibly at odds with previous versions of that utility function).
Could that reader be wrong about that? Or could there merely be a contradiction between the (consciously, how else?) stated value and other, conflicting values?
You’d say such a stated value can be wrong because it is merely an epistemic estimate of a value.
But why can you not introduce new values by wanting to want new values? Can you not (sorry) consciously try to modify your utility function at all? That would sound a bit fatalistic.
But why can you not introduce new values by wanting to want new values?
You can; it might be a bad idea (for some senses of “values”), and if you believe that you are doing that, it’s not necessarily true, even though it might be.
I’m not saying that it’s not possible to be correct; I’m saying that it’s possible to be mistaken. In many situations where people claim to be correct about their values, there appears to be no reason to strongly expect that to be so, and hence they shouldn’t have that much certainty.
Thanks for the answer.