I suspect most cases of “wanting to want” are better described as cases of internal conflict, where one part of us wishes that there weren’t other parts of us with different conflicting wants.
Particularly where one part is responsible for the “internal narrative” and the other is responsible for motivation and prioritization, because the latter usually wins out and the former complains loudest.
Furthermore, the internal narrative has been carefully honed so that it can be disingenuous for signaling purposes.
More to the point, the internal narrative part largely doesn’t need to be disingenuous for signaling purposes, because it’s kept in the dark about what the motivation and prioritization part is really up to.
Agreed, but the parts in conflict may be of vastly different reflectivity. Some relevant parts may not have anything analogous to awareness of some other parts.
Just so everyone is clear:
That is one way of describing cases where second-order desires conflict with first-order desires, perhaps. But one can want to want X and also want X… it’s just that Alicorn used only examples where the two conflict (and the distinction is probably best illustrated by looking at the conflicts). But right now I have both a first-order desire not to use heroin and a second-order desire not to want to use heroin. In fact, the vast majority of our desires are probably like this. So most cases of “wanting to want” are not cases of internal conflict; perhaps these cases can be described as instances of internal consistency.
In any case, I think Occam’s Razor demands that we reject the notion of generalized second-order desires. We can leave that concept out entirely and explain everything as conflicts between first-order desires and a generalized desire for consistency and/or resolution. Note that in all the examples, there are conflicting goals. In (1) it’s the desire to stay awake vs. the desire to avoid noxious stimuli. In (2) it’s the desire to stay alive vs. the desire to cop another high. In (3) it’s the desire to live up to his upbringing vs. the desire to follow his sexual urges.
I’m not even sure that a generalized desire for consistency and/or resolution would be a second-order desire. I think that the feeling of conflict over not being able to decide which speaker to buy is a lot like resolving conflicts between incompatible desires. The only difference is that choosing which speaker to buy is usually morally neutral, whereas there is societal pressure to choose one option as the “right” one in (2) and (3), and an imperative to preserve one’s life in (1) and (2), so we are steered towards a particular outcome. It may well be only a trick of language that prompts us to say “I want to want X.” I suspect that we could also say, “I want X and I want Y. I cannot have both, and I know I’m supposed to want X. I wish I weren’t conflicted.” But that is much longer than, “I want to want X.”
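To make that reduction concrete, here is a minimal sketch, assuming we model each first-order desire as a scoring function over outcomes; the desire names, scores, and the resolution weight are all invented for illustration:

```python
# Toy model: "wanting to want" reduced to first-order conflict plus a
# generalized desire for resolution. Everything here is hypothetical.
from dataclasses import dataclass

@dataclass
class Desire:
    name: str
    scores: dict[str, float]  # outcome -> how strongly this desire favors it

def in_conflict(desires: list[Desire], outcome: str) -> bool:
    """Desires conflict on an outcome if they pull in opposite directions."""
    vals = [d.scores.get(outcome, 0.0) for d in desires]
    return max(vals) > 0 and min(vals) < 0

def net_preference(desires: list[Desire], outcome: str,
                   resolution_weight: float = 0.5) -> float:
    """Sum of first-order scores, minus a penalty for unresolved conflict.

    The penalty plays the role of the generalized desire for consistency;
    no second-order desire appears anywhere in the model.
    """
    base = sum(d.scores.get(outcome, 0.0) for d in desires)
    penalty = resolution_weight if in_conflict(desires, outcome) else 0.0
    return base - penalty

# Example (2) from the post: staying alive vs. copping another high.
stay_alive = Desire("stay alive", {"use": -1.0, "abstain": 1.0})
get_high = Desire("get high", {"use": 1.0, "abstain": -0.6})

for outcome in ("use", "abstain"):
    print(outcome, net_preference([stay_alive, get_high], outcome))
```

On this toy model, the felt “second-order” discomfort is just the conflict penalty: no desire ever takes another desire as its object.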
“Better” in what way?
Do you mean better in that you think it’s a more accurate view of the inside of your head?
Or better in that it’s a more helpful metaphorical view of the situation that can be used to overcome the difficulties described?
I think the view of it as a conflict between different algorithms is useful, and it’s the one that I start with, but I wonder whether different views of this problem might be helpful in developing more methods for overcoming it.
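For what it’s worth, the algorithms framing can be made almost literal. Here is a deliberately crude sketch, assuming one module picks actions from immediate rewards while a separate module only narrates the result; the module names, options, and reward numbers are all made up:

```python
# Crude sketch of the "conflicting algorithms" framing. The two modules
# and all the numbers are invented for illustration only.

def motivator(options: list[str]) -> str:
    """Motivation/prioritization part: actually picks the action."""
    immediate_reward = {"work on the project": 0.2, "browse the web": 0.9}
    return max(options, key=lambda o: immediate_reward.get(o, 0.0))

def narrator(chosen: str, endorsed: str) -> str:
    """Internal-narrative part: explains (or complains about) the choice.

    It never sees the motivator's reward table, so it doesn't need to be
    disingenuous; it just reports a conflict it cannot account for.
    """
    if chosen == endorsed:
        return f"I chose to {chosen}, as planned."
    return f"I want to want to {endorsed}, yet here I am: {chosen}."

options = ["work on the project", "browse the web"]
chosen = motivator(options)                     # the part that usually wins
print(narrator(chosen, "work on the project"))  # the part that complains loudest
```

This matches the point upthread: the narrator doesn’t so much lie as get kept in the dark.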
The thing I’m getting from all this is: any time you have two desires that turn out to be contradictory in your environment, you could also have a desire to change one of them (the ‘lesser’ one?).
But we don’t always get this desire to want something different; I’m wondering if we always should, never should, or if there is some clear rule.
Seconded; more specifically, it seems to me that if one does not want something but one wants to want it, then one of the following must hold:
there’s one entity doing the not-wanting, and some other entity that wants the first entity to stop doing the not-wanting
one values wanting the thing for the sake of wanting it, or for the sake of some result of wanting it other than getting it, rather than for the sake of getting it; moreover, one values wanting the thing so much that this outweighs the unwanted extra likelihood of actually getting it
one does not want to want the thing unconditionally, but one does want the thing to turn into a different thing that one would want (e.g. Alicorn’s Mountain Dew example: there, what one wants is to enjoy and therefore want to drink Mountain Dew, which is not the same as wanting to want to drink it while still not enjoying it)
“wanting” here is some more informal human thing that isn’t captured by rational decision theory (which is bad!)
Are those all the possibilities?