Very interesting. This reminded me of Keith Stanovich’s idea of the master rationality motive, which he defines as a desire to integrate higher-order preferences with first-order preferences. He gives the example of wanting to smoke while not wanting to want to smoke, which it sounds like you would consider as two conflicting preferences: health vs. the short-term reward from smoking. His idea of how these conflicts are resolved is that we run a “decoupled” simulation in which we can simulate adapting our first-order desires (I guess ‘wanting to smoke’ should rather be thought of as a strategy to quench the uncomfortable craving than as a desire?) and find better solutions.
The master rationality motive seems to aim at something slightly different, though, judging from the questionnaire items Stanovich envisions to measure it, for example:
I am only confident of decisions that are made after careful analysis of all available information.
I don’t feel I have to have reasons for what I do. (R)
https://www.researchgate.net/publication/220041090_Higher-order_preferences_and_the_Master_Rationality_Motive

Regarding the asymmetry, my intuition is that the asymmetry of honorability comes from a different weighting of desires: you’d expect some things to be more important for our survival and reproduction, e.g. food, sex, not freezing, avoiding danger > honesty, caring for non-kin, right?
> I guess ‘wanting to smoke’ should rather be thought of as a strategy to quench the uncomfortable craving than as a desire?
I’m not sure exactly what you mean … I guess I would say “wanting to smoke” is being in a state where plans-that-will-not-lead-to-smoking get docked a ton of points by the brainstem, plus maybe there’s some mechanism that is incessantly forcing high-level attention to be focused on the (unpleasant) sensations related to not-smoking-right-now (see my comment here), and so on.
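That “docking points” picture can be put in toy-model form. Everything below — the plan names, the scores, the size of the penalty — is invented purely for illustration; it is a sketch of the claim that a craving state makes non-smoking plans lose the scoring competition, not a model of any real mechanism:

```python
# Toy sketch of the "docking points" idea: while craving, any plan that does
# not lead to smoking loses a large chunk of score, so smoking-related plans
# win the competition even when their underlying value is low.

def brainstem_score(plan, craving):
    """Score a candidate plan; heavily dock non-smoking plans while craving."""
    score = plan["base_value"]
    if craving and not plan["leads_to_smoking"]:
        score -= 100  # arbitrary large penalty, standing in for the brainstem's docking
    return score

def best_plan(plans, craving):
    """Return the name of the highest-scoring plan in the current state."""
    return max(plans, key=lambda p: brainstem_score(p, craving))["name"]

plans = [
    {"name": "go for a run",      "base_value": 8, "leads_to_smoking": False},
    {"name": "step out to smoke", "base_value": 3, "leads_to_smoking": True},
]

print(best_plan(plans, craving=True))   # step out to smoke
print(best_plan(plans, craving=False))  # go for a run
```

The point of the toy version is just that nothing about the smoking plan itself changes between the two calls; only the state-dependent penalty on its competitors does.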
> food, sex, not freezing, avoiding danger > honesty, caring for non-kin
I want to push back on that. Humans are an intensely social species, and an individual’s prospects for survival and reproduction are extremely dependent on their winning and maintaining allies, being popular and well-respected, etc. I think this is reflected in our behavior: e.g. among humanity’s favorite collective activities are talking about people, thinking about people, talking to people, etc. Social experiences are probably well-represented among the best and worst experiences of most people’s lives. I mean, there are lots of non-social species, they just don’t do those kinds of things. This book says that if an early human made a lot of enemies, the enemies would probably gang up on that person and kill them. This book is about how practically every thought we think gets distorted by our social instincts, etc. etc. I think I read somewhere that in small tribes, the most charismatic and popular and high-status people are likelier to have multiple wives (for men) and lots of children etc.
It would be weird for two desires to have a strict hierarchical relationship. When given a choice between food and water, sometimes we choose food, sometimes we choose water, depending on our current metabolic state, how much food or water is at stake, etc. It’s definitely not the case that “water always automatically trumps food, regardless of context”; that would be weird.
So by the same token, if your friend is holding a drink, you probably won’t stab them in the back and steal their drink, as you would under a “quenching thirst >> social instincts” model. But if you’re super-duper-desperately thirsty, then maybe you would stab them in the back and steal their drink. So to me this looks very similar to the thirst vs hunger situation: there’s a tradeoff between satisfying competing desires, a desire to be kind to your friends (an evolved desire which ultimately serves the purpose of having allies / being popular / etc.) and a desire to drink when you’re thirsty.
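One way to see why no fixed hierarchy is needed: give each desire a context-dependent strength and let the momentarily stronger one carry the decision. The numbers below are invented; this is just the structure of the argument, not a claim about actual magnitudes:

```python
# Toy model: no fixed "quenching thirst >> social instincts" ranking, just two
# desires whose momentary strengths are compared. Extreme deprivation lets
# thirst win; otherwise the friendship-related desire carries the decision.

def choose_action(thirst_level, friendship_strength=50):
    thirst_strength = 10 * thirst_level  # grows with deprivation
    utilities = {
        "steal the drink": thirst_strength,      # quenches thirst, costs the friend
        "stay loyal":      friendship_strength,  # keeps the friend, stays thirsty
    }
    return max(utilities, key=utilities.get)

print(choose_action(thirst_level=3))  # stay loyal
print(choose_action(thirst_level=9))  # steal the drink
```

Same comparison rule in both calls; only the state changes, which is exactly the thirst-vs-hunger situation.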
> It would be weird for two desires to have a strict hierarchical relationship.
I agree, I didn’t mean to imply a strict hierarchical relationship, and I think you don’t need a strict relationship to explain at least some part of the asymmetry. You would just need the less honorable desires to have, on average, more power over the default, e.g.
taking care of hunger,
thirst,
breath,
looking at aesthetically pleasing things,
removing discomforts

versus

taking care of long-term health,
keeping one’s surroundings clean,
expressing gratitude.
And then we can try to optimize the default by searching for good compromises, which more often involve the more honorable desires, like self-actualization or social relationships. (I expect all of this to vary across individuals and probably also across cultures.)
> there’s a tradeoff between satisfying competing desires
I agree it depends on the current state, e.g. of course if you’re satiated you won’t care much about food. But, similar to your example, could you make somebody stab their friend by starving them of their need for showing gratitude, or of the desire for having fun? I suspect not. But could you do it by starving them of their need for breathing oxygen, or by making them super-duper-desperately thirsty? I (also) suspect more often yes. That seems to imply some more general weighting?
> You would just need the less honorable desires to have, on average, more power over the default
I guess I sort of have a different way of thinking about it. From my perspective, if someone takes an action to satisfy a social-related desire at the expense of a food-related desire, then that means the social-related desire was the more powerful desire at that particular time.
So if Alice in point of fact chooses an action that advances friendship over an action that would satisfy her mild hunger right now, I would say the straightforward thing: “Well, I guess Alice’s desire to advance friendship was a more powerful desire for her, at this particular moment, than her desire to satisfy her mild hunger”. Or at least, in my mind, this is the straightforward and obvious way to think about it. I guess you would disagree, but I’m not quite sure what you would say instead.
What do weak desires look like? Here’s an example. I have a very weak desire to put my legs up when sitting down, other things equal. I wouldn’t even bother to walk across a room to get an ottoman; I don’t think about it at all. The only effect of this weak desire on my behavior is that, if I happen to be in a situation where putting my legs up is super easy and convenient and has essentially no costs whatsoever, I’ll go ahead and put my legs up. In my model, the mark of a weak desire is that it has very little influence on my thoughts and behaviors.
…And in particular, my model does not have a thing where weak desires heroically fight David-vs-Goliath battles to overcome stronger desires.
See also the thing I wrote about internalizing ego-syntonic desires here; maybe that will help.
> could you make somebody stab their friend by starving them of their need for showing gratitude, or of the desire for having fun? I suspect not. But could you do it by starving them of their need for breathing oxygen, or by making them super-duper-desperately thirsty? I (also) suspect more often yes. That seems to imply some more general weighting?
I guess I would say, any given desire has some range of how strong it can be in different situations, and if you tell me that the very strongest possible air-hunger-related desire is stronger than the very strongest possible social-instinct-related desire, I would say “OK sure, that’s plausible.” But it doesn’t seem particularly relevant to me. The relevant thing to me is how strong the desires are at the particular time that you’re making a decision or thinking a thought.
That said, I’m not sure that it is true that the very strongest possible air hunger desire is definitely stronger than the very strongest possible social-instinct-related desire. My impression is that some people will not betray their friends even while being tortured, even if the torture involves inducing extreme air-hunger, thirst, etc.
Also, social instincts can prompt people to take premeditated actions that they know will lead to instant death, or extreme pain, or spending the rest of their lives in prison, etc. It’s powerful stuff. :-P
> I guess I would say, any given desire has some range of how strong it can be in different situations, and if you tell me that the very strongest possible air-hunger-related desire is stronger than the very strongest possible social-instinct-related desire, I would say “OK sure, that’s plausible.” But it doesn’t seem particularly relevant to me. The relevant thing to me is how strong the desires are at the particular time that you’re making a decision or thinking a thought.
I think that almost captures what I was thinking, except that I expect the average intensity within these ranges to differ, e.g. for some individuals the desire for social interaction is usually very strong, while for others it is rather weak (which I expect you to agree with). And this should explain which desires more often supply the default plan, and for which additional “secondary” desires the neocortex has to work to find an overall better compromise.
For example, you come home and your body feels tired, and the desire that is strongest at this moment is the desire for rest, so the plan that suits this desire best is lying in bed and watching TV. But then another desire, for feeling productive, pushes for more plan suggestions, and the neocortex comes up with lying on the couch and reading a book. And then the desire for being social pushes a bit, and the revised plan is reading the book your mum got you as a present.
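That sequence can be sketched as a loop: the strongest desire supplies a default, and each further desire that weighs in can push the choice toward a plan that scores better on the whole set of desires considered so far. All the plan names and numbers below are invented to mirror the example above:

```python
# Toy sketch of plan revision: the strongest desire supplies the default plan,
# and each additional desire that weighs in re-runs the search over candidate
# plans, now scoring against every desire consulted so far.

plans = {
    "lie in bed and watch TV":         {"rest": 10, "productive": 0, "social": 0},
    "read a book on the couch":        {"rest": 9,  "productive": 6, "social": 0},
    "read the book your mum gave you": {"rest": 9,  "productive": 6, "social": 5},
}

def refine_plan(plans, desires_strongest_first):
    """Greedily revise the current plan as each desire, strongest first, weighs in."""
    current = None
    for i in range(1, len(desires_strongest_first) + 1):
        active = desires_strongest_first[:i]
        # Pick the plan that best satisfies all desires consulted so far.
        current = max(plans, key=lambda name: sum(plans[name][d] for d in active))
    return current

print(refine_plan(plans, ["rest"]))                          # lie in bed and watch TV
print(refine_plan(plans, ["rest", "productive", "social"]))  # read the book your mum gave you
```

With only rest in play, the TV plan wins; once the productive and social desires are consulted as well, the same search lands on the revised plan.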
Hmm, when I think “default plan”, I think something like “what’s the first thing I think to do, based on what’s most salient in my mind right now?”. So this can be related to the acetylcholine dynamic I mentioned here, where things like itches and annoying car alarms are salient in my mind even if I don’t want them to be. Hunger is definitely capable of forcibly pulling attention. But I do also think you can get a similar dynamic from social instincts. Like if someone shouts your name “Hey MaxRa!!”, your “default plan” is to immediately pay attention to that person. Or a more pleasant example is: if you’re snuggling under the blanket with your significant other, then the associated pleasant feelings are very salient in your mind, and the “default plan” is to remain under the blanket.
That acetylcholine dynamic is just one example; there can be other reasons for things to be more or less salient. Like, maybe I’m thinking: “I could go to the party…”, but then I immediately think: “…my ex might be at the party and oh geez I don’t want to see them and have to talk to them”. That’s an example where there are social instincts on both sides of the dilemma, but still, the downsides of going to the party (seeing my ex) pop right out immediately to the forefront of my mind when I think of the party, whereas the benefits of going to the party (I’ll be really glad I did etc.) are strong but less salient. So the latter can spawn very powerful desires if I’m actively thinking of them, but they’re comparatively easy to overlook.