The term “wrong” here confuses me more than anything. What’s the point of the question? My comment is about how the need for closure is suboptimal for both the individual and the society.
Let me rephrase the question in your terms, then. Why is the need for closure suboptimal? What are you optimizing for?
Consider hunger—the desire to eat. Is it “extremely selfish” and “suboptimal for both the individual and the society”?
Consider the need for solitude. Consider the desire to look pretty. Consider the yearning to be loved. Are they all “extremely selfish” and “suboptimal for both the individual and the society”?
Consider hunger—the desire to eat. Is it “extremely selfish” and “suboptimal for both the individual and the society”?
As a desire that causes us to fulfill a necessary condition for survival, as per physics, no. Survival is beneficial; while perhaps not always optimal, it is currently the best general rule I can think of.
The other examples, modulo some signalling and escalation subtleties regarding the “look pretty” case that would require a separate and lengthy discussion, are similar cases in that the desires lead individuals to take actions that are, ceteris paribus, overall beneficial given the current human condition.
Now change a variable: Food is no longer necessary for humans to live. All humans function perfectly well, as if they were eating optimally, without food (maybe they now take energy from waste heat or something, in an entropy-optimal kind of way). In this hypothetical, I would consider the desire to eat very selfish and suboptimal—it consumes resources of all kinds, including time that the individual could be spending on other things!
My assertion is that, on average, the desire for closure is more similar to the second, hypothetical case than to the first.
Corollaries / secondary assertions: The desire is purely emotional; individuals without it usually function better than their counterparts in situations where it is relevant (or at least would in the hypothetical case where there is no social expectation of it); and an individual who does not value conformity to the inner narrative that generates the need for closure is, ceteris paribus, happier and obtains higher expected utility than their counterparts.
Now change a variable: Food is no longer necessary for humans to live … In this hypothetical, I would consider the desire to eat very selfish and suboptimal—it consumes resources of all kinds, including time that the individual could be spending on other things!
You haven’t answered an important question: what are you optimizing for?
In your hypothetical, eating (for pure hedonics) does consume resources, including time, but you have neglected to show that this is not a good use of those resources. Yes, they could be spent on other things, but why are those other things more valuable than the hedonics of eating?
What is the yardstick that you apply to outcomes to determine whether they are suboptimal or not?
The desire is purely emotional; individuals without it usually function better than their counterparts in situations where it is relevant
8-0 That’s an unexpected approach. Are you pointing out the “purely emotional” part in a derogatory sense? Is having emotional desires, err… suboptimal?
What do you mean by individuals without such emotional desires functioning “better”? Are emotions a crippling disability?
I am comparing across utility systems, so my best yardsticks are intuition and a vague idea of the strength of hedons plus psychological-utilon estimates, as my best approximation of per-person utility.
I do realize this makes little formal sense, given that the problem of comparing different utility functions with different units is completely unresolved, but it’s not as if we can’t throw balls just because we don’t understand physics.
So what I’m really optimizing for is a weighted or normalized “evaluation” of any given human’s utility function, on the theoretical assumption that such an evaluation is possible across all relevant variants of humans. Naturally, the optimization target is the highest possible value.
It’s with that in mind that I consider the case of two MWI-like branches of the same person: one where this person spontaneously develops a need for closure, and one where they don’t. When I try to visualize, in as much detail as possible, the actions and stream of consciousness of both side by side, I can only imagine the person without a need for closure being “better off” in a selfish sense. And if these individuals’ utility functions care about what they do for, or cost to, society, this compounds into an even greater difference in favor of the branch without the need for closure.
This exercise can be extended (and I mentally did so yesterday) to the four-branch example of hunger and the need for food, covering all binary conjunctions. It seems to me that, clearly, the hungerless, food-need-less person ought to be better off and obtain higher values on their utility function, ceteris paribus.
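For concreteness, the four-branch exercise can be sketched as a toy model. All the utility numbers below are invented purely for illustration; the only structural assumptions are that (a) acting on hunger costs time and resources, and (b) when food is physiologically necessary, hunger is what reliably prompts eating.

```python
# Toy sketch of the four-branch hunger / food-need comparison.
# Numbers are made up; only the orderings they induce matter.
from itertools import product

BASE_UTILITY = 10.0  # includes the value of staying alive and functional


def branch_utility(has_hunger: bool, needs_food: bool) -> float:
    """Utility of one branch of the same person, ceteris paribus."""
    u = BASE_UTILITY
    if has_hunger:
        u -= 2.0  # time and resources spent obtaining and eating food
    if needs_food and not has_hunger:
        u -= 8.0  # food is required, but no drive prompts eating: risky
    return u


for hunger, needs_food in product([True, False], repeat=2):
    print(f"hunger={hunger!s:5}  needs_food={needs_food!s:5}"
          f"  -> utility {branch_utility(hunger, needs_food):.1f}")
```

Under these assumptions the model reproduces both claims at once: given that food is needed, hunger is beneficial (the first case), while the hungerless, food-need-less branch scores highest overall (the hypothetical second case).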
so my best yardsticks are intuition and a vague idea of … estimates
Um. Intuition is often used as a fancy word for “I ain’t got no arguments but I got an opinion”. Effectively you are talking about your n=1 personal likes and dislikes. This is fine, but I don’t know why you want to generalize on that basis.
It seems to me that clearly the hungerless, food-need-less person ought to be better off and obtain higher values on their utility function, ceteris paribus.
Let’s extend that line of imagination a bit further. It seems to me that this leads to the claim that the fewer needs and desires you have, the more “optimal” you will be, in the sense of obtaining “higher values on [the] utility function”. In the end, someone with no needs or desires at all will score the highest utility.
That doesn’t look reasonable to me.