Now change a variable: Food is no longer necessary for humans to live … In this hypothetical, I would consider the desire to eat very selfish and suboptimal—it consumes resources of all kinds, including time that the individual could be spending on other things!
You haven’t answered an important question: what are you optimizing for?
In your hypothetical, eating (for pure hedonics) does consume resources, including time, but you have not shown that this is a poor use of those resources. Yes, they could be spent on other things, but why are those other things more valuable than the hedonics of eating?
What is the yardstick that you apply to outcomes to determine whether they are suboptimal or not?
The desire is purely emotional; individuals without it usually function better than their counterparts in situations where it is relevant
8-0 That’s an unexpected approach. Are you pointing out the “purely emotional” part in a derogatory sense? Is having emotional desires, err… suboptimal?
What do you mean by individuals without such emotional desires functioning “better”? Are emotions a crippling disability?
I am comparing across utility systems, so my best yardsticks are intuition and a vague idea of hedon strength plus psychological-utilon estimates, as my best approximation of per-person utility.
I do realize this makes little formal sense, given that the problem of comparing different utility functions with different units is completely unresolved, but it’s not as though we can’t throw balls just because we don’t understand physics.
So what I’m really optimizing for is a weighted or normalized “evaluation” of any given human’s utility function, on the theoretical assumption that such an evaluation is possible across all relevant variants of humans. Naturally, the optimization target is the highest possible value.
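Very loosely, and in notation that is entirely my own invention rather than anything standard, the target looks something like:

$$h^{*} = \arg\max_{h \in H} \; w_h \, N\!\big(U_h\big),$$

where $H$ is the set of relevant variants of a person, $U_h$ is variant $h$’s utility function, $N$ is some unspecified normalization that makes utilities comparable across variants, and $w_h$ is an optional weight. Every symbol here is a placeholder; the hard part, choosing $N$, is exactly the unresolved problem I mentioned above.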
It’s with that in mind that, if I consider the case of two MWI-like branches of the same person (one where this person spontaneously develops a need for closure, and one where it doesn’t happen) and try to visualize in as much detail as possible the actions and stream of consciousness of both, side by side, I can only imagine the person without the need for closure being “better off” in a selfish sense. And if these individuals’ utility functions also care about what they do for, or cost to, society, this compounds into an even greater difference in favor of the branch without the need for closure.
This exercise can be extended (and yesterday I mentally did extend it) to the four-branch example of hunger and need for food, covering all four combinations of the two binary variables; a toy sketch follows below. It seems clear to me that the hungerless, food-need-less person ought to be better off and obtain higher values on their utility function, ceteris paribus.
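To make the shape of that comparison concrete, here is a toy sketch in Python. Every number is a made-up placeholder, not a real estimate; the point is only to show the structure of the four-branch, ceteris-paribus comparison, and the conclusion is of course baked into the signs of the placeholder parameters.

```python
from itertools import product

# Made-up components of one person's utility over some period.
# All values are illustrative placeholders, not real estimates.
BASELINE = 10.0        # utility from everything unrelated to food
EATING_HEDONS = 2.0    # pleasure from eating, if the desire exists
HUNGER_PENALTY = 1.5   # average suffering from unmet hunger
TIME_COST = 1.0        # time/resources spent obtaining and eating food

def toy_utility(has_hunger: bool, needs_food: bool) -> float:
    """Crude per-branch utility, holding everything else equal."""
    u = BASELINE
    if has_hunger or needs_food:
        u -= TIME_COST        # either desire or need forces eating
    if has_hunger:
        u += EATING_HEDONS    # the desire pays out hedons...
        u -= HUNGER_PENALTY   # ...but also costs suffering when unmet
    return u

# Enumerate the four branches (the binary combinations above).
for has_hunger, needs_food in product([True, False], repeat=2):
    print(f"hunger={has_hunger!s:<5}  food_need={needs_food!s:<5}  "
          f"utility={toy_utility(has_hunger, needs_food):.1f}")
```

With these placeholders the hungerless, food-need-less branch scores highest, but change the sign of EATING_HEDONS minus HUNGER_PENALTY minus TIME_COST and the ranking flips, which is precisely the yardstick question being pressed above.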
so my best yardsticks are intuition and a vague idea of … estimates
Um. Intuition is often used as a fancy word for “I ain’t got no arguments but I got an opinion”. Effectively you are talking about your n=1 personal likes and dislikes. This is fine, but I don’t know why you want to generalize on that basis.
It seems clear to me that the hungerless, food-need-less person ought to be better off and obtain higher values on their utility function, ceteris paribus.
Let’s extend that line of imagination a bit further. It seems to me that this leads to the claim that the fewer needs and desires you have, the more “optimal” you will be, in the sense of obtaining “higher values on [the] utility function”. In the end, someone with no needs or desires at all will score the highest utility.
That doesn’t look reasonable to me.