I am comparing across utility systems, so my best yardsticks are intuition and a rough sense of hedon strength plus psychological utilon estimates, as my closest approximation of per-person utility.
I do realize this makes little formal sense, considering that the problem of comparing different utility functions with different units is completely unresolved, but it’s not like we can’t throw balls if we don’t understand physics.
So what I’m really optimizing for is a weighted or normalized “evaluation” of any given human’s utility function, on the theoretical assumption that such an evaluation is possible across all relevant variants of humans. Naturally, the optimization target is the highest possible value.
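To make the “normalized evaluation” idea concrete, here is a minimal sketch of one naive approach: min-max normalizing each person’s raw utility onto a common [0, 1] scale. The ranges and scores are hypothetical, and this does not resolve the interpersonal-comparison problem mentioned above; it only illustrates what a normalized cross-person comparison could look like.

```python
def normalize(u: float, u_min: float, u_max: float) -> float:
    """Map a raw utility score onto [0, 1] given that person's assumed range."""
    return (u - u_min) / (u_max - u_min)

# Two people with different raw utility scales (hypothetical numbers)
# end up on a common scale, so their "evaluations" can be compared.
alice = normalize(30.0, 0.0, 100.0)  # Alice's scale runs 0..100
bob = normalize(6.0, 0.0, 10.0)      # Bob's scale runs 0..10
print(alice, bob)                    # Bob scores higher after normalization
```

Whether min-max (as opposed to, say, variance normalization) is the right choice is exactly the unresolved part of the comparison problem.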
It’s with that in mind that I consider the case of two MWI-like branches of the same person: in one branch the person spontaneously develops a need for closure, and in the other they don’t. When I try to visualize, in as much detail as possible, the actions and stream of consciousness of both side by side, I can only imagine the person without a need for closure being “better off” in a selfish sense. And if these individuals’ utility functions also care about what they contribute to or cost society, this compounds into an even greater difference in favor of the branch without the need for closure.
This exercise can be extended (and yesterday I mentally extended it) to the four-branch example of hunger and need for food, covering all binary conjunctions. It seems clear to me that the hungerless, food-need-less person ought to be better off and obtain higher values on their utility function, ceteris paribus.
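The four-branch exercise can be sketched as a toy model. Every number below is invented purely for illustration (a baseline utility and two assumed disutility costs); the code only shows the ceteris-paribus enumeration of the four conjunctions, not an argument that the weights are right.

```python
from itertools import product

BASELINE = 10.0       # assumed baseline utility of a branch
HUNGER_COST = 3.0     # assumed disutility of feeling hunger
FOOD_NEED_COST = 2.0  # assumed disutility of needing food at all

def branch_utility(hungry: bool, needs_food: bool) -> float:
    """Utility of one branch under the toy model, all else held equal."""
    u = BASELINE
    if hungry:
        u -= HUNGER_COST
    if needs_food:
        u -= FOOD_NEED_COST
    return u

# Enumerate all four binary conjunctions of the two traits.
branches = {
    (hungry, needs_food): branch_utility(hungry, needs_food)
    for hungry, needs_food in product([False, True], repeat=2)
}
best = max(branches, key=branches.get)
print(best)  # (False, False): the hungerless, food-need-less branch
```

Under these assumed costs the (no hunger, no food need) branch scores highest, which is just the claim above restated, not independent support for it.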
so my best yardsticks are intuition and a vague idea of … estimates
Um. Intuition is often used as a fancy word for “I ain’t got no arguments but I got an opinion”. Effectively you are talking about your n=1 personal likes and dislikes. This is fine, but I don’t know why you want to generalize on that basis.
It seems to me that clearly the hungerless, food-need-less person ought to be better off and obtain higher values on their utility function, ceteris paribus.
Let’s extend that line of imagination a bit further. It seems to me that this leads to the claim that the fewer needs and desires you have, the more “optimal” you will be, in the sense of obtaining “higher values on [the] utility function”. In the end, someone with no needs or desires at all will score the highest utility.
That doesn’t look reasonable to me.