To live a life with even the faintest hint of displeasure is a horrific crime, the thought goes. I am under the impression that most people here operate with some sort of utilitarian philosophy. To me, this seems to imply that unless one declares there is no objective state toward which utilitarianism is directed, humanity in this example is wrong.
The general thrust of the Superhappy segments of Three Worlds Collide seems to be that simple utilitarian schemas based on subjective happiness or pleasure are insufficient to describe human value systems or preferences as they’re expressed in the wild. Similar points are made in the Fun Theory sequence. Neither of these means that utilitarianism generally is wrong; merely that the utility function we’re summing (or averaging over, or taking the minimum of, etc.) isn’t as simple as sometimes assumed.
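For concreteness, here’s a rough sketch of what those aggregation options look like, with $u$ standing in for whatever individual welfare function we actually endorse (and the hard part being $u$ itself, which is probably not reducible to pleasure minus pain):

$$U_{\text{total}}(w) = \sum_{i=1}^{n} u(i, w), \qquad U_{\text{avg}}(w) = \frac{1}{n}\sum_{i=1}^{n} u(i, w), \qquad U_{\text{min}}(w) = \min_{i}\, u(i, w)$$

The Superhappy critique bites regardless of which aggregation we pick; the trouble lives in $u$, not in the summation.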
Now, Fun Theory is probably one of the less well-developed sequences here (unfortunately, in my view; it’s a very deep question, and intimately related to human value structure and all its AI consequences), and you’re certainly free to prefer 3WC’s Assimilation Ending or to believe that the kind of soft wireheading the Superhappies embody really is optimal under some more or less objective criterion. That does seem to be implied in one form or another by several major schools of ethics, and any intuition pump I could deploy to convince you otherwise would probably end up looking a lot like the Assimilation Ending, which I gather you don’t find convincing.
Personally, though, I’m inclined to be sympathetic to the True Ending, and think more generally that pain and suffering tend to be wrongly conflated with moral evil, when in fact there’s a considerably looser and more subtle relationship between the two. But I’m nowhere near having a fully developed ethics, and while this seems to have something to do with the “complexity” you mentioned, I feel like stopping there would be an unjustified handwave.