Arguably, many consequentialists already fall into this category. If you are unsettled by the image of a universe composed entirely of undifferentiated orgasmium, then it’s a fair bet that happiness is not your (only) terminal value. To return to a common sentiment, “I don’t want to maximize my happiness, I want to maximize my awesomeness.”
That said, happiness usually holds at least some value for students of ethics. Even a system that assigned it zero terminal value could conceivably end up fairly happy for instrumental reasons, since happiness makes humans more efficient at pursuing most of the things we can expect to be valued. Once you start creating non-human entities from the ground up, though, you would expect happiness to become rare, although not necessarily to be replaced by misery. (The paperclipper is one such entity.)