I’d be curious to know if there is a principled model for optimal human happiness which does not conflict so violently with our moral instincts.
It seems we need to take "creating" and "destroying" humans out of the equation. In a fixed population, total and average happiness both work fine (and rank outcomes identically, since the total is just the average times a constant population size). We can tweak the conditions, perhaps by counting the dead and the unborn as having some fixed level of happiness, but that still leads to prescriptions that violate our instincts: there will always be cases where creating a new life while making everyone else less happy (to raise the total), or killing off someone below the mean (to raise the average), is exactly what the model says we should do.
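To make that tension concrete, here is a toy calculation (the happiness numbers are invented purely for illustration). Start with three people at happiness levels 10, 10, and 2: total = 22, average ≈ 7.3. Add a fourth person at happiness 1 and the total rises to 23 (the total view approves) while the average drops to 5.75 (the average view objects). Instead remove the person at 2 and the total falls to 20 (the total view objects) while the average jumps to 10 (the average view approves). Either way, one of the two models endorses an act our instincts reject.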
I think we need to handle "creating" and "destroying" people with principles other than happiness.