But I thought the human moral judgment that the baby-eaters should not eat babies was based on the disutility it inflicts on the babies, not on a broad, categorical opposition to sentient beings being eaten?
That is, if a baby wanted to get eaten (or perhaps a suitably intelligent being, like an adult, did), you would need some other compelling reason to oppose its being eaten, correct? So shouldn’t the baby-eaters’ universal desire to have a custom of baby-eating put any baby-eater that wants to be exempt from baby-eating in the same position as the being in (1) -- which is to say, a being that wants the system to exist but prefers to “free ride” off the sacrifices that the system requires of everyone else?
Isn’t your point of view precisely the one the SuperHappies are coming from? Your critique of humanity seems to be the one they level when asking why, once humans achieved the necessary level of biotechnology, they did not edit their own minds. The SuperHappy solution was not to inflict disutility by punishing defection, but to change preferences so that the cooperative attitude yields the highest utility payoff.
No, I’m criticizing humans for wanting to help enforce a relevantly hypocritical preference on the grounds of its superficial similarity to acts they normally oppose. Good question, though.