Isn’t your point of view precisely the one the SuperHappies are coming from? Your critique of humanity seems to be the one they level when asking why, once humans achieved the necessary level of biotechnology, they did not edit their own minds. The SuperHappy solution was, rather than to inflict disutility by punishing defection, to change preferences so that the cooperative attitude yields the highest utility payoff.
No, I’m criticizing humans for wanting to help enforce a relevantly-hypocritical preference on the grounds of its superficial similarities to acts they normally oppose. Good question though.