I think one of the main points Eliezer is trying to make is that we would disagree with future humans almost as much as we would disagree with the Babyeaters or Superhappies.
I never had this impression; if anything, I thought that all the things Eliezer mentioned in any detail (changes in gender and sexuality, the arcane libertarian framework that replaces the state, and generally all the differences that seem important by the measure of our own history) are still intended to underscore how humanity still operates on a scale recognizable to its past. The aliens are simply unrelated, and that’s why dialogue fails.
When faced with such a catastrophic choice, we humans argue about whether to use consequentialist or non-consequentialist ethics, whether a utilitarian model should value billions of lives more than the permanent extinction of some of our deepest emotions, and so on. To the Superhappies all of this is simply an incomprehensible hellish nightmare; if a human asked them to go ahead with the transformation but leave an untransformed fork of the species as a control group (as in Joe Haldeman’s Forever War), it would sound to them like a Holocaust survivor asking us to set up a new camp with intensified torture so that it can “fix” something that’s been wrong with the human condition.
(This does not imply some deep xenophobia on my part; indeed, after thinking it through, I believe I would’ve risked waiting the full eight hours for the evacuation, not because I like the alien mode of being so much, but because I find myself unable to form any judgment about it strong enough to outweigh the cost in lives. My utility function here simply runs into a boundary, with 16 billion lives being roughly infinity up to some factor X, and I don’t quite understand what value to assign to X.)