I’m considering the case of FAI, that is, humanity’s preference correctly rendered.
Do you see a world with “no recognizable humans” as a very likely thing for the human race (or its extrapolated volition) to collectively want?
The status quo has no power. So the question shouldn’t be whether “no recognizable humans” is the particular thing humanity wants, but rather whether “preserving recognizable humans” happens to be the particular thing humanity wants. And I’m not sure there are strong enough reasons to expect “a world with recognizable humans” to be the optimal thing to do with the available matter. It might be, but I’m not convinced we know enough to locate this particular hypothesis. The default assumption that humans want humans seems to stem from a cached moral intuition promoted by availability in the current situation. But reconstructing the optimal situation from preference is a very indirect process, one that won’t respect the historical accidents of humanity’s natural development, only humanity’s values.