But I’m less confident that a correctly constructed (i.e., Friendly) CEV calculation would replace humans with something radically nonhuman than I am that it would kill all or most non-humans.
If a CEV did this, then I believe it would be acting unethically. At the very least, I find it highly implausible that, among the hundreds of thousands(?) of sentient species, Homo sapiens is the one capable of producing the most happiness per unit of resources. This is a big reason why I feel uneasy about the idea of creating a CEV from human values alone. If we do create a CEV, it should take all existing interests into account, not just the interests of humans.
It also seems highly implausible that any extant species is optimized for producing pleasure. After all, evolution produces organisms that are good at propagating their genes, not at feeling happy. A superintelligent AI could probably create far more efficient happiness-experiencers than any currently living beings. This seems similar to what you’re getting at in your last paragraph.