Isn’t there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence, while claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever? Insofar as we want a benign outcome for humans, I’d have thought that the computational equivalent of a Godlike capacity for perspective-taking is precisely what we should be aiming for.
No. Someone who cares about human-level beings but not animals will care about the plight of humans in the face of an AI, but there’s no reason they must care about the plight of animals in the face of humans, because they didn’t care about animals to begin with.
It may be that the best construction for a friendly AI is some kind of complex perspective-taking that lends itself to caring about animals, but this is a fact about the world; it falls on the is side of the is-ought divide.