SaidAchmiz, to treat exploiting and killing nonhuman animals as ethically no different from “exploiting and killing ore-bearing rocks” does not suggest a cognitively ambitious level of empathetic understanding of other subjects of experience. Isn’t there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever? Insofar as we want a benign outcome for humans, I’d have thought that the computational equivalent of Godlike capacity for perspective-taking is precisely what we should be aiming for.
Isn’t there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever? Insofar as we want a benign outcome for humans, I’d have thought that the computational equivalent of Godlike capacity for perspective-taking is precisely what we should be aiming for.
No. Someone who cares about human-level beings but not animals will care about the plight of humans in the face of an AI, but there’s no reason they must care about the plight of animals in the face of humans, because they didn’t care about animals to begin with.
It may be that the best construction for a friendly AI is some kind of complex perspective-taking that lends itself to caring about animals, but this is a fact about the world; it falls on the is side of the is-ought divide.
a cognitively ambitious level of empathetic understanding of other subjects of experience
What the heck does this mean? (And why should I be interested in having it?)
Isn’t there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever?
Wikipedia says:
In modern western philosophy, sentience is the ability to experience sensations (known in philosophy of mind as “qualia”).
If that’s how you’re using “sentience”, then:
1) It’s not clear to me that (most) nonhuman animals have this quality;
2) This quality doesn’t seem central to moral worth.
So I see no irony.
If you use “sentience” to mean something else, then by all means clarify.
There are some other problems with your formulation, such as:
1) I don’t “belong to” MIRI (which is the organization you refer to, yes?). I have donated to them, which I suppose counts?
2) Your description of their mission, specifically the implied comparison of an FAI with humans, is inaccurate.
the computational equivalent of Godlike capacity for perspective-taking
You use a lot of terms (“cognitively ambitious”, “cognitively humble”, “empathetic understanding”, “Godlike capacity for perspective-taking” (and “the computational equivalent” thereof)) that I’m not sure how to respond to, because it seems like either these phrases are exceedingly odd ways of referring to familiar concepts, or else they are incoherent and have no referents. I’m not sure which interpretation is dictated by the principle of charity here; I don’t want to just assume that I know what you’re talking about. So, if you please, do clarify what you mean by… any of what you just said.