Humanity’s only justification of exploiting and killing nonhuman animals is that we enjoy the taste of their flesh.
It seems to me like a far more relevant justification for exploiting and killing nonhuman animals is “and why shouldn’t we do this...?”, which is the same justification we use for exploiting and killing ore-bearing rocks. Which is to say, there’s no moral problem with doing this, so it needs no “justification”.
Pigs, for example, are at least as intelligent as prelinguistic toddlers; but are they less sentient? The same genes, neural processes, anatomical pathways and behavioural responses to noxious stimuli are found in pigs and toddlers alike. So I think the burden of proof here lies on meat-eating critics who deny any equivalence.
I make it clear in this post that I don’t deny the equivalence, and don’t think that very young children have the moral worth of cognitively developed humans. (The optimal legality of Doing Bad Things to them is a slightly more complicated matter.)
we don’t regard, say, humans with infantile Tay-Sachs who lack the potential to become cognitively mature adults as any less worthy of love, care and respect than healthy toddlers.
Well, I certainly do.
Apt comparison or otherwise, creating nonhuman-animal-friendly intelligence is going to be an immense challenge.
Eh...? Expand on this, please; I’m quite unsure what you mean here.
SaidAchmiz, to treat exploiting and killing nonhuman animals as ethically no different from “exploiting and killing ore-bearing rocks” does not suggest a cognitively ambitious level of empathetic understanding of other subjects of experience. Isn’t there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever? Insofar as we want a benign outcome for humans, I’d have thought that the computational equivalent of Godlike capacity for perspective-taking is precisely what we should be aiming for.
Isn’t there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever? Insofar as we want a benign outcome for humans, I’d have thought that the computational equivalent of Godlike capacity for perspective-taking is precisely what we should be aiming for.
No. Someone who cares about human-level beings but not animals will care about the plight of humans in the face of an AI, but there’s no reason they must care about the plight of animals in the face of humans, because they didn’t care about animals to begin with.
It may be that the best construction for a friendly AI is some kind of complex perspective taking that lends itself to caring about animals, but this is a fact about the world; it falls on the is side of the is-ought divide.
a cognitively ambitious level of empathetic understanding of other subjects of experience
What the heck does this mean? (And why should I be interested in having it?)
Isn’t there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever?
Wikipedia says:
In modern western philosophy, sentience is the ability to experience sensations (known in philosophy of mind as “qualia”).
If that’s how you’re using “sentience”, then:
1) It’s not clear to me that (most) nonhuman animals have this quality;
2) This quality doesn’t seem central to moral worth.
So I see no irony.
If you use “sentience” to mean something else, then by all means clarify.
There are some other problems with your formulation, such as:
1) I don’t “belong to” MIRI (which is the organization you refer to, yes?). I have donated to them, which I suppose counts?
2) Your description of their mission, specifically the implied comparison of an FAI with humans, is inaccurate.
the computational equivalent of Godlike capacity for perspective-taking
You use a lot of terms (“cognitively ambitious”, “cognitively humble”, “empathetic understanding”, “Godlike capacity for perspective-taking” (and “the computational equivalent” thereof)) that I’m not sure how to respond to, because it seems like either these phrases are exceedingly odd ways of referring to familiar concepts, or else they are incoherent and have no referents. I’m not sure which interpretation is dictated by the principle of charity here; I don’t want to just assume that I know what you’re talking about. So, if you please, do clarify what you mean by… any of what you just said.
Huh, no, you don’t normally go out of your way to do stuff unless there’s something in it for you or someone else.
Well, first of all, this is just false. People do things for the barest, most trivial of reasons all the time. You’re walking along the street and you kick a bottle that happens to turn up in your path. What’s in it for you? In the most trivial sense you could say that “I felt like it” is what’s in it for you, but then the concept rather loses its meaning.
In any case, that’s a tangent, because you mistook my meaning: I wasn’t talking about the motivation for doing something. I (and davidpearce, as I read him) was talking about the moral justification for eating meat. His comment, under my interpretation, was something like: “Exploiting and killing nonhuman animals carries great negative moral value. What moral justification do we have for doing this? (i.e. what positive moral value counterbalances it?) None but that we enjoy the taste of their flesh.” (Implied corollary: and that is inadequate moral justification!)
To which my response was, essentially, that morally neutral acts do not require such justification. (And by implication, I was contradicting davidpearce by claiming that killing and eating animals is a morally neutral act.) If I smash a rock, I don’t need to justify that (unless the rock was someone’s property, I suppose, which is not the issue we’re discussing). I might have any number of motivations for performing a morally neutral act, but they’re none of anyone’s business, and certainly not an issue for moral philosophers.
(Did you really not get all of this intended meaning from my comment...? If that’s how you interpreted what I said, shouldn’t you be objecting that smashing ore-bearing rocks is not, in fact, unmotivated, as I would seem to be implying, under your interpretation?)