The problem with throwing out #3 is you also have to throw out:
(4) How we value a being’s moral worth is a function of their abilities (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)
Which is a rather nice proposition.
Edit: As Said points out, this should be:
(4) How we value a being’s pain is a function of their ability to feel pain (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)
You don’t, actually. For example, the following is a function:
Let a be a variable representing the abilities of a being. Let E(a) be the ethical value of a being with abilities a. The domain is the nonnegative reals, the range is the reals. Let H be some level of abilities that we have chosen to identify as “human-level abilities”. We define E(a) thus:
For a < H: E(a) = 0.
For a ≥ H: E(a) = f(a), where f is some other function of our choice.
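For concreteness, here is that definition as a minimal Python sketch (the names `ethical_value`, `a`, `H`, and `f` are illustrative placeholders for the symbols defined above, and the linear choice of f is just an arbitrary example, not anything from the original comment):

```python
def ethical_value(a, H, f):
    """E(a): the ethical value of a being with ability level a (a >= 0).

    Ability levels below the threshold H ("human-level abilities")
    get value 0; levels at or above H are valued by the chosen f.
    """
    if a < H:
        return 0.0
    return f(a)

# Arbitrary example choice of f: value scales linearly with abilities.
f = lambda x: x
print(ethical_value(0.5, 1.0, f))  # 0.0 (below the threshold H)
print(ethical_value(2.0, 1.0, f))  # 2.0 (at or above H)
```

The point is just that E is a perfectly well-defined function of a, discontinuity at H and all.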
Fair enough. I’ve updated my statement:
Otherwise we could let H be “maleness” and justify sexism, etc.
Uh, would you mind editing your statement back, or adding a note about what it said before? Otherwise I am left looking like a crazy person, spouting non sequiturs ;) Edit: Thanks!
Anyway, your updated statement is no longer vulnerable to my objection, but neither is it particularly “nice” anymore (that is, I don’t endorse it, and I don’t think most people here who take the “speciesist” position do either).
(By the way, letting H be “maleness” doesn’t make a whole lot of sense. It would be very awkward, to say the least, to represent “maleness” as some nonnegative real number; and it assumes that the abilities captured by a are somehow parallel to the gender spectrum; and it would make it so that we value male chickens but not human women; and calling “maleness” a “level of abilities” is pretty weird.)
Haha, sure, updated.
But why don’t you think it’s “nice” to require abilities to be relevant? If you feel pain more strongly than others do, then I care more about when you’re in pain than when others are in pain.
I probably[1] do as well…
… provided that you meet my criteria for caring about your pain in the first place — which criteria do not, themselves, have anything directly to do with pain. (See this post).
[1] Well, at first glance. Actually, I’m not so sure; I don’t seem to have any clear intuitions about this in the human case — but I definitely do in the sub-human case, and that’s what matters.
Well, if you follow that post far enough you’ll see that the author thinks animals feel something that’s morally equivalent to pain; s/he just doesn’t like calling it “pain”.
But assuming you genuinely don’t think animals feel something morally equivalent to pain, why? That post gives some high-level ideas, but doesn’t list any supporting evidence.
I had a longer response typed out, about what properties make me assign moral worth to entities, before I realized that you were asking me to clarify a position that I never took.
I didn’t say anything about animals not feeling pain (what does “morally equivalent to pain” mean?). I said I don’t care about animal pain.
… the more I write this response, the more I want to ask you to just reread my comment. I suspect this means that I am misunderstanding you, or in any case that we’re talking past each other.
I apologize for the confusion. Let me attempt to summarize your position:
(1) It is possible for subjectively bad things to happen to animals.
(2) Despite this fact, it is not possible for objectively bad things to happen to animals.
Is that correct? If so, could you explain what “subjective” and “objective” mean here? Usually “objective” just means something like “the sum of the subjective”, in which case (1) trivially contradicts (2), which was the source of my confusion.
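For instance, one way to formalize that aggregative reading (my gloss, not something either commenter committed to) would be:

```latex
% Hypothetical formalization of "objective = sum of subjective":
% the objective badness of an event x sums its subjective badness
% over every subject s in the set S of all subjects.
\[
  \mathrm{Bad}_{\mathrm{obj}}(x) = \sum_{s \in S} \mathrm{Bad}_{\mathrm{subj}}(x, s)
\]
```

On that reading, anything subjectively bad for some subject is thereby objectively bad to some degree, so (1) and (2) cannot both hold.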
I don’t know what “subjective” and “objective” mean here, because I am not the one using that wording.
What do you mean by “subjectively bad things”?