I probably[1] do as well…
… provided that you meet my criteria for caring about your pain in the first place — which criteria do not, themselves, have anything directly to do with pain. (See this post).
[1] Well, at first glance. Actually, I’m not so sure; I don’t seem to have any clear intuitions about this in the human case — but I definitely do in the sub-human case, and that’s what matters.
Well, if you follow that post far enough, you’ll see that the author thinks animals feel something that’s morally equivalent to pain; s/he just doesn’t like calling it “pain”.
But assuming you genuinely don’t think animals feel something morally equivalent to pain, why? That post gives some high-level ideas, but doesn’t list any supporting evidence.
I had a longer response typed out, about what properties make me assign moral worth to entities, before I realized that you were asking me to clarify a position that I never took.
I didn’t say anything about animals not feeling pain (what does “morally equivalent to pain” mean?). I said I don’t care about animal pain.
… the more I write this response, the more I want to ask you to just reread my comment. I suspect this means that I am misunderstanding you, or in any case that we’re talking past each other.
I apologize for the confusion. Let me attempt to summarize your position:
1. It is possible for subjectively bad things to happen to animals.
2. Despite this fact, it is not possible for objectively bad things to happen to animals.
Is that correct? If so, could you explain what “subjective” and “objective” mean here? Usually, “objective” just means something like “the sum of the subjective”, in which case #1 would seem to flatly contradict #2, which was the source of my confusion.
I don’t know what “subjective” and “objective” mean here, because I am not the one using that wording.
What do you mean by “subjectively bad things”?