The goal of defining ‘human’ (and/or ‘sapient’) here is to steel-man (or at least better understand) the claim that only human suffering matters, so we can evaluate it. If “language use and sensation” end up only being necessary or sufficient for concepts of ‘human’ that aren’t plausible candidates for the original ‘non-humans aren’t moral patients’ claim, then they aren’t relevant. The goal here isn’t to come up with the one true definition of ‘human’, just to find one that helps with the immediate task of cashing out anthropocentric ethical systems.
I am pretty sure that if the universe consisted only of sentient but no sapient beings, I would be at a loss as to how we should discuss moral significance.
Well, you’d be at a loss because you either wouldn’t exist or wouldn’t be able to linguistically express anything. But we can still adopt an outsider’s perspective and claim that universes with sentience but no sapience are better when they have a higher ratio of joy to suffering, or of preference satisfaction to preference frustration.
The goal here isn’t to come up with the one true definition of ‘human’, just to find one that helps with the immediate task of cashing out anthropocentric ethical systems.
Right, exactly. Doing so, and defending an anthropocentric ethical system, does not entail that it’s perfectly okay to subject sentient non-language users to infinite torture. It does probably entail that our reasons for protecting sapient non-language users (if we discover it ethically necessary to do so as anthropocentrists) will come down to anthropocentric reasons. This argument didn’t begin as an attempt to steel-man the claim that only human suffering matters; it began as an attempt to steel-man the claim that the reason human suffering matters to us (when we have no other reason to care) is that it is specifically human suffering.
Another way to put this is that I’m defending, or trying to steel-man, the claim that the fact that a human’s suffering is human gives us a reason all on its own to think that that suffering is ethically significant, while nothing about an animal’s suffering being animal suffering gives us such a reason all on its own. We could still have other reasons to think it significant, so the ‘infinite torture’ objection doesn’t necessarily land.
Well, you’d be at a loss because you either wouldn’t exist or wouldn’t be able to linguistically express anything.
Right, exactly. Doing so, and defending an anthropocentric ethical system, does not entail that it’s perfectly okay to subject sentient non-language users to infinite torture.
You seem to be using ‘anthropocentric’ to mean ‘humans are the ultimate arbiters or sources of morality’. I’m using ‘anthropocentric’ instead to mean ‘only human experiences matter’. Then by definition it doesn’t matter whether non-humans are tortured, except insofar as this also diminishes humans’ welfare. This is the definition that seems relevant to Qiaochu’s statement, “I am still not convinced that I should care about animal suffering.” The question isn’t why we should care; it’s whether we should care at all.
It does probably entail that our reasons for protecting sapient non-language users (if we discover it ethically necessary to do so as anthropocentrists) will come down to anthropocentric reasons.
I don’t think which reasons happen to psychologically motivate us matters here. People can have bad reasons to do good things. More interesting is the question of whether our good reasons would all be human-related, but that too is independent of Qiaochu’s question.
This argument didn’t begin as an attempt to steel-man the claim that only human suffering matters; it began as an attempt to steel-man the claim that the reason human suffering matters to us is that it is specifically human suffering.
No, the latter was an afterthought. The discussion begins here.
I’m using ‘anthropocentric’ instead to mean ‘only human experiences matter’.
Ah, okay, to be clear, I’m not defending this view. I think it’s a strawman.
I don’t think which reasons happen to psychologically motivate us matters here.
I didn’t refer to psychological reasons. An example besides Kant’s (which is not psychological in the relevant sense) might be this: it is unethical to torture a cow because, though cows have no ethical significance in and of themselves, they do have ethical significance as domesticated animals, who are wards of our society. But that’s just an example of such a reason.
No, the latter was an afterthought. The discussion begins here.
I took the discussion to begin from Peter’s response to that comment, since that comment didn’t contain an argument, while Peter’s did. It would be weird for me to respond to Qiaochu’s request for an argument defending the moral significance of animal suffering by defending the idea that only human suffering is fundamental.
But this is getting to be a discussion about our discussion. I’m not tapping out, quite, but I would like us to move on to the actual conversation.
It would be weird for me to respond to Qiaochu’s request for an argument defending the moral significance of animal suffering by defending the idea that only human suffering is fundamental.
Not if you agreed with Qiaochu that no adequately strong reasons for caring about any non-human suffering have yet been presented. There’s no rule against agreeing with an OP.
Fair point, though we might be reading Qiaochu differently. I took him to be saying “I know of no reasons to take animal suffering as morally significant, though this is consistent with my treating it as if it is and with its actually being so.” I suppose you took him to be saying something more like “I don’t think there are any reasons to take animal suffering as morally significant.”
I don’t have good reasons to think my reading is better. I wouldn’t want to try and defend Qiaochu’s view if the second reading represents it.