Some might be willing to bite the bullet at this point, following some strongly held ethical principle of theirs (e.g. A, B, C, D, or E above) to the conclusion of excluding humans who lack certain cognitive capacities from moral concern. One could point out that people’s empathy, and indirect considerations about human rights, societal stability and so on, will ensure that this “loophole” in such an ethical view almost certainly remains without consequences for beings with human DNA. It is, after all, a convenient Schelling point to care about all humans (or at least all humans outside their mother’s womb).
This is pretty much my view. You dismiss it as unacceptable and absurd, but I would be interested in more detail on why you think that.
a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it
This definitely hits the absurdity heuristic, but I think it is fine. The problem with the Babyeaters in Three Worlds Collide is not that they eat their young but that “the alien children, though their bodies were tiny, had full-sized brains. They could talk. They protested as they were eaten, in the flickering internal lights that the aliens used to communicate.”
If I were told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not as if I would be less afraid of the torture or care less about averting it!
I would. Similarly, if I were going to undergo torture, I would be very glad if my capacity to form long-term memories were temporarily disabled.
(Speciesism has always seemed like a straw-man to me. How could someone with a reductionist worldview think that species classification matters morally? The “why species membership really is an absurd criterion” section is completely reasonable, reasonable enough that I have trouble seeing non-religious arguments against it.)
Your view seems consistent. All I can say is that I don’t understand why intelligence is relevant for whether you care about suffering. (I’m assuming that you think human infants can suffer, or at least don’t rule it out completely, otherwise we would only have an empirical disagreement.)
I would. Similarly, if I were going to undergo torture, I would be very glad if my capacity to form long-term memories were temporarily disabled.
Me too. But we can control for memories by comparing the scenario I outlined with a scenario where you are first tortured (in your normal mental state) and then have the memory erased.
Speciesism has always seemed like a straw-man to me. How could someone with a reductionist worldview think that species classification matters morally?
You’re right, it’s not a big deal once you point it out. The interesting thing is that even a lot of secular people will at first (and sometimes even afterwards) bring arguments against the view that animals matter that don’t stand up to the argument from species overlap. It seems like they simply aren’t thinking through all the implications of what they are saying, as if it isn’t their true rejection. Having said that, there is always the option of biting the bullet, but many people who argue against caring about nonhumans don’t actually want to do that.
I’m assuming that you think human infants can suffer
I definitely think human infants can suffer, but I think their suffering is different from that of adult humans in an important way. See my response to Xodarap.
All I can say is that I don’t understand why intelligence is relevant for whether you care about suffering.
Intelligence is relevant for the extent to which I expect alleviating suffering to have secondary positive effects. Since I expect most of the value of suffering alleviation to come through secondary effects on the far future, I care much more about human suffering than animal suffering.
As far as I can tell, animal suffering and human suffering are comparably important from a utility-function standpoint, but the difference in expected value between alleviating human and animal suffering is huge—the difference in potential impact on the future between a suffering human and a non-suffering human is massive compared to that between a suffering animal and a non-suffering animal.
Basically, it seems like alleviating one human’s suffering has more potential to help the far future than alleviating one animal’s suffering. A human who would otherwise be too incapacitated to, say, deal with x-risk might become helpful, while an animal is still not going to be consequential on that front.
So my opinion winds up being something like “We should help the animals, but not now, or even soon, because other issues are more important and more pressing”.
I agree with this point entirely—but at the same time, becoming vegetarian is such a cheap change in lifestyle (given an industrialized society) that you can have your cake and eat it too. Action—such as devoting time / money to animal rights groups—has to be balanced against other action—helping humans—but that doesn’t apply very strongly to inaction—not eating meat.
You can come up with costs—social, personal, etc. to being vegetarian—but remember to weigh those costs on the right scale. And most of those costs disappear if you merely reduce meat consumption, rather than eliminate it outright.
You can come up with costs—social, personal, etc. to being vegetarian—but remember to weigh those costs on the right scale.
By saying this, you’re trying to gloss over the very reason why becoming vegetarian is not a cheap change. Human beings are wired so as not to be able to ignore having to make many minor decisions or face many minor changes, and the fact that such things cannot be ignored means that being vegetarian actually has a high cost: being mentally nickel-and-dimed over and over again. It’s a cheap change in the sense that you can do it without paying lots of money or spending lots of time, but that isn’t sufficient to make the choice cheap in all meaningful senses.
Or to put it another way, being a vegetarian “just to try it” is like running a shareware program that pops up a nag screen every five minutes and occasionally forces you to type a random phrase in order to continue to run. Sure, it’s light on your pocketbook, doesn’t take much time, and reading the nag screens and typing the phrases isn’t difficult, but that’s beside the point.
As has been mentioned elsewhere in this conversation, that’s a fully general argument—it can be applied to every change one might possibly make in one’s behavior.
Let’s enumerate the costs, rather than just saying “there are costs.”
Money-wise, you save or break even.
It has no time cost in much of the US (most restaurants have vegetarian options).
The social cost depends on your situation—if you have people who cook for you, then you have to explain the change to them (in Washington state, this cost is tiny—people are understanding. In Texas, it is expensive).
The mental cost is difficult to discuss in a universal way. I found it to be rather small in my own case. Other people claim it to be quite large. But “I don’t want to change my behavior because changing behavior is hard” is not terribly convincing.
Your discounting of non-human life has to be rather extreme for “I will have to remind myself to change my behavior” to outweigh an immediate, direct and calculable reduction in world suffering.
This is false. Unless you eat steak or other expensive meats on a regular basis, meat is quite cheap. For example, my meat consumption is mostly chicken, assorted processed meats (salamis, frankfurters, and other sorts of sausages, mainly, but also things like pelmeni), fish (not the expensive kind), and the occasional pork (canned) and beef (cheap cuts). None of these things are pricy; I am getting a lot of protein (and fat and other good/necessary stuff) for my money.
It has no time cost in much of the US (most restaurants have vegetarian options).
Do you eat at restaurants all the time? Learning how to cook the new things you’re now eating instead of meat is a time cost.
Also, there are costs you don’t mention: for instance, a sudden, radical change in diet may have unforeseen health consequences. If the transition causes me to feel hungry all the time, that would be disastrous; hunger has an extreme negative effect on my mental performance, and as a software engineer, that is not the slightest bit acceptable. Furthermore, for someone with food allergies, like me, trying new foods is not without risk.
it can be applied to every change one might possibly make in one’s behavior.
And it would be correct to deny, of any change one might possibly make to one’s behavior, that it is “such a cheap change” that we hardly need to weigh its cost.
Your discounting of non-human life has to be rather extreme for “I will have to remind myself to change my behavior” to outweigh an immediate, direct and calculable reduction in world suffering.
That only applies to someone who already agrees with you about animal suffering to a sufficient degree that he should just become a vegetarian immediately anyway. Otherwise it’s not all that calculable.
I wasn’t able to glean this from your other article either, so I apologize if you’ve said it before: do you think non-human animals suffer? Or do you believe they suffer, but you just don’t care about their suffering?
(And in either case, why?)
I think suffering is qualitatively different when it’s accompanied by some combination (one I don’t fully understand) of intelligence, self-awareness, preferences, etc. So yes, humans are not the only animals that can suffer, but they’re the only animals whose suffering is morally relevant.
jkaufman, the dimmer-switch metaphor of consciousness is intuitively appealing. But consider some of the most intense experiences that humans can undergo, e.g. orgasm, raw agony, or blind panic. Such intense experiences are characterised by a breakdown of any capacity for abstract rational thought or reflective self-awareness. Neuroscanning evidence, too, suggests that much of our higher brain function effectively shuts down during the experience of panic or orgasm. Contrast this intensity of feeling with the subtle and rarefied phenomenology involved in e.g. language production, solving mathematical equations, introspecting one’s thought-episodes, etc.—all those cognitive capacities that make mature members of our species distinctively human. For sure, this evidence is suggestive, not conclusive. But it converges with e.g. microelectrode studies using awake human subjects. Such studies suggest the limbic brain structures that generate our most intense experiences are evolutionarily very ancient. Also, the same genes, same neurotransmitter pathways and same responses to noxious stimuli are found in our fellow vertebrates. In view of how humans treat nonhumans, I think we ought to be worried that humans could be catastrophically mistaken about nonhuman animal sentience.
How certain are you that there is such a qualitative difference, and that you want to care about it? If there is some empirical (or perhaps also normative) uncertainty, shouldn’t you at least attribute some amount of concern to sentient beings that lack self-awareness?
I second this. Really not sure what justifies such confidence.
It strikes me that the only “disagreement” you have with the OP is that your reasoning isn’t completely spelled out.
If you said, for example, “I don’t believe pigs’ suffering matters as much because they don’t show long-term behavior modifications as a result of painful stimuli” that wouldn’t be a speciesist remark. (It might be factually wrong, though.)
There’s something missing at the end, like ”… is morally relevant”, right?
Fixed; thanks!
How do you avoid it being kosher to kill you when you’re asleep—and thus unable to perform at your usual level of consciousness—if you don’t endorse some version of the potential principle?
If you were to sleep and never wake, then it wouldn’t necessarily seem wrong, even from my perspective, to kill you. It seems like it’s your potential for waking up that makes it wrong.
Killing me when I’m asleep is wrong for the same reason as killing me instantly and painlessly when I’m awake is wrong. Both ways I don’t get to continue living this life that I enjoy.
(I’m not as anti-death as some people here.)
So, presumably, if you were destined for a life of horrifying squicky pain some time in the next couple of weeks, you’d approve of me just killing you. I mean, ideally you’d probably like to be killed as close to the point of HSP as possible, but still, the future seems pretty important when determining whether you want to persist—it’s even in the text you linked:
A death is bad because of the effect it has on those that remain and because it removes the possibility for future joy on the part of the deceased.
So, bearing in mind that you don’t always seem to be performing at your normal level of thought—e.g. when you’re asleep—how do you bind that principle so that it applies to you and not infants?
I don’t think you should kill infants either, again for the “effect it has on those that remain and because it removes the possibility for future joy on the part of the deceased” logic.
How do you reconcile that with:
a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it
This definitely hits the absurdity heuristic, but I think it is fine. The problem with the Babyeaters in Three Worlds Collide is not that they eat their young but that “the alien children, though their bodies were tiny, had full-sized brains. They could talk. They protested as they were eaten, in the flickering internal lights that the aliens used to communicate.”
The “as long as the people are okay with it” deals with the “effect it has on those that remain”. The “removes the possibility for future joy on the part of the deceased” remains, but depending on what benefits the society was getting out of consuming their young it might still come out ahead. The future experiences of the babies are one consideration, but not the only one.
Granted, but do you really think that they’re going to be so incredibly tasty that the value people gain from eating babies over not eating babies outweighs the loss of all the future experiences of the babies?
To link that back to the marginal cases argument, which I believe—correct me if I’m wrong—you were responding to: Do you think that meat diets are just that much more tasty than vegetarian diets that the utility gained for human society outweighs the suffering and death of the animals? (Which may not be the only consideration, but I think at this point—I may be wrong—you’d admit isn’t nothing.) If so, have you made an honest attempt to test this assumption for yourself by, for instance, getting a bunch of highly rated veg recipes and trying to be vegetarian for a month or so?
that the value people gain from eating babies over not eating babies outweighs the loss of all the future experiences of the babies?
The value a society might get from it isn’t limited to taste. They could have some sort of complex and fulfilling system set up around it. But I think you’re right that any world I can think of where people are eating (some of) their babies would be improved by their stopping.
that the utility gained for human society outweighs the suffering and death of the animals?
The “loss of all the future experiences of the babies” bit doesn’t apply here. Animals stay creatures without moral worth through their whole lives, and so the “suffering and death of the animals” here has no moral value.
The “loss of all the future experiences of the babies” bit doesn’t apply here. Animals stay creatures without moral worth through their whole lives, and so the “suffering and death of the animals” here has no moral value.
Pigs can meaningfully play computer games. Dolphins can communicate with people. Wolves have complex social structures and hunting patterns. I take all of these to be evidence of intelligence beyond the battery-farmed infant level. They’re not as smart as humans, but it’s not like they’ve got zero potential for developing intelligence. Since birth seems to deprive you of a clear point in this regard—what are your criteria for being smart enough to be morally considerable, and why?
If you’re considering opening a baby farm, not opening the baby farm doesn’t mean the babies get to live fulfilling lives: it means they don’t get to exist, so that point is moot.
If you view human potential as valuable, then you end up saying something like: people should maximise it via breeding, up to whatever the resource boundary is for meaningful human life. Unless that value is implicitly bounded—which I think is a reasonable assumption to make for most people’s likely worldviews.
I would. Similarly, if I were going to undergo torture, I would be very glad if my capacity to form long-term memories were temporarily disabled.
Is this because you expect the torture wouldn’t be as bad if that happened or because you would care less about yourself in that state? Or a combination?
Similarly, if I were going to undergo torture, I would be very glad if my capacity to form long-term memories were temporarily disabled.
What if you were killed immediately afterwards, so long-term memories wouldn’t come into play?
Is this because you expect the torture wouldn’t be as bad if that happened or because you would care less about yourself in that state? Or a combination?
If I had the mental capacity of a chicken it would not be bad to torture me, both because I wouldn’t matter morally and because I wouldn’t be “me” anymore in any meaningful sense.
What if you were killed immediately afterwards
If you offered me the choice between:
A) 50% chance you are tortured and then released, 50% chance you are killed immediately
B) 50% chance you are tortured and then killed, 50% chance you are released immediately
I would strongly prefer B. Is that what you’re asking?
If I had the mental capacity of a chicken it would not be bad to torture me, both because I wouldn’t matter morally and because I wouldn’t be “me” anymore in any meaningful sense.
If not morally, do the two situations not seem equivalent in terms of your non-moral preference for either? In other words, would you prefer one over the other in purely self-interested terms?
I would strongly prefer B. Is that what you’re asking?
I was just making the point that if your only reason for thinking that it would be worse for you to be tortured now was that you would suffer more overall through long-term memories, we could just stipulate that you would be killed afterwards in both situations so long-term memories wouldn’t be a factor.
we could just stipulate that you would be killed afterwards in both situations so long-term memories wouldn’t be a factor
I see. Makes sense. I was giving long-term memory formation as an example of a way you could remove part of my self and decrease how much I objected to being tortured, but it’s not the only way.
I’m sorry, I’m confused. Which two situations?
A) Being tortured as you are now
B) Having your IQ and cognitive abilities lowered then being tortured.
EDIT:
I am asking because it is useful to consider pure self-interest: it seems like a failure of a moral theory if it suggests people act against their self-interest without some compensating goodness. If I want to eat an apple but my moral theory says that I shouldn’t, even though doing so wouldn’t harm anyone else, that seems like a point against that moral theory.
I see. Makes sense. I was giving long-term memory formation as an example of a way you could remove part of my self and decrease how much I objected to being tortured, but it’s not the only way.
Different cognitive abilities would matter in some ways for how much suffering is actually experienced, but not as much as most people think. There are also situations where it seems like lower cognitive ability could increase the amount an animal suffers: while a chicken is being tortured, it would not really be able to hope that the situation will change.
Strong preference for (B), having my cognitive abilities lowered to the point that there’s no longer anyone there to experience the torture.
Those are not the same thing. They’re not even remotely similar beyond both involving brain surgery.
Me too, but I never could persuade the people arguing for it of this fact :(
Agreed.
I was attempting to give an example of other ways in which I might find torture more palatable if I were modified first.
Right, which is why this argument isn’t actually a straw-man and why ice9’s post is useful.
Ah, OK.
Hah, yes. Sorry, I thought you were complaining it was actually a strawman :/ Whoops.