Human suffering might be orders of magnitude more important. (Though: what reason do you have in mind for this?) But non-human animal suffering is likely to be orders of magnitude more common. Some non-human animals are probably capable of suffering, and we care a great deal about suffering in the case of humans (as, presumably, we would in the case of intelligent aliens). So it seems arbitrary to exclude non-human animal suffering from our concerns completely. Moreover, if you’re uncertain about whether animals suffer, you should err on the side of assuming that they do because this is the safer assumption. Mistakenly killing thousands of suffering moral patients over your lifetime is plausibly a much bigger worry than mistakenly sparing thousands of unconscious zombies and missing out on some mouth-pleasures.
I’m not a vegetarian myself, but I do think vegetarianism is a morally superior option. I also think vegetarians should adopt a general policy of not paying people to become vegetarians (except perhaps as a short-term experiment, to incentivize trying out the lifestyle).
Human suffering might be orders of magnitude more important. (Though: what reason do you have in mind for this?)
I’m a human and I care about humans. Animals only matter insofar as they affect the lives of humans. Is this really such a difficult concept?
But non-human animal suffering is likely to be orders of magnitude more common.
I don’t mean per organism, I mean in aggregate. In aggregate, I think the totality of animal suffering is orders of magnitude less important than the totality of human suffering.
Moreover, if you’re uncertain about whether animals suffer, you should err on the side of assuming that they do because this is the safer assumption.
I’m not disagreeing that animals suffer. I’m telling you that I don’t care whether they suffer.
You are many things: a physical object, a living being, a mammal, a member of the species Homo sapiens, an East Asian (I believe), etc. What’s so special about the particular category you picked?
The psychological unity of humankind. See also this comment.
Presumably mammals also exhibit more psychological similarity than non-mammals, and the same is probably true about East Asians relative to members of other races. What makes the psychological unity of mankind special?
Moreover, it seems that insofar as you care about humans because they have certain psychological traits, you should care about any creature that has those traits. Since many animals have many of the traits that humans have, and some animals have those traits to a greater degree than some humans do, it seems you should care about at least some nonhuman animals.
it seems you should care about at least some nonhuman animals.
I’m willing to entertain this possibility. I’ve recently been convinced that I should consider caring about dolphins and other similarly intelligent animals, possibly including pigs (so I might be willing to give up pork). I still don’t care about fish or chickens. I don’t think I can have a meaningful relationship with a fish or a chicken even in principle.
Presumably mammals also exhibit more psychological similarity than non-mammals, and the same is probably true about East Asians relative to members of other races. What makes the psychological unity of mankind special?
I suspect that if you plotted all living beings by psychological similarity with Qiaochu_Yuan, there would be a much bigger gap between the -- [reminds himself about small children, people with advanced-stage Alzheimer’s, etc.] never mind.
(I could steelman my yesterday self by noticing that even though small children aren’t similar to QY they can easily become so in the future, and by replacing “gap” with “sparsely populated region”.)
Moreover, it seems that insofar as you care about humans because they have certain psychological traits, you should care about any creature that has those traits. Since many animals have many of the traits that humans have, and some animals have those traits to a greater degree than some humans do, it seems you should care about at least some nonhuman animals.
Doesn’t follow. If we imagine a personhood metric for animals evaluated over some reasonably large number of features, it might end up separating (most) humans from all nonhuman animals even if for each particular feature there exist some nonhuman animals that beat humans on it. There’s no law of ethics saying that the parameter space has to be small.
It’s not likely to be a clean separation, and there are almost certainly some exceptional specimens of H. sapiens that wouldn’t stand up to such a metric, but—although I can’t speak for Qiaochu—that’s a bullet I’m willing to bite.
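A toy sketch of the parameter-space point above (every feature score here is invented purely for illustration):

```python
# Toy "personhood metric": each row scores one individual on four
# hypothetical features. All numbers are invented for illustration.
humans = [
    [0.7, 0.8, 0.6, 0.9],  # humans are moderately good across the board
    [0.6, 0.7, 0.8, 0.7],
]
animals = [
    [1.2, 0.1, 0.1, 0.1],  # each animal beats every human on one feature...
    [0.1, 1.3, 0.1, 0.1],
    [0.1, 0.1, 1.2, 0.1],
    [0.1, 0.1, 0.1, 1.4],  # ...but scores low on the rest
]

# On every single feature, the best animal outscores the best human:
for i in range(4):
    assert max(a[i] for a in animals) > max(h[i] for h in humans)

# Yet the aggregate metric still separates the two groups cleanly:
assert min(sum(h) for h in humans) > max(sum(a) for a in animals)
```

So per-feature overlap is compatible with clean aggregate separation; nothing forces the two orderings to agree.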
Since many animals have many of the traits that humans have, and some animals have those traits to a greater degree than some humans do, it seems you should care about at least some animals.
Does not follow, since an equally valid conclusion is that Qiaochu_Yuan should not-care about some humans (those that exhibit relevant traits less than some nonhuman animals). One person’s modus ponens is etc.
I’m a human and I care about humans. Animals only matter insofar as they affect the lives of humans.
Every human I know cares at least somewhat about animal suffering. We don’t like seeing chickens endlessly and horrifically tortured—and when we become vividly acquainted with such torture, our not-liking-it generally manifests as a desire for the torture to stop, not just as a desire to become ignorant that this is going on so it won’t disturb our peace of mind. I’ll need more information to see where the disanalogy is supposed to be between compassion for other species and compassion for other humans.
I’m not disagreeing that animals suffer. I’m telling you that I don’t care whether they suffer.
Are you certain you don’t care?
Are you certain that you won’t end up viewing this dispassion as a bias on your part, analogous to people in history who genuinely didn’t care at all about black people (but would regret and abandon this apathy if they knew all the facts)?
If you feel there’s any realistic chance you might discover that you do care in the future, you should again err strongly on the side of vegetarianism. Feeling a bit silly 20 years from now because you avoided torturing beings it turns out you don’t care about is a much smaller cost than learning 20 years from now that you’re the Hitler of cows. Vegetarianism accommodates meta-uncertainty about ethical systems better than its rivals do.
I’ll need more information to see where the disanalogy is supposed to be between compassion for other species and compassion for other humans.
I don’t feel psychologically similar to a chicken in the same way that I feel psychologically similar to other humans.
Are you certain you don’t care?
No, or else I wouldn’t be asking for arguments.
If you feel there’s any chance you might discover that you do care in the future, you should again err strongly on the side of vegetarianism. Feeling a bit silly 20 years from now because you avoided torturing beings it turns out you don’t care about is a much smaller cost than learning 20 years from now that you’re the Hitler of cows. Vegetarianism accommodates meta-uncertainty about ethical systems better than its rivals do.
I don’t feel psychologically similar to a chicken in the same way that I feel psychologically similar to other humans.
I don’t either, but unless I can come up with a sharp and universal criterion for distinguishing all chickens from all humans, chickens’ psychological alienness to me will seem a difference of degree more than of kind. It’s a lot easier to argue that chicken suffering matters less than human suffering (or to argue that chickens are zombies) than to argue that chicken suffering is completely morally irrelevant.
Some chickens may very well have more psychologically in common with me than I have in common with certain human infants or with certain brain-damaged humans; but I still find myself able to feel that sentient infants and disabled sentient humans oughtn’t be tortured. (And not just because I don’t want their cries to disturb my own peace of mind. Nor just because they could potentially become highly intelligent, through development or medical intervention. Those might enhance the moral standing of any of these organisms, but they don’t appear to exhaust it.)
This is a good point.
That’s not a good point; it’s a variety of Pascal’s Mugging: you’re suggesting that the fact that the possible consequence is large (“I tortured beings” is a really negative thing) means that even if the chance is small, you should act on that basis.
It’s not a variant of Pascal’s Mugging, because the chances aren’t vanishingly small and the payoff isn’t nearly infinite.
I’m telling you that I don’t care whether they suffer.
I don’t believe you. If you see someone torturing a cat, a dolphin or a monkey, would you feel nothing? (Suppose that they are not likely to switch to torturing humans, to avoid “gateway torture” complications.)
My problem with this question is that if I see video of someone torturing a cat when I am confident there was no actual cat-torturing involved in creating those images (e.g., I am confident it was all photoshopped), what I feel is pretty much indistinguishable from what I feel if I see video of someone torturing a cat when I am confident there was actual cat-torturing.
So I’m reluctant to treat what I feel in either case as expressing much of an opinion about suffering, since I feel it roughly equally when I believe suffering is present and when I don’t.
So if you can factor-out, so to speak, the actual animal suffering: If you had to choose between “watch that video, no animal was harmed” versus “watch that video, an animal was harmed, also you get a biscuit (not the food, the 100 squid (not the animals, the pounds (not the weight unit, the monetary unit)))”, which would you choose? (Your feelings would be the same, as you say, your decision probably wouldn’t be. Just checking.)
What?
A biscuit provides the same number of calories as 100 SQUID, which stands for Superconducting Quantum Interference Device, which weigh a pound apiece, which masses 453.6 grams, which converts to 4 * 10^16 joules, which can be converted into 1.13 * 10^10 kilowatt-hours, which are worth 12 cents per kW-hr, so around 136 billion dollars or so.
...plus a constant.
Reminds me of … Note the name of the website. She doesn’t look happy! “I am altering the deal. Pray I don’t alter it any further.”
Edit: Also, 1.13 * 10^10 kilowatt-hours at 12 cents each yields 1.36 billion dollars, not 136 billion dollars! An honest mistake (cents, not dollars per kWh), or a scam? And as soon as Dmitry is less active …
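Checking the corrected arithmetic directly (a quick sketch; the one-pound mass and 12-cents-per-kWh price are the figures assumed in the joke above):

```python
# Re-running the mass-energy arithmetic from the comments above
# (assumed figures: one pound of mass, electricity at $0.12/kWh).
mass_kg = 0.4536                 # one pound in kilograms
c = 2.998e8                      # speed of light, m/s
energy_j = mass_kg * c ** 2      # E = mc^2, about 4.1e16 J
kwh = energy_j / 3.6e6           # one kWh is 3.6e6 J, about 1.13e10 kWh
dollars = 0.12 * kwh             # 12 cents per kWh
print(round(dollars / 1e9, 2))   # about 1.36, i.e. $1.36 billion, not $136 billion
```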
“squid” is slang for a GBP, i.e. Pound Sterling, although I’m more used to hearing the similar “quid.” One hundred of them can be referred to as a “biscuit,” apparently because of casino chips, similar to how people in America will sometimes refer to a hundred dollars as a “benjamin.”
That is, what are TheOtherDave’s preferences between watching an unsettling movie that does not correspond to reality and watching an unsettling movie that does correspond to reality, but they’re paid some cash.
“Quid” is slang, “squid” is a commonly used jokey soundalike. There’s a joke that ends “here’s that sick squid I owe you”.
EDIT: also, never heard “biscuit” = £100 before; that’s a “ton”.
Does Cockney rhyming slang not count as slang?
In this case it seems to. It’s the first time I recall encountering it but I’m not British and my parsing of unfamiliar and ‘rough’ accents is such that if I happened to have heard someone say ‘squid’ I may have parsed it as ‘quid’, and discarded the ‘s’ as noise from people saying a familiar term in a weird way rather than a different term.
It amuses me that despite making neither head nor tail of the unpacking, I answered the right question. Well, to the extent that my noncommittal response can be considered an answer to any question at all.
Well, I figured that much out from googling, but I was more reacting to what seems like a deliberate act of obfuscation on Kawoomba’s part that serves no real purpose.
Nested parentheses are their own reward, perhaps?
In an interesting twist, in many social circles (not here) your use of the word “obfuscation” would be obfuscatin’ in itself.
To be very clear though: “Eschew obfuscation, espouse elucidation.”
So to be clear—you do some Googling and find two videos, one has realistic CGI animal harm, the other real animal harm; assume the CGI etc is so good that I wouldn’t be able to tell which was which if you hadn’t told me. You don’t pay for the animal harm video, or in any way give anyone an incentive to harm an animal in fetching it; just pick up a pre-existing one. I have a choice between watching the fake-harm video (and knowing it’s fake) or watching the real-harm video and receiving £100.
If the reward is £100, I’ll take the £100; if it’s an actual biscuit, I prefer to watch the fake-harm video.
I’m genuinely unsure, not least because of your perplexing unpacking of “biscuit”.
Both examples are unpleasant; I don’t have a reliable intuition as to which is more so if indeed either is.
I have some vague notion that if I watch the real-harm video that might somehow be interpreted as endorsing real-harm more strongly than if I watch the fake-harm video, like through ratings or download monitoring or something, which inclines me to the fake-harm video. Though whether I’m motivated by the vague belief that such differential endorsement might cause more harm to animals, or by the vague belief that it might cause more harm to my status, I’m again genuinely unsure of. In the real world I usually assume that when I’m not sure it’s the latter, but this is such a contrived scenario that I’m not confident of that either.
If I assume the biscuit is a reward of some sort, then maybe that reward is enough to offset the differential endorsement above, and maybe it isn’t.
I don’t want to see animals get tortured because that would be an unpleasant thing to see, but there are lots of things I think are unpleasant things to see that don’t have moral valence (in another comment I gave the example of seeing corpses get raped).
I might also be willing to assign dolphins and monkeys moral value (I haven’t made up my mind about this), but not most animals.
Do you have another example besides the assault of corpses? I can easily see real moral repugnance from the effect it has on the offenders, who are victims of their own actions. If you find it unpleasant only when you see it, would not they find it horrific when they perform it?
Also in these situations, repugnance can leak due to uncertainty of other real moral outcomes, such as the (however small) likelihood of family members of the deceased learning of the activity, for whom these corpses have real moral value.
Two Girls One Cup?
Seeing humans perform certain kinds of body modifications would also be deeply unpleasant to me, but it’s also not an act I assign moral valence to (I think people should be allowed to modify their bodies more or less arbitrarily).
I’ll chime in to comment that QiaochuYuan’s[1] views as expressed in this entire thread are quite similar to my own (with the caveat that for his “human” I would substitute something like “sapient, self-aware beings of approximately human-level intelligence and above” and possibly certain other qualifiers having to do with shared values, to account for Yoda/Spock/AIs/whatever; it seems like QiaochuYuan uses “approximately human” to mean roughly this).
So, please reconsider your disbelief.
[1] Sorry, the board software is doing weird things when I put in underscores...
If I did have a pet, it is possible that I would not care for it (assuming animal cruelty laws did not exist), although it is more likely that I would develop an attachment to it, and would come to care about its well-being. That is how humans work, in my experience. I don’t think this necessarily has any implications w.r.t. the moral status of nonhuman animals.
Do you consider young children and very low-intelligence people to be morally-relevant?
(If—in the case of children—you consider potential for later development to be a key factor, we can instead discuss only children who have terminal illnesses.)
Long answer: When I read Peter Singer, what I took away was not, as many people here apparently did, that we should value animals; what I took away is that we should not value fetuses, newborns, and infants (to a certain age, somewhere between 0 and 2 years [1]). That is, I think the cutoff for moral relevance is somewhere above, say, cats, dogs, newborns… where exactly? I’m not sure.
Humans who have a general intelligence so low that they are incapable of thinking about themselves as conscious individuals are also, in my view, not morally relevant. I don’t know whether such humans exist (most people with Down syndrome don’t quite seem to fit that criterion, for instance).
There are many caveats and edge cases, for instance: what if the low-intelligence condition is temporary, and will repair itself with time? Then I think we should consider the wishes of the self that the person was before the impairment, and the rights of their future, non-impaired, selves. But what if the impairment can be repaired using medical technology? Same deal. What if it can’t? Then I would consider this person morally irrelevant. What if the person was of extremely low intelligence, and had always been so, but we could apply some medical intervention to raise their intelligence to at least normal human level? I would consider that act morally equivalent to creating a new sapient being (whether that’s good or bad is a separate question).
So: it’s complicated. But to answer practical questions: I don’t consider infanticide the moral equivalent of murder (although it’s reasonable to outlaw it anyway, as birth is a good Schelling point, but the penalty should surely be nowhere near as harsh as for killing an adult or older child). The rights of low-intelligence people is a harder issue partly because there are no obvious cutoffs or metrics.
I hope that answers your question; if not, I’ll be happy to elaborate further.
Would that allow absorbing some of his midichlorians? Black magic! Well, I might try (since he died of natural causes anyway). But Yoda dies without leaving a corpse. It would be difficult. The only viable strategy would seem to be to have Yoda anesthetize himself a minute before he ghosts (“becomes one with the force”). Then the flesh would remain corporeal for consumption.
The real ethical test would be: would I freeze Yoda’s head in carbonite, acquire brain-scanning technology, and upload him into a robot body? Yoda may have religious objections to the practice, so I may honour his preferences while being severely disappointed. I suspect I’d choose the Dark Side of the Force myself. The Sith philosophy seems much more compatible with life extension by whatever means necessary.
It should be noted that Yoda has an observable afterlife. Obi-Wan had already appeared after his body had died, apparently in full possession of his memories and his reasoning abilities; Yoda proposes to follow in Obi-Wan’s footsteps, and has good reason to believe that he will be able to do so.
Actual use of Sith techniques seems to turn people evil at ridiculously accelerated rates. At least in-universe it seems that sensible people would write off this attractive-sounding philosophy as window dressing on an extremely damaging set of psychic techniques.
I wouldn’t eat flies or squids either. But I know that that’s a cultural construct.
Let’s ask another question: would I care if someone else eats Yoda?
Well, I might, but only because eating Yoda is, in practice, correlated with lots of other things I might find undesirable. If I could be assured that such was not the case (for instance, if there was another culture which ate the dead to honor them, that’s why he ate Yoda, and Yoda’s will granted permission for this), then no, I wouldn’t care if someone else eats Yoda.
Well, I might, but only because eating Yoda is, in practice, correlated with lots of other things I might find undesirable.
In practice? In common Yoda-eating practice? Something about down to earth ‘in practice’ empirical observations about things that can not possibly have ever occurred strikes me as broken. Perhaps “would be, presumably, correlated with”.
If I could be assured that such was not the case (for instance, if there was another culture which ate the dead to honor them, that’s why he ate Yoda, and Yoda’s will granted permission for this), then no, I wouldn’t care if someone else eats Yoda.
In Yoda’s case he could even have just asked for permission from Yoda’s force ghost. Jedi add a whole new level of meaning to “Living Will”.
“In practice” doesn’t mean “this is practiced”, it means “given that this is done, what things are, with high probability, associated with it in real-life situations” (or in this case, real-life-+-Yoda situations). “In practice” can apply to rare or unique events.
I really don’t think statements of the form “X is, in practice, correlated with Y” should apply to situations where X has literally never occurred. You might want to say “I expect that X would, in practice, be correlated with Y” instead.
Human suffering might be orders of magnitude more important. (Though: what reason do you have in mind for this?) But non-human animal suffering is likely to be orders of magnitude more common. Some non-human animals are probably capable of suffering, and we care a great deal about suffering in the case of humans (as, presumably, we would in the case of intelligent aliens). So it seems arbitrary to exclude non-human animal suffering from our concerns completely. Moreover, if you’re uncertain about whether animals suffer, you should err on the side of assuming that they do because this is the safer assumption. Mistakenly killing thousands of suffering moral patients over your lifetime is plausibly a much bigger worry than mistakenly sparing thousands of unconscious zombies and missing out on some mouth-pleasures.
I’m not a vegetarian myself, but I do think vegetarianism is a morally superior option. I also think vegetarians should adopt a general policy of not paying people to become vegetarians (except perhaps as a short-term experiment, to incentivize trying out the lifestyle).
I’m a human and I care about humans. Animals only matter insofar as they affect the lives of humans. Is this really such a difficult concept?
I don’t mean per organism, I mean in aggregate. In aggregate, I think the totality of animal suffering is orders of magnitude less important than the totality of human suffering.
I’m not disagreeing that animals suffer. I’m telling you that I don’t care whether they suffer.
You are many things: a physical object, a living being, a mammal, a member of the species Homo sapiens, an East Asian (I believe), etc. What’s so special about the particular category you picked?
The psychological unity of humankind. See also this comment.
Presumably mammals also exhibit more psychological similarity than non-mammals, and the same is probably true about East Asians relative to members of other races. What makes the psychological unity of mankind special?
Moreover, it seems that insofar as you care about humans because they have certain psychological traits, you should care about any creature that has those traits. Since many animals have many of the traits that humans have, and some animals have those traits to a greater degree than some humans do, it seems you should care about at least some nonhuman animals.
I’m willing to entertain this possibility. I’ve recently been convinced that I should consider caring about dolphins and other similarly intelligent animals, possibly including pigs (so I might be willing to give up pork). I still don’t care about fish or chickens. I don’t think I can have a meaningful relationship with a fish or a chicken even in principle.
I suspect that if you plotted all living beings by psychological similarity with Qiaochu_Yuan, there would be a much bigger gap between the -- [reminds himself about small children, people with advanced-stage Alzheimer’s, etc.] never mind.
:-)
(I could steelman my yesterday self by noticing that even though small children aren’t similar to QY they can easily become so in the future, and by replacing “gap” with “sparsely populated region”.)
Doesn’t follow. If we imagine a personhood metric for animals evaluated over some reasonably large number of features, it might end up separating (most) humans from all nonhuman animals even if for each particular feature there exist some nonhuman animals that beat humans on it. There’s no law of ethics saying that the parameter space has to be small.
It’s not likely to be a clean separation, and there are almost certainly some exceptional specimens of H. sapiens that wouldn’t stand up to such a metric, but—although I can’t speak for Qiaochu—that’s a bullet I’m willing to bite.
Does not follow, since an equally valid conclusion is that Qiaochu_Yuan should not-care about some humans (those that exhibit relevant traits less than some nonhuman animals). One person’s modus ponens is etc.
Every human I know cares at least somewhat about animal suffering. We don’t like seeing chickens endlessly and horrifically tortured—and when we become vividly acquainted with such torture, our not-liking-it generally manifests as a desire for the torture to stop, not just as a desire to become ignorant that this is going on so it won’t disturb our peace of mind. I’ll need more information to see where the disanalogy is supposed to be between compassion for other species and compassion for other humans.
Are you certain you don’t care?
Are you certain that you won’t end up viewing this dispassion as a bias on your part, analogous to people in history who genuinely didn’t care at all about black people (but would regret and abandon this apathy if they knew all the facts)?
If you feel there’s any realistic chance you might discover that you do care in the future, you should again err strongly on the side of vegetarianism. Feeling a bit silly 20 years from now because you avoided torturing beings it turns out you don’t care about is a much smaller cost than learning 20 years from now you’re the hitler of cows. Vegetarianism accommodates meta-uncertainty about ethical systems better than its rivals do.
I don’t feel psychologically similar to a chicken in the same way that I feel psychologically similar to other humans.
No, or else I wouldn’t be asking for arguments.
This is a good point.
I don’t either, but unless I can come up with a sharp and universal criterion for distinguishing all chickens from all humans, chickens’ psychological alienness to me will seem a difference of degree more than of kind. It’s a lot easier to argue that chicken suffering matters less than human suffering (or to argue that chickens are zombies) than to argue that chicken suffering is completely morally irrelevant.
Some chickens may very well have more psychologically in common with me than I have in common with certain human infants or with certain brain-damaged humans; but I still find myself able to feel that sentient infants and disabled sentient humans oughtn’t be tortured. (And not just because I don’t want their cries to disturb my own peace of mind. Nor just because they could potentially become highly intelligent, through development or medical intervention. Those might enhance the moral standing of any of these organisms, but they don’t appear to exhaust it..)
That’s not a good point, that’s a variety of Pascal’s Mugging: you’re suggesting that the fact that the possible consequence is large (“I tortured beings” is a really negative thing) means that even fi the chance is small, you should act on that basis.
It’s not a variant of Pascal’s Mugging, because the chances aren’t vanishingly small and the payoff isn’t nearly infinite.
I don’t believe you. If you see someone torturing a cat, a dolphin or a monkey, would you feel nothing? (Suppose that they are not likely to switch to torturing humans, to avoid “gateway torture” complications.)
My problem with this question is that if I see video of someone torturing a cat when I am confident there was no actual cat-torturing involved in creating those images (e.g., I am confident it was all photoshopped), what I feel is pretty much indistinguishable from what I feel if I see video of someone torturing a cat when I am confident there was actual cat-torturing.
So I’m reluctant to treat what I feel in either case as expressing much of an opinion about suffering, since I feel it roughly equally when I believe suffering is present and when I don’t.
So if you can factor-out, so to speak, the actual animal suffering: If you had to choose between “watch that video, no animal was harmed” versus “watch that video, an animal was harmed, also you get a biscuit (not the food, the 100 squid (not the animals, the pounds (not the weight unit, the monetary unit)))”, which would you choose? (Your feelings would be the same, as you say, your decision probably wouldn’t be. Just checking.)
What?
A biscuit provides the same number of calories as 100 SQUID, which stands for Superconducting Quantum Interference Device, which weigh a pound apiece, which masses 453.6 grams, which converts to 4 10^16 joules, which can be converted into 1.13 10^10 kilowatt-hours, which are worth 12 cents per kW-hr, so around 136 billion dollars or so.
...plus a constant.
Reminds me of … Note the name of the website. She doesn’t look happy! “I am altering the deal. Pray I don’t alter it any further.”
Edit: Also, 1.13 * 10^10 kilowatt-hours at 12 cents each yields 1.36 billion dollars, not 136 billion dollars! An honest mistake (cents, not dollars per kWh), or a scam? And as soon as Dmitry is less active …
“squid” is slang for a GBP, i.e. Pound Sterling, although I’m more used to hearing the similar “quid.” One hundred of them can be referred to as a “biscuit,” apparently because of casino chips, similar to how people in America will sometimes refer to a hundred dollars as a “benjamin.”
That is, what are TheOtherDave’s preferences between watching an unsettling movie that does not correspond to reality and watching an unsettling movie that does correspond to reality, but they’re paid some cash.
“Quid” is slang, “squid” is a commonly used jokey soundalike. There’s a joke that ends “here’s that sick squid I owe you”.
EDIT: also, never heard “biscuit” = £100 before; that’s a “ton”.
Does Cockney rhyming slang not count as slang?
In this case it seems to. It’s the first time I recall encountering it but I’m not British and my parsing of unfamiliar and ‘rough’ accents is such that if I happened to have heard someone say ‘squid’ I may have parsed it as ‘quid’, and discarded the ‘s’ as noise from people saying a familiar term in a weird way rather than a different term.
It amuses me that despite making neither head nor tail of the unpacking, I answered the right question.
Well, to the extent that my noncommital response can be considered an answer to any question at all.
Well, I figured that much out from googling, but I was more reacting to what seems like a deliberate act of obfuscation on Kawoomba’s part that serves no real purpose.
Nested parentheses are their own reward, perhaps?
In an interesting twist, in many social circles (not here) your use of the word “obfuscation” would be obfuscatin’ in itself.
To be very clear though: “Eschew obfuscation, espouse elucidation.”
So to be clear—you do some Googling and find two videos, one has realistic CGI animal harm, the other real animal harm; assume the CGI etc is so good that I wouldn’t be able to tell which was which if you hadn’t told me. You don’t pay for the animal harm video, or in any way give anyone an incentive to harm an animal in fetching it; just pick up a pre-existing one. I have a choice between watching the fake-harm video (and knowing it’s fake) or watching the real-harm video and receiving £100.
If the reward is £100, I’ll take the £100; if it’s an actual biscuit, I prefer to watch the fake-harm video.
I’m genuinely unsure, not least because of your perplexing unpacking of “biscuit”.
Both examples are unpleasant; I don’t have a reliable intuition as to which is more so if indeed either is.
I have some vague notion that if I watch the real-harm video that might somehow be interpreted as endorsing real harm more strongly than if I watch the fake-harm video, like through ratings or download monitoring or something, which inclines me to the fake-harm video. Though whether I’m motivated by the vague belief that such differential endorsement might cause more harm to animals, or by the vague belief that it might cause more harm to my status, I’m again genuinely unsure. In the real world I usually assume that when I’m not sure it’s the latter, but this is such a contrived scenario that I’m not confident of that either.
If I assume the biscuit is a reward of some sort, then maybe that reward is enough to offset the differential endorsement above, and maybe it isn’t.
I don’t want to see animals get tortured because that would be an unpleasant thing to see, but there are lots of things I think are unpleasant things to see that don’t have moral valence (in another comment I gave the example of seeing corpses get raped).
I might also be willing to assign dolphins and monkeys moral value (I haven’t made up my mind about this), but not most animals.
Do you have another example besides the assault of corpses? I can easily see real moral repugnance from the effect it has on the offenders, who are victims of their own actions. If you find it unpleasant only when you see it, would not they find it horrific when they perform it?
Also in these situations, repugnance can leak due to uncertainty of other real moral outcomes, such as the (however small) likelihood of family members of the deceased learning of the activity, for whom these corpses have real moral value.
Two Girls One Cup?
Seeing humans perform certain kinds of body modifications would also be deeply unpleasant to me, but it’s also not an act I assign moral valence to (I think people should be allowed to modify their bodies more or less arbitrarily).
I’ll chime in to comment that QiaochuYuan’s[1] views as expressed in this entire thread are quite similar to my own (with the caveat that for his “human” I would substitute something like “sapient, self-aware beings of approximately human-level intelligence and above” and possibly certain other qualifiers having to do with shared values, to account for Yoda/Spock/AIs/whatever; it seems like QiaochuYuan uses “approximately human” to mean roughly this).
So, please reconsider your disbelief.
[1] Sorry, the board software is doing weird things when I put in underscores...
So, presumably you don’t keep a pet, and if you did, you would not care for its well-being?
Indeed, I have no pets.
If I did have a pet, it is possible that I would not care for it (assuming animal cruelty laws did not exist), although it is more likely that I would develop an attachment to it, and would come to care about its well-being. That is how humans work, in my experience. I don’t think this necessarily has any implications w.r.t. the moral status of nonhuman animals.
Do you consider young children and very low-intelligence people to be morally-relevant?
(If—in the case of children—you consider potential for later development to be a key factor, we can instead discuss only children who have terminal illnesses.)
Good question. Short answer: no.
Long answer: When I read Peter Singer, what I took away was not, as many people here apparently did, that we should value animals; what I took away is that we should not value fetuses, newborns, and infants (up to a certain age, somewhere between 0 and 2 years [1]). That is, I think the cutoff for moral relevance is somewhere above, say, cats, dogs, newborns… where exactly? I’m not sure.
Humans who have a general intelligence so low that they are incapable of thinking about themselves as conscious individuals are also, in my view, not morally relevant. I don’t know whether such humans exist (most people with Down syndrome don’t quite seem to fit that criterion, for instance).
There are many caveats and edge cases, for instance: what if the low-intelligence condition is temporary, and will repair itself with time? Then I think we should consider the wishes of the self that the person was before the impairment, and the rights of their future, non-impaired, selves. But what if the impairment can be repaired using medical technology? Same deal. What if it can’t? Then I would consider this person morally irrelevant. What if the person was of extremely low intelligence, and had always been so, but we could apply some medical intervention to raise their intelligence to at least normal human level? I would consider that act morally equivalent to creating a new sapient being (whether that’s good or bad is a separate question).
So: it’s complicated. But to answer practical questions: I don’t consider infanticide the moral equivalent of murder (although it’s reasonable to outlaw it anyway, as birth is a good Schelling point, but the penalty should surely be nowhere near as harsh as for killing an adult or older child). The rights of low-intelligence people are a harder issue, partly because there are no obvious cutoffs or metrics.
I hope that answers your question; if not, I’ll be happy to elaborate further.
Ethical generalizations check: Do you care about Babyeaters? Would you eat Yoda?
Would that allow absorbing some of his midichlorians? Black magic! Well, I might try (since he died of natural causes anyway). But Yoda dies without leaving a corpse. It would be difficult. The only viable strategy would seem to be to have Yoda anesthetize himself a minute before he ghosts (“becomes one with the Force”). Then the flesh would remain corporeal for consumption.
The real ethical test would be whether I’d freeze Yoda’s head in carbonite, acquire brain-scanning technology, and upload him into a robot body. Yoda may have religious objections to the practice, so I might honour his preferences while being severely disappointed. I suspect I’d choose the Dark Side of the Force myself. The Sith philosophy seems much more compatible with life extension by whatever means necessary.
It should be noted that Yoda has an observable afterlife. Obi-wan had already appeared after his body had died, apparently in full possession of his memories and his reasoning abilities; Yoda proposes to follow in Obi-wan’s footsteps, and has good reason to believe that he will be able to do so.
Sith philosophy, for reference:
Peace is a lie, there is only passion.
Through passion, I gain strength.
Through strength, I gain power.
Through power, I gain victory.
Through victory, my chains are broken.
The Force shall free me.
Actual use of Sith techniques seems to turn people evil at ridiculously accelerated rates. At least in-universe it seems that sensible people would write off this attractive-sounding philosophy as window dressing on an extremely damaging set of psychic techniques.
If you’re lucky, it might grant intrinsic telepathy, as long as the corpse is relatively fresh.
Nope (can’t parse them as approximately human without revulsion). Nope (approximately human).
I wouldn’t eat flies or squids either. But I know that that’s a cultural construct.
Let’s ask another question: would I care if someone else eats Yoda?
Well, I might, but only because eating Yoda is, in practice, correlated with lots of other things I might find undesirable. If I could be assured that such was not the case (for instance, if there were another culture which ate its dead to honor them, that was why the person ate Yoda, and Yoda’s will granted permission for this), then no, I wouldn’t care if someone else eats Yoda.
In practice? In common Yoda-eating practice? Something about down-to-earth ‘in practice’ empirical observations about things that cannot possibly have ever occurred strikes me as broken. Perhaps “would be, presumably, correlated with”.
In Yoda’s case he could even have just asked for permission from Yoda’s force ghost. Jedi add a whole new level of meaning to “Living Will”.
“In practice” doesn’t mean “this is practiced”, it means “given that this is done, what things are, with high probability, associated with it in real-life situations” (or in this case, real-life-+-Yoda situations). “In practice” can apply to rare or unique events.
I really don’t think statements of the form “X is, in practice, correlated with Y” should apply to situations where X has literally never occurred. You might want to say “I expect that X would, in practice, be correlated with Y” instead.
All events have never occurred if you describe them with enough specificity; I’ve never eaten this exact sandwich on this exact day.
While nobody has eaten Yoda before, there have been instances where people have eaten beings that could talk intelligently.
I share Qiaochu’s reasoning.