“Meaningfully conscious” seems a tricky definition, and consciousness a rather slippery word.
Animals clearly aren’t sapient, but saying they aren’t conscious also seems to sneak in the connotation that there’s “nobody home” to feel the pain and the experiences, like a philosophical zombie.
It’s pretty clear that animals act like there’s somebody home, feeling sensations and emotions and having intentions, and what we know about neurology also suggests that.
Given that some animals even pass self-recognition tests, sapience seems the only hard cut-off we can draw between animals and humans.
I’d certainly agree that we should value a life based on how “complex” its mental life is (perhaps with a ceiling reached at sapience, which I’d like to introduce for our convenience), and it certainly makes sense that we shouldn’t concern ourselves with the well-being of things that have no mind at all. But it doesn’t seem intuitive that the lack of sapience should mean that whatever suffering strikes a mind has zero moral weight.
If we agree that the suffering of a mind has a certain weight, then yeah, the “flesh-eating monster hell” is a quantitatively reduced version of doing the same thing to human beings (measured in total moral wrongness: some consequences of doing it to humans would be totally absent, and others wouldn’t be scaled down at all). We can of course discuss how much the moral wrongness is reduced.
One might argue that it’s certainly preferable to slaughter a cow than to let a human die of hunger, or to slaughter a cow (yielding exactly as much meat as a human, for the convenience of the example) to feed two humans and save them from starvation rather than to slaughter a human to save two humans, and I’d agree.
I’d even agree that one might have much more urgent things to do for the well-being of others than becoming vegan.
But the fact that we value human lives more than animal lives, because of sapience, doesn’t imply that animal lives and suffering have no value whatsoever. And as long as animal lives have some value, there are some trade-offs of animal pain for human convenience that we should refuse, or we’re not thinking quantitatively about morals.
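To make the quantitative point concrete, here is a minimal toy sketch in Python. Every number in it is made up purely for illustration (the discount factor and the pain and convenience values are assumptions, not claims about the actual moral weight of any animal):

```python
# Toy model of an animal-pain-for-human-convenience trade-off.
# All numbers are illustrative placeholders, not empirical or ethical claims.

animal_discount = 0.001  # hypothetical discount applied to a non-sapient mind,
                         # relative to one unit of human suffering

convenience_gain = 0.01  # human benefit of a marginally nicer meal, in human units
animal_pain = 50.0       # units of pain inflicted on one animal for that meal

moral_cost = animal_discount * animal_pain  # = 0.05 human-equivalent units
print(moral_cost > convenience_gain)        # True: this particular trade comes out negative

# As long as animal_discount > 0, there is some amount of animal pain
# that outweighs any sufficiently small human convenience.
```

The exact discount doesn’t matter for the argument; the point is only that “some value” plus quantitative bookkeeping forces you to refuse at least some trades.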
Deontological rules such as “let any number of animals die to save even a single human life” might be adopted as a temporary placeholder to separate the issue of human lives from the issue of human convenience; I think it might make discussing the issue easier.
I do mean to imply that animals that can pass the mirror test are much more morally meaningful than those that can’t. Computer game characters also exhibit “intentions” and such, but there’s nobody home a lot of the time, unless you’re playing against another person.
I am fairly interested in knowing which factory-farmed animals pass the mirror test. If it’s anything like cows, then I will be pretty upset.
Yes, computer game characters exhibit “intentions” with nobody home, but what we know about the structure of a computer program is very different from what we know about the structure of an animal brain. More complex brains seem to share a lot of our own architecture, mammalian brains are ridiculously complex, and mammals show a lot of behaviour that isn’t purely directed at acquiring food, reproducing, and running from predators.
For animals such as frogs and bugs, which seem to be built more on a “sensory input goes in, reflex comes out” pattern, I’d accept more doubt about whether the “somebody’s home” metaphor holds true; for mammals and other smarter animals, the doubts are a lot less believable.
It seems cows might be smarter than dogs and highly intelligent, and dogs are currently being discussed as possibly having self-recognition, since they pass olfactory tests that require it (from what I saw, the tests are a bit more complex than just requiring the dog to have a “this-is-your-urine-mark-for-your-territory.exe” in its brain).
Generally speaking, cows appear to have long-term social relationships with each other, good problem-solving skills, and long-term effects on their emotional range from negative experiences. I haven’t been able to find information on cows passing or failing self-recognition tests, visual or otherwise, but from the intelligence they show I’d put them pretty high on moral meaningfulness.
Pigs are notoriously smart and have passed the self-recognition test, as Pattern commented.
Though I think my main point is that even simpler animals, as long as their brain architecture leaves open the possibility that our experience of “being home”, of feeling pain and so on, generalises in some way to theirs, would have some scaled-down moral weight.
If I had to lose my higher cognitive functions and be reduced to animal levels of intelligence, I wouldn’t really be okay with agreeing now to be subjected to significant pain in exchange for a trivial benefit, on the grounds that I wouldn’t be sapient.
Note: this isn’t really aimed at turning LessWrongers vegan. There are convincing reasons to be vegan based on the impact on humans, but if you are already trying to be an effective altruist by doing a hard job, I can accept the need to conserve willpower and efficiency, though I guess one could consider whether one could reduce consumption without risks.
I think the issue of the moral weight of animals should be considered independently from the consequences it might hold for one’s diet or behaviour, or we’re just back to plain rationalisation.
TL;DR: Pigs apparently do (see link below).
Ignoring the fact that the mirror test is biased towards more visual creatures, this seems relevant:
https://www.onegreenplanet.org/animalsandnature/farm-animals-that-are-probably-smarter-than-your-dog/
(Arguably, “as smart as a dog” should also be an issue. And the list only has 5 items, probably more for ‘internet format’ reasons than because that’s how many there are.)
Regarding rules like “let any number of animals die to save even a single human life”: there might exist a number of animals (like the number of animals on Earth) such that, if they all died, people would die as a result. Even ignoring subsistence effects, we kind of need the biosphere to keep working.
Also, if hunting or farming an animal species is done in a way that creates bio-risks, that’s a problem. I’m not sure where the ‘covid comes from bats’ theory is at now, but the Spanish flu was a big deal and it did come from animals. While I’m not sure how that happened (i.e. it might not have been factory farming), it seems important that we don’t shoot ourselves in the foot, globally.
Additionally, if an animal produces stuff that can be made into medicine, driving it extinct in order to meet current demand is obviously bad.*
In fact, even in terms of consumption and valuing that, overconsumption is a problem.*
*If you care about future people at all, wrt. that.
I do agree with everything you said.
Right now, farming animals seems to be a huge zoonosis risk. If I remember correctly, Covid-19 could have spread from exotic animals being sold in large numbers, and it did jump from humans to minks in farms, spread like wildfire in the packed environment, gathered all sorts of mutations, and then jumped back to humans.
Farming animals is also not at all sustainable with the level of tech, resources, and consumption we have now. I’d expect the impact of farming to kill at least some tens of millions of people in a moderately bad global warming scenario; it’s already producing humanitarian crises now, and I’m afraid global warming increases extinction risks by making us more likely to botch AGI.
I had suggested the rule only for an entirely hypothetical scenario where we are asked to trade human lives against animal lives, because I was trying to discuss the moral situation “trade animal lives and suffering against human convenience” on its own.