Ethical. If I wouldn’t want people torturing dogs, I have no justification to be okay with people torturing cows, pigs, and chickens, and from what I’ve seen conditions in a lot of farms and slaughterhouses are tantamount to torture. Even though animals can’t think verbally, they still have some level of awareness and the ability to feel pain, so causing them suffering is verboten. I am kind of sympathetic to the argument that free-range meat raised with the animals’ welfare in mind isn’t so bad, and to the argument that if we weren’t raising these animals for food they’d probably be endangered or extinct. But free-range is only a small percentage of meat products, there are major environmental costs anyway, and the meat-farming industry just does so much damage in so many ways that I feel I need to do my part to discourage it. Right now my goal is to aim for zero meat, and to accept the inevitable lapses, when they come, as not being an ethical disaster.
I’m not too strict about it. When I’m traveling or a guest somewhere it’s pretty tough to avoid meat, so I let myself get away with it.
Hard to tell. I think I’d at least share my reasons with them, but if they didn’t want to that’s their choice. As long as they can provide a rational explanation, of course :)
Never tried.
I eat a lot of Quorn when I’m in the British Isles, and soy products when I’m elsewhere. Quorn is better, but I haven’t been able to find it outside Britain and Ireland.
I’m pretty live-and-let-live about this.
Became a vegetarian in elementary school, I think, maybe middle school. Gave it up on three or four occasions for a few months, usually after moving and not being able to find good vegetarian foods there, but always went back. Sometimes give it up for a few months when I go back to my parents’ place, because the food there is too good and I don’t have as much control over my diet.
I love meat and I want it all the time.
I don’t really eat many fruits or vegetables. I hate them to the point where I have trouble keeping them down. This doesn’t apply as much to salads. So I kind of live off of grain products, with some milk and eggs and Quorn thrown in. There are a lot of diet theories that suggest I should be very fat right now, but I’m actually pretty thin. Go figure.
Ethical. If I wouldn’t want people torturing dogs, I have no justification to be okay with people torturing cows, pigs, and chickens
Dogs are genetically selected for living together with humans. As such, and unlike their wolf ancestors, dogs are friendly towards us. In many cases, care is reciprocal, in that we more often care about people who care about us. I propose that chickens don’t have even the slightest sense of morality, and don’t care whether their siblings live or die. With this in mind, I think it’s somewhat justified to torture birds and lower mammals, since they don’t care about our well-being, or their families’, to begin with.
However, I would never torture a chicken unless I was at least 99% sure it had valuable information, and the future of the farm was at stake.
Kin selection suggests that chickens may care about their siblings, and general evolution suggests they definitely care about their children.
...which is exactly the problem. You sound like you’re holding a grudge against chickens for not being evolutionarily programmed in a certain way. Let it go. If you set some criteria for “deserving” our respect, of course a lot of animals can’t live up to them. But it doesn’t seem right to use that as justification for hurting them.
Thought experiment: I take Bob and cut out the part of his brain involved in empathy. Now he can’t care about other people, but his thoughts and emotions are otherwise intact. Is it now okay to torture Bob?
Kin selection suggests that chickens may care about their siblings, and general evolution suggests they definitely care about their children.
What I meant is that birds’ programming doesn’t feature advanced mental concepts like “care”; simple instinctive responses (which can easily be triggered with false stimuli) take their place. However, I see now that this was not important to my point, and I could have left it out and simply said “don’t care whether other species live or die”.
If you set some criteria for “deserving” our respect, of course a lot of animals can’t live up to them. But it doesn’t seem right to use that as justification for hurting them.
What’s so inherently bad about pain? Is it morally questionable to run a piece of control software for a cleaning robot that has a “const bool in_pain = true;”?
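For concreteness, a minimal sketch of the sort of controller I have in mind (entirely made-up code, not any real robot’s software):

```cpp
// A cleaning robot whose software "is in pain". The point: a variable
// labelled pain is trivially cheap to produce, whatever qualia turn out
// to be, so the label alone can't be what makes pain morally bad.
#include <iostream>

int main() {
    const bool in_pain = true;  // the flag from the question above

    if (in_pain) {
        // The robot "responds to its pain", yet nothing here
        // plausibly suffers.
        std::cout << "backing away from obstacle\n";
    }
    return 0;
}
```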
Now he can’t care about other people, but his thoughts and emotions are otherwise intact. Is it now okay to torture Bob?
With his intelligence intact, he can still be valuable to us, and depending on what he did in the past, we may be in moral debt to him. However, if he had been born with no mental faculties beyond those of a chicken, my foremost reason for keeping him alive would be to spare other people the emotional impact.
The proper way to prove that pain is bad is proof by induction: specifically, hook an electric wire to the testicles of the person who doesn’t think pain is bad, induce a current, and continue it until the person admits that pain is bad (this is also the proper way to prove that creationism is false, or at least the most fun).
Is it morally questionable to run a piece of control software for a cleaning robot that has a “const bool in_pain = true;”?
This is getting into the subject of qualia, which I freely admit to not understanding. But I’m pretty sure I have some, and I’m pretty sure they’re harder to produce than a variable with the label “pain”.
With his intelligence intact, he can still be valuable to us, and depending on what he did in the past, we may be in moral debt to him.
I’d guess from this statement that you’re either not a consequentialist, or you’re some exotic type of consequentialist straight out of Alicorn’s syllabus. If you clarify exactly what your moral theory is, I can give you a better estimate on how likely we are to be talking past each other because we have completely different premises.
specifically, hook an electric wire to the testicles of the person who doesn’t think pain is bad, induce a current, and continue it until the person admits that pain is bad (this is also the proper way to prove that creationism is false, or at least the most fun).
Hmm. Methinks this strategy could make debating female creationists somewhat problematic.
I already agree that (involuntary) pain for humans is bad, but I don’t think it’s bad in general, i.e. applied to any entity. For example, the cells in my brain registering pain will experience lots of pain in their lives, and probably little else, for the benefit of the body as a whole. They don’t have my sympathy, although I am grateful.
I am a consequentialist. However, if I see someone returning good favors with torture, I would not have any dealings with that person, since it would seem like a really bad investment.
For example, the cells in my brain registering pain will experience lots of pain in their lives, and probably little else, for the benefit of the body as a whole.
I don’t think it’s obvious that individual cells meaningfully experience pain, in the qualia-type sense we seem to be talking about. Qualia are a function of minds, not brains or brain-pieces.
Objecting to the living conditions of farm animals seems only compatible with veganism, not vegetarianism. (Though “I should, but can’t be bothered” is a fair reply.) Unless you think slaughter is by far the worst part, but it doesn’t seem that way to me—especially since egg farms kill male chicks. Yet you seem fine with milk and eggs. Why?
I’m not Yvain, but I do eat milk and eggs and not beef and chicken. (I also do not go particularly out of my way to eschew leather objects, although when aware of equivalent options, I prefer faux items or ones of other materials, and I don’t buy that many things firsthand anyway.) Part of it is a matter of quantity: avoiding actual meat draws a bright line I can toe easily, and surely reduces the number of animals mistreated on my behalf. And part of it is that, in principle, eggs and milk can be obtained without particularly mistreating the creatures that produce them. This isn’t how it’s generally done, mostly for cost reasons, and to be honest I don’t incentivize doing it that way by researching which sources are closer to that ideal and paying more to buy from them. But in theory, farms could work out how to sex-select their chickens in the first place, and how to make cows produce milk without repeatedly impregnating them only to yield veal calves, and then treat their layers and milkers nicely.
If I wouldn’t want people torturing dogs, I have no justification to be okay with people torturing cows, pigs, and chickens, and from what I’ve seen conditions in a lot of farms and slaughterhouses are tantamount to torture.
Do you place equal value on the well-being of all animals? This sounds like the same kind of dogmatic adherence to equal weighting that I have a problem with in utilitarianism. I don’t want people torturing dogs; I’m less concerned about people torturing chickens. I value the well-being of dogs more than the well-being of chickens. I value both considerably less than the well-being of humans and considerably more than the well-being of HIV viruses.
All else being equal, I’d prefer less rather than more chicken-suffering. If, however, I have a choice between a $5 chicken breast that caused X chicken-suffering and a $6 chicken breast that caused 0.5X chicken-suffering, I’ll save the extra dollar and apply it to something I consider more important than chicken-suffering. A donation to a puppy rescue shelter, for example (though that would be low on my overall list of priorities).
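To make that trade-off concrete (all the numbers are invented for illustration; I couldn’t give you my real weights):

```cpp
// Toy comparison of the two chicken breasts. The weight w is how many
// dollars I'd pay to avoid one unit of chicken-suffering; it's a
// made-up illustrative value, not a real estimate.
#include <cstdio>

int main() {
    const double X = 1.0;  // suffering units caused by the cheap breast
    const double w = 0.1;  // $ per unit of chicken-suffering avoided

    const double cheap  = -5.0 - w * X;        // $5 breast, X suffering
    const double pricey = -6.0 - w * 0.5 * X;  // $6 breast, 0.5X suffering

    // With a low enough w, the cheap breast wins and the saved dollar
    // goes to something I value more.
    std::printf("cheap: %.2f  pricey: %.2f\n", cheap, pricey);  // -5.10 vs -6.05
    return 0;
}
```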
I weight the well-being of animals in proportion to what I would call, for lack of a better word, their consciousness. I think dolphins are probably self-aware, capable of reflection, and have strong senses of pain and pleasure. I think ants are probably much less so, although still nonzero. So I place much less emphasis upon the well-being of ants than upon the well-being of dolphins. Since viruses have no nervous system and no brain, I’m prepared to give them zero value.
However, I have no evidence that dogs are more aware than pigs are. Any personal preference I have for dogs is because they’re cuter than pigs are, which seems like a bad way to make moral decisions. So I am not prepared to make pigs less valuable than dogs.
I never thought about it in terms of your two-different-kinds-of-chicken-breast problem, but I would agree that this would require an actual calculation to see whether the money saved could prevent more suffering than was caused to the chicken. Given the low probability of me actually going through with donating $1 more to charity just because I bought a $1 cheaper chicken, I’d probably take the more expensive one, though.
Any personal preference I have for dogs is because they’re cuter than pigs are, which seems like a bad way to make moral decisions.
I think you’ve deliberately muddied the waters by throwing in the word ‘cute’ there. You justify your general rule for preferring some lifeforms to others by saying you value ‘consciousness’ but then say that preferring dogs over pigs for ‘cuteness’ is not a good way to make moral decisions. If you take away the loaded words all you’re really saying in both cases is that you value animal A more than animal B because it has more of property X. When X is consciousness that’s a good justification, when it’s cuteness it’s a bad justification.
I’m quite happy to just say that I prefer some animals to others and I value them accordingly. That preference is a combination of factors which I couldn’t give you a formula for, but I don’t feel I need to do so to justify following my preference. In the case of dogs I think it’s more than cuteness—they are pack-hunting animals that have been bred over many generations to live with humans as companions (rather than as livestock), so it is not surprising that we should have affinity for them. Preferring them over pigs seems no more problematic than preferring a friendly AI over a paperclip maximizer—they share more common goals with us than pigs do.
Given the low probability of me actually going through with donating $1 more to charity just because I bought a $1 cheaper chicken, I’d probably take the more expensive one, though.
That’s not a very rational approach. If it’s easier, think of it as $150 a year (probably ballpark for me based on my own chicken consumption) and consider what charity you could donate $150 extra to. In my opinion being rational about personal finances is a pretty good starting place for an aspiring rationalist.
I don’t interpret “consciousness” as a preference giving some animals more value to me than others. I interpret it as a multiplier that needs to be used in order to even out preferences.
Let’s say I want to minimize suffering in a target-independent way, but I need to divide X units of torture between a human and an ant. I would choose to apply all X units to the ant, not just because I like humans more than ants, but because that decision actually minimizes total suffering. My wild guess is that ants can’t really suffer all that much; they probably get some vague negative feeling but it’s (again, I am guessing wildly) nothing like as strong or as painful as the pain that a human, with their million times more neurons, feels.
In contrast, obviously cuteness has no effect on level of suffering. If I want to divide up X units of torture between two animals, one of which is cuter than the other, from a purely consequentialist position there’s no reason to prefer one to the other.
It might help if you think of me as trying to minimize the number of suffering*consciousness units. That’s why I wouldn’t care about eating TAW’s genetically engineered neuronless cow, and it’s why I care less about ants than humans.
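A toy version of that calculation (the awareness multipliers are wild guesses, as I admitted above):

```cpp
// Minimizing suffering*consciousness: the same X units of torture count
// for much less when applied to a creature with a tiny awareness
// multiplier. Both multipliers below are guesses for illustration.
#include <cstdio>

struct Creature {
    const char* name;
    double awareness;  // 1.0 = human baseline
};

// Weighted suffering from applying bad_things units of torture.
double suffering(double bad_things, const Creature& c) {
    return bad_things * c.awareness;
}

int main() {
    const Creature human = {"human", 1.0};
    const Creature ant   = {"ant", 1e-6};  // wild guess, as admitted above
    const double X = 10.0;                 // torture units to divide

    // Applying all X units to the ant minimizes the weighted total.
    std::printf("all to %s: %g\n", human.name, suffering(X, human));
    std::printf("all to %s: %g\n", ant.name, suffering(X, ant));
    return 0;
}
```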
(Or a metaphor: let’s say a hospital administrator has to distribute X organs among needy transplant patients. Even if the administrator chooses to be unbiased regarding the patients’ social value—i.e. not prefer a millionaire to a bum—there is still a good case for giving an organ to someone for whom it will bring 50 more years of life rather than 6 more months. That’s a completely different kind of preference than “I like this guy better”. The administrator is trying to impartially maximize life-years saved.)
Hopefully that makes it clear what the difference between this theory and “preferring” cute animals is.
If I want to divide up X units of torture between two animals, one of which is cuter than the other, from a purely consequentialist position there’s no reason to prefer one to the other.
Well, humans seem to be more upset by images of baby seals being clubbed than by the death of less cute but similarly ‘conscious’ creatures, so that might factor into your total suffering calculation. But that aside, this does seem to follow from your premises.
It might help if you think of me as trying to minimize the number of suffering*consciousness units.
Why is that preference uniquely privileged, though? What justifies it over preferring to minimize the number of suffering*(value I assign to animal) units? If I value something about dogs over pigs (let’s call it ‘empathy units’, because that is something like a description of the source of my preference), why is that a less justified choice of preference than ‘consciousness’?
If you just genuinely value what you’re calling ‘consciousness’ here over any other measure of value, that’s a perfectly reasonable position to take. You seem to want to universalize the preference, though, and I get the impression that you recognize that it goes against most people’s instinctive preferences. If you want to persuade others to accept your preference ranking (maybe you don’t—it’s not clear to me), then I think you need to come up with a better justification. You should also bear in mind that you may find yourself arguing to sacrifice humanity for a super-conscious paperclip maximizer. Is that really a position you want to take?
Well, I admit to being one of the approximately seven billion humans who can’t prove their utility functions from first principles. But I think there’s a very convincing argument that consciousness is in fact what we’re actually looking for and naturally taking into account.
Happiness is only happiness, and pain is only pain, insofar as they are perceived by awareness. If a scientist took a nerve cell with a pain receptor, put it in a Petri dish, and stimulated it for a while, I wouldn’t consider this a morally evil act.
I find in my own life that different levels of awareness correspond to different levels of suffering. Although something bad happening to me in a dream is bad, I don’t worry about it nearly as much as I would if it happened when I was awake and fully aware. Likewise, if I’m zonked out on sedatives, I tend to pay less attention to my own pain.
I hypothesize that different animals have different levels of awareness, based on intuition and my knowledge of their nervous systems. In that case, they would be able to experience different levels of suffering. What I said earlier about my utility function multiplying suffering by awareness would have been better phrased as:
Suffering = bad things*awareness
while trying to minimize suffering. This is why, for example, doing all sorts of horrible things to a rock is a morally neutral act, doing them to an insect is probably bad but not anything to lose sleep over, and doing them to a human is a moral problem even if it’s a human I don’t personally like.
Your paperclip example is a classic problem, known as the utility monster. I don’t really have any especially brilliant solution beyond what has already been said about the issue. To some degree I bite the bullet: if there was some entity whose nervous system was so acute that causing it the slightest amount of pain would correspond to 3^^^3 years of torture for a human being, I’d place high priority on keeping that entity happy.
Well, I admit to being one of the approximately seven billion humans who can’t prove their utility functions from first principles.
But you seem to think (and correct me if I’m misinterpreting) that it would be better if we could. I’m not so sure. And further, you seem to think that given that we can’t, it’s still better to override our felt/intrinsic preferences, which are hard to fully justify, with unnatural preferences whose sole advantage is that they are easier to express in simple sentences.
Now, I’m not sure you’re actually claiming this, but with the pig/dog comparison you seem to be acknowledging that many people value dogs more than pigs (I’m not clear if you have this instinctive preference yourself or not), but that based on some abstract concept of levels of consciousness (which is itself subjective given our current knowledge) we should override our instincts and judge them as of equal value. I’m saying “screw the abstract theory; I value dogs over pigs and that’s sufficient moral justification for me”. I can give you rationalizations for my preference—the idea that dogs have been bred to live with humans, for example—but ultimately I don’t think the rationalization is required for moral justification.
But I think there’s a very convincing argument that consciousness is in fact what we’re actually looking for and naturally taking into account.
If this is true, then we should prefer our natural judgements (we value cute baby seals highly, that’s fine—what we’re really valuing is consciousness, not the fact that they share facial features with human babies and so trigger protective instincts). You can’t have it both ways—either we prefer dogs to pigs because they really are ‘more conscious’ or we should fight our instincts and value them equally because our instincts mislead us. I’d agree that what you call ‘consciousness’ or ‘awareness’ is a factor but I don’t think it’s the most important feature influencing our judgements. And I don’t see why it should be.
To some degree I bite the bullet: if there was some entity whose nervous system was so acute that causing it the slightest amount of pain would correspond to 3^^^3 years of torture for a human being, I’d place high priority on keeping that entity happy.
And it’s exactly this sort of thing that makes me inclined to reject utilitarian ethics. If following utilitarian ethics leads to morally objectionable outcomes I see no good reason to think the utilitarian position is right.
I’ve found Quorn in the United States in several grocery stores, in the frozen food. Possibly it’s regionally unavailable where you live? Or is the US not the “elsewhere” in question?
I don’t eat meat.