I weight the well-being of animals in proportion to what I would call, for lack of a better word, their consciousness. I think dolphins are probably self-aware, capable of reflection, and have strong senses of pain and pleasure. I think ants are probably much less so, although still nonzero. So I place much less emphasis upon the well-being of ants than upon the well-being of dolphins. Since viruses have no nervous system and no brain, I’m prepared to give them zero value.
However, I have no evidence that dogs are more aware than pigs are. Any personal preference I have for dogs is because they’re cuter than pigs are, which seems like a bad way to make moral decisions. So I am not prepared to make pigs less valuable than dogs.
I never thought about it in terms of your two-different-kinds-of-chicken-breast problem, but I would agree that this would require an actual calculation to see whether the money saved could prevent more suffering than was caused to the chicken. Given the low probability of me actually going through with donating $1 more to charity just because I bought a $1 cheaper chicken, I’d probably take the more expensive one, though.
Any personal preference I have for dogs is because they’re cuter than pigs are, which seems like a bad way to make moral decisions.
I think you’ve deliberately muddied the waters by throwing in the word ‘cute’ there. You justify your general rule for preferring some lifeforms to others by saying you value ‘consciousness’ but then say that preferring dogs over pigs for ‘cuteness’ is not a good way to make moral decisions. If you take away the loaded words all you’re really saying in both cases is that you value animal A more than animal B because it has more of property X. When X is consciousness that’s a good justification, when it’s cuteness it’s a bad justification.
I’m quite happy to just say that I prefer some animals to others and I value them accordingly. That preference is a combination of factors which I couldn’t give you a formula for, but I don’t feel I need to do so to justify following my preference. In the case of dogs I think it’s more than cuteness—they are pack-hunting animals that have been bred over many generations to live with humans as companions (rather than as livestock), so it is not surprising that we should have an affinity for them. Preferring them over pigs seems no more problematic than preferring a friendly AI over a paperclip maximizer—they share more common goals with us than pigs do.
Given the low probability of me actually going through with donating $1 more to charity just because I bought a $1 cheaper chicken, I’d probably take the more expensive one, though.
That’s not a very rational approach. If it’s easier, think of it as $150 a year (probably the right ballpark for me, based on my own chicken consumption) and consider what charity you could donate an extra $150 to. In my opinion, being rational about personal finances is a pretty good starting place for an aspiring rationalist.
I don’t interpret “consciousness” as a preference giving some animals more value to me than others. I interpret it as a multiplier that needs to be used in order to even out preferences.
Let’s say I want to minimize suffering in a target-independent way, but I need to divide X units of torture between a human and an ant. I would choose to apply all X units to the ant, not just because I like humans more than ants, but because that decision actually minimizes total suffering. My wild guess is that ants can’t really suffer all that much; they probably get some vague negative feeling, but it’s (again, I am guessing wildly) nothing like as strong or as painful as the pain that a human, with a million times more neurons, feels.
In contrast, obviously cuteness has no effect on level of suffering. If I want to divide up X units of torture between two animals, one of which is cuter than the other, from a purely consequentialist position there’s no reason to prefer one to the other.
It might help if you think of me as trying to minimize the number of suffering*consciousness units. That’s why I wouldn’t care about eating TAW’s genetically engineered neuronless cow, and it’s why I care less about ants than humans.
(Or a metaphor: let’s say a hospital administrator has to distribute X organs among needy transplant patients. Even if the administrator chooses to be unbiased regarding the patients’ social value—i.e. not prefer a millionaire to a bum—there is still a good case for giving an organ to someone for whom it will bring 50 more years of life rather than 6 more months. That’s a completely different kind of preference than ‘I like this guy better’. The administrator is trying to impartially maximize lives saved*years.)
Hopefully that makes clear the difference between this theory and “preferring” cute animals.
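If it helps to see the arithmetic, here’s a rough sketch of the rule I’m describing. The multipliers are pure guesses chosen for illustration, not figures I’d defend:

    # Invented consciousness multipliers, for illustration only.
    consciousness = {"human": 1.0, "dog": 0.3, "pig": 0.3, "ant": 0.000001}

    def total_weighted_suffering(allocation):
        # allocation maps each animal to the units of harm it receives
        return sum(units * consciousness[animal]
                   for animal, units in allocation.items())

    # Dividing X = 10 units of torture between a human and an ant:
    options = [{"human": 10, "ant": 0}, {"human": 0, "ant": 10}]
    print(min(options, key=total_weighted_suffering))
    # -> {'human': 0, 'ant': 10}: harming the ant minimizes weighted suffering.

Cuteness doesn’t appear anywhere in that calculation, which is the point.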
If I want to divide up X units of torture between two animals, one of which is cuter than the other, from a purely consequentialist position there’s no reason to prefer one to the other.
Well, humans seem to be more upset by images of baby seals being clubbed than by the death of less cute but similarly ‘conscious’ creatures, so that might factor into your total suffering calculation; but that aside, this does seem to follow from your premises.
It might help if you think of me as trying to minimize the number of suffering*consciousness units.
Why is that preference uniquely privileged, though? What justifies it over preferring to minimize the number of suffering*(value I assign to animal) units? If I value something about dogs over pigs (let’s call it ‘empathy units’, because that is something like a description of the source of my preference), why is that a less justified choice of preference than ‘consciousness’?
If you just genuinely value what you’re calling ‘consciousness’ here over any other measure of value, that’s a perfectly reasonable position to take. You seem to want to universalize the preference, though, and I get the impression that you recognize it goes against most people’s instinctive preferences. If you want to persuade others to accept your preference ranking (maybe you don’t—it’s not clear to me), then I think you need to come up with a better justification. You should also bear in mind that you may find yourself arguing to sacrifice humanity for a super-conscious paperclip maximizer. Is that really a position you want to take?
Well, I admit to being one of the approximately seven billion humans who can’t prove their utility functions from first principles. But I think there’s a very convincing argument that consciousness is in fact what we’re actually looking for and naturally taking into account.
Happiness is only happiness, and pain is only pain, insofar as they are perceived by awareness. If a scientist took a nerve cell with a pain receptor, put it in a Petri dish, and stimulated it for a while, I wouldn’t consider this a morally evil act.
I find in my own life that different levels of awareness correspond to different levels of suffering. Although something bad happening to me in a dream is bad, I don’t worry about it nearly as much as I would if it happened when I was awake and fully aware. Likewise, if I’m zonked out on sedatives, I tend to pay less attention to my own pain.
I hypothesize that different animals have different levels of awareness, based on intuition and my knowledge of their nervous systems. In that case, they would be able to experience different levels of suffering. What I meant when I said my utility function multiplied suffering by awareness would have been better phrased as:
Suffering = bad things*awareness
while trying to minimize suffering. This is why, for example, doing all sorts of horrible things to a rock is a morally neutral act, doing them to an insect is probably bad but not anything to lose sleep over, and doing them to a human is a moral problem even if it’s a human I don’t personally like.
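To make that concrete, here’s the same kind of toy calculation; the awareness values are placeholders, not claims about actual nervous systems:

    # Suffering = bad things * awareness, with placeholder awareness values.
    awareness = {"rock": 0.0, "insect": 0.001, "human": 1.0}

    def suffering(bad_things, target):
        return bad_things * awareness[target]

    for target in ("rock", "insect", "human"):
        print(target, suffering(100, target))
    # rock 0.0    -> morally neutral
    # insect 0.1  -> bad, but nothing to lose sleep over
    # human 100.0 -> a serious moral problem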
Your paperclip example is a classic problem known as the utility monster. I don’t really have any especially brilliant solution beyond what has already been said about the issue. To some degree I bite the bullet: if there was some entity whose nervous system was so acute that causing it the slightest amount of pain would correspond to 3^^^3 years of torture for a human being, I’d place high priority on keeping that entity happy.
Well, I admit to being one of the approximately seven billion humans who can’t prove their utility functions from first principles.
But you seem to think (and correct me if I’m misinterpreting) that it would be better if we could. I’m not so sure. And further, you seem to think that, given that we can’t, it’s still better to override our felt/intrinsic preferences, which are hard to fully justify, in favour of unnatural preferences whose sole advantage is that they’re easier to express in simple sentences.
Now, I’m not sure you’re actually claiming this, but with the pig/dog comparison you seem to be acknowledging that many people value dogs more than pigs (I’m not clear whether you have this instinctive preference yourself), while arguing that, based on some abstract concept of levels of consciousness (which is itself subjective given our current knowledge), we should override our instincts and judge them as of equal value. I’m saying “screw the abstract theory, I value dogs over pigs and that’s sufficient moral justification for me”. I can give you rationalizations for my preference—the idea that dogs have been bred to live with humans, for example—but ultimately I don’t think the rationalization is required for moral justification.
But I think there’s a very convincing argument that consciousness is in fact what we’re actually looking for and naturally taking into account.
If this is true, then we should prefer our natural judgements (we value cute baby seals highly, and that’s fine—what we’re really valuing is consciousness, not the fact that they share facial features with human babies and so trigger protective instincts). You can’t have it both ways—either we prefer dogs to pigs because they really are ‘more conscious’, or we should fight our instincts and value them equally because our instincts mislead us. I’d agree that what you call ‘consciousness’ or ‘awareness’ is a factor, but I don’t think it’s the most important feature influencing our judgements. And I don’t see why it should be.
To some degree I bite the bullet: if there was some entity whose nervous system was so acute that causing it the slightest amount of pain would correspond to 3^^^3 years of torture for a human being, I’d place high priority on keeping that entity happy.
And it’s exactly this sort of thing that makes me inclined to reject utilitarian ethics. If following utilitarian ethics leads to morally objectionable outcomes, I see no good reason to think the utilitarian position is right.