Is “preference” a word we have any idea how to define rigorously?
I have the increasingly strong conviction that we ascribe emotions and values to things we can anthropomorphize, and there’s no real possibility of underlying philosophical coherence.
Short answer: Rigorously? I don’t know.
But I know that the quality that causes me to care about something, morally, is not whether it is capable of reproducing, or whether it is made of carbon. I care about things that are conscious in some way that is at least similar to the way I am conscious.
No, I don’t know what causes consciousness, no, I don’t know how to test for it. But basically, I only care about things that care about things. (And by extension, I care about non-caring things that are cared about).
I’m willing to extend this beyond human motivation. I’d give (some) moral standing to a hypothetical paperclip maximizer that experienced satisfaction when it created paperclips and experienced suffering when it failed. I wouldn’t give moral standing to an identical “zombie” paperclip maximizer. I give moral standing to animals (guessing as best I can which are likely to have evolved systems that produce suffering and satisfaction).
I give higher priority to human-like motivations (so in a sense, I’m totally fine with giving higher moral standing to things I can anthropomorphize). I’d sacrifice sentient clippies and chickens for humans, but in the abstract I’d rather the universe contain clippies and chickens than nothing sentient at all. (I think I’d prefer chickens to clippies because they are more likely to eventually produce something closer to human motivation).
Don’t worry—I am not under the impression my moral philosophy is all that coherent. But unless there’s a moral philosophy that at least loosely approximates my vague intuitions, I probably don’t care about it.
The main point, though, is that if we’re picking a hazy, nonsense word to define rigorously, it should be ‘sentience,’ not ‘life.’
(edit: I might mean to use the word “sapient”; I can never get those straight)
The fact is that the meanings different people attach to “sentient” vary much more than those for “sapient.”
Interesting.
I read you as arguing for a narrower class that didn’t include the chicken. I’d sacrifice Clippy in a second for something valuable to humans, but I don’t really care whether the universe has non-self-aware animals.
I believe chickens are self-aware (albeit pretty dumb). I could be wrong, and don’t have a good way to test it. (Though I have read some things suggesting they ARE near the borderline of what kind of sentience is worth worrying about.)
A common test for that (which I’m under the impression some people treat more like an ‘operational definition’ of self-awareness) is the mirror test. Great apes, dolphins, elephants, and magpies pass it. Dunno about chickens; I guess not.
That would test a level of intelligence, but not the ability to perceive pain/pleasure/related things, which is what I care about.
Then self-aware is quite a bad word for it. I suspect that fish and newborn babies can feel pain and pleasure, but that they’re not ‘self-aware’ the way I’d use that word.
Nociception has been demonstrated in insects. Small insects.
Edit: Not to mention C. elegans, which has somewhere around three hundred neurons total.
So the Buddhists were right all along!
(FWIW, I assign a very small but non-zero ethical value to insects.)
Anthropomorphizing animals is justified based on the degree of similarity between their brains and ours. For example, we know that the parts of our brain that we have found to be responsible for strong emotions are also present in reptiles, so we might assume that reptiles also have strong emotions. Mammals are more similar to us, so we feel more moral obligation toward them.