Surely what this shows is that the word ‘life’ doesn’t correspond to a category in reality except in the fuzziest of ways. So taboo it. What aspects of ‘life’ are you actually interested in?
I’m personally interested in several aspects or questions...
How far off is humanity from being able to synthesise a living being purely from matter that did not come from another living being?
Many people hold that living beings should be granted a right not to be wantonly deprived of life, other things being equal. But what attributes does a being require to qualify for such moral ‘protection’?
If an AI can be said to be alive, is it still alive while the execution of its code is temporarily suspended? And if aliveness is a matter of degree rather than a binary, is an AI whose code has been slowed to one clock cycle per year less alive?
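For what it’s worth, the technical premise behind that last question is easy to make concrete. Below is a minimal, purely illustrative Python sketch (the run_agent toy and its names are mine, not anything proposed in the thread): suspending execution leaves the program’s state untouched, and slowing the clock only stretches the same computation over more wall time. Whether either of those preserves ‘aliveness’ is, of course, exactly what the question is asking.

    # Toy sketch: a "mind" reduced to a loop over explicit state. Suspension
    # means no steps run, but the state that makes it this particular process
    # is fully preserved; a slower clock runs identical steps, just further apart.
    import time

    def run_agent(state, steps, seconds_per_step=0.0):
        """Advance a trivial agent; seconds_per_step stands in for clock speed."""
        for _ in range(steps):
            state["ticks"] += 1           # the agent's entire "mental life"
            time.sleep(seconds_per_step)  # slowed clock: same steps, stretched in time
        return state

    state = {"ticks": 0}
    state = run_agent(state, steps=3)   # running normally
    # ... suspension: nothing executes here, but `state` is untouched ...
    state = run_agent(state, steps=3)   # resumes exactly where it left off
    print(state)                        # {'ticks': 6}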
Morality isn’t based on alive-ness, it’s based on sentience, IMO. Beings have moral weight when they have preferences about how the universe should be.
Saying that moral weight is based on sentience is IMO largely a tautology. Sentience is mostly the word we use for “whatever poorly defined features of a mind give it moral weight”.
Short version of my other response: Sentience and life are probably both nonsense words, but if we’re picking a nonsense word to define rigorously and care about, it should be sentience.
Even granting that, it at least expresses that moral weight is a function of a mind, which is not entirely tautological.
Hence the word “largely”.
Yes, but saying that everything alive deserves moral consideration is a different position.
Is “preference” a word we have any idea how to define rigorously?
I have the increasingly strong conviction that we ascribe emotions and values to things we can anthropomorphize, and there’s no real possibility of underlying philosophical coherence.
Short answer: Rigorously? I don’t know.
But I know that the quality that causes me to care about something, morally, is not whether it is capable of reproducing, or whether it is made of carbon. I care about things that are conscious in some way that is at least similar to the way I am conscious.
No, I don’t know what causes consciousness, no, I don’t know how to test for it. But basically, I only care about things that care about things. (And by extension, I care about non-caring things that are cared about).
I’m willing to extend this beyond human motivation. I’d give (some) moral standing to a hypothetical paperclip maximizer that experienced satisfaction when it created paperclips and experienced suffering when it failed. I wouldn’t give moral standing to an identical “zombie” paperclip maximizer. I give moral standing to animals (guessing as best I can which ones are likely to have evolved systems that produce suffering and satisfaction).
I give higher priority to human-like motivations (so in a sense, I’m totally fine with giving higher moral standing to things I can anthropomorphize). I’d sacrifice sentient clippies and chickens for humans, but in the abstract I’d rather the universe contain clippies and chickens than nothing sentient at all. (I think I’d prefer chickens to clippies because they are more likely to eventually produce something closer to human motivation).
Don’t worry—I am not under the impression my moral philosophy is all that coherent. But unless there’s a moral philosophy that at least loosely approximates my vague intuitions, I probably don’t care about it.
The main point, though, is that if we’re picking a hazy, nonsense word to define rigorously, it should be ‘sentience,’ not ‘life.’
(edit: I might mean the word “sapient”; I can never get those straight.)
The fact is that the meanings different people attach to ‘sentient’ vary much more than those for ‘sapient.’
Interesting.
I read you as arguing for a narrower class that didn’t include the chicken. I’d sacrifice Clippy in a second for something valuable to humans, but I don’t really care whether the universe has non-self-aware animals.
I believe chickens are self-aware (albeit pretty dumb). I could be wrong, and don’t have a good way to test it. (Though I have read some things suggesting they ARE near the borderline of what kind of sentience is worth worrying about.)
A common test for that (which I’m under the impression some people treat more like an ‘operational definition’ of self-awareness) is the mirror test. Great apes, dolphins, elephants and magpies pass it. Dunno about chickens—I guess not.
That would test a level of intelligence, but not the ability to perceive pain/pleasure/related things, which is what I care about.
Then self-aware is quite a bad word for it. I suspect that fish and newborn babies can feel pain and pleasure, but that they’re not ‘self-aware’ the way I’d use that word.
Nociception has been demonstrated in insects. Small insects.
Edit: Not to mention C. elegans, which has somewhere around three hundred neurons total.
So the Buddhists were right all along!
(FWIW, I assign a very small but non-zero ethical value to insects.)
Anthropomorphizing animals is justified based on the degree of similarity between their brains and ours. For example, we know that the parts of our brain found to be responsible for strong emotions are also present in reptiles, so we might assume that reptiles also have strong emotions. Mammals’ brains are more similar to ours, so we feel more moral obligation toward them.
Are cryonically suspended people alive? Are people in a coma alive?
Precisely. We could just taboo the word “life” and ask direct questions such as “what legal rights does it make sense for society to afford cryonically suspended people?”, but that misses the point that we still have to interface with the rest of society, and they are not going to taboo “life”. If you’re trying to make the case in the public arena that a cryonically suspended person deserves legal right RRR, which was previously available only to ‘living’ people, then you are going to get asked questions such as “well, what should ‘life’ mean?”, and it will be helpful to have an answer prepared in advance.
Just a random thought: if we did synthesise a living being that way, we’d probably be using information from another living being. Does it matter so much where the actual atoms come from?
Various religions predict that humans can’t create life, that being a power reserved solely for deities (according to their theology).
We don’t have to understand what they mean by “life” or “create life”, nor does what they mean have to be well defined, in order for us to understand that it is (to them) an important issue and that when scientists do this it will have real social consequences, which are worth thinking about and planning for in advance.
Well, obviously not; all atoms of the same type are the same. I assumed he meant molecules. (As in, able to synthesize all biologically relevant molecules. All molecules of the same type are the same too, of course.)