I had another complaint about that tweet, which… you do not seem to have, but I want to bring up anyway.
Why do we assume that ‘consciousness’ or ‘sentience’ implies ‘morally relevant’? And that a lack of consciousness (if we could prove that) would also imply ‘not morally relevant’?
It seems bad to me to torture chickens even if it turns out they aren’t self-aware. But lots of people seem to take this as a major crux for them.
If I torture to death a permanently brain-damaged comatose person whom no one will miss, is that ‘fine’?
I am angry about this assumption; it seems too convenient.
Torturing chickens or brain-dead people is upsetting and horrible and distasteful to me. I don’t think it’s causing any direct harm or pain to the chicken/person, though.
I still judge a human’s character if they find these things fun and amusing. People watch this kind of thing (torture of humans/other animals) on Netflix all the time, for all sorts of good and bad reasons.
Claim: Many things are happening on a below-consciousness level that ‘matter’ to a person. And if you disrupted those things without changing a person’s subjective experience of them (or did it without their notice), this should still count as harm.
This idea that ‘harm’, and the degree of that harm, is mostly a matter of the subjective experience of it goes against my model of trauma and suffering.
Trauma is stored in the body whether we are conscious of it or not. And in fact I think many people are not conscious of their traumas. I’d still call it ‘harm’ regardless of their conscious awareness.
I have friends who were circumcised before they could form memories. They don’t remember it. Through healing work, or through other signs of trauma, they realized that this early surgery was in fact likely traumatic. I think Eliezer is sort of saying that this only counts as harm to the degree that it consciously affects them later, or something? I disagree with this take, and I think it goes against moral intuition. (If one sees a baby screaming in pain, the impulse is to relieve their ‘pain’ even if they might not be having a conscious experience of it.)
If I take a “non-sentient” chicken and cut off its wings, and I watch it as it helplessly tries to fly repeatedly, but is unable to, this strikes me as a form of harm to the chicken and its values even if the chicken is not having a subjective experience of its condition.
Also, from my investigations, much suffering does not reach the level of awareness. When a person investigates very closely and zooms in on experiences (such as through meditation), suffering is ‘found’ to be ‘occurring’ at a level of granularity and detail that was not previously accessible. But becoming aware of this suffering does not increase the amount of suffering that was occurring; you just become aware of the amount that was already there. It’s an “oh” moment. And this can actually help relieve the suffering, by becoming aware of it.
This suggests that beings who lack the awareness and observational capacity to see their own condition may actually be suffering more. This accords with my own journey in relieving personal suffering. More awareness was generally helpful. Whereas as a child, I was more ‘braindead’ in some way. Not very ‘conscious’.
One could make similar inquiries into ‘dissociation’. If a person is regularly dissociated and doesn’t feel things very intensely, does it make it more okay to hurt them?
Also my model of pain is that pain != suffering, which might be relevant here. Not sure.
If I take a “non-sentient” chicken and cut off its wings, and I watch it as it helplessly tries to fly repeatedly, but is unable to, this strikes me as a form of harm to the chicken and its values even if the chicken is not having a subjective experience of its condition.
I’m curious how you would distinguish between entities that can be harmed in a morally relevant way and entities that cannot. I use subjective experience to make this distinction, but it sounds like you’re using something like—thwarted intentions? telos-violation? I suspect we’d both agree that chickens are morally relevant and (say) pencils are not, and that snapping a pencil in half is not a morally-relevant action. But I’m curious what criterion you’re using to draw that boundary.
One could make similar inquiries into ‘dissociation’. If a person is regularly dissociated and doesn’t feel things very intensely, does it make it more okay to hurt them?
This is an interesting point; will think about it more.
Typically in questions of ethics, I factor the problem into two sub-questions:
1. Game theory: ought I care about other agents’ values because we have the potential to affect each other?
2. Ben’s preferences: do I personally care about this agent and them having their desires satisfied?
For the second, it’s on the table whether I care directly about chickens. I think at minimum I care about them the way I care about characters in like Undertale or something, where they’re not real but I imbue meaning into them and their lives.
That said, it’s also on the table to me that a lot of my deeply felt feelings about why it’s horrible to be cruel to chickens are similar to my deeply felt feelings of being terrified when I am standing on a glass bridge and looking down. I feel nauseous and like running and a bit like screaming for fear of falling; and yet there is nothing actually to be afraid of.
If I imagine someone repeatedly playing Undertale to kill all the characters in ways that make the characters maximally ‘upset’, this seems tasteless and a touch cruel, but not because the characters are conscious. Relatedly, if I found out that someone had built a profitable business that somehow required incidentally running massive numbers of simulations of the worst endings for all the characters in Undertale (e.g. some part of their very complex computer systems had hit an equilibrium of repeatedly computing this, and changing it wasn’t enough of an economic bottleneck to be worth the time/money cost of fixing), this would again seem kind of distasteful, but in the present world it would not be very high on my list of things to fix; it would not make the top 1000.
For the first, suppose I do want to engage in game theory with chickens. Then I think all your (excellent) points about consciousness are directly applicable. You’re quite right that suffering doesn’t need to be conscious; I have often become aware of some way I had been averse to thinking about a subject, or scared of a person for no good reason, that was a major impediment to having a great career and great relationships, in ways that were “outside” my conscious experience. (Being more ‘braindead’.) I would have immensely appreciated someone helping me realize and fix these things about myself that were outside my conscious awareness.
Insofar as the chickens are having their wings clipped and kept in cages, it’s very clear that their intentions and desires are being stunted. On a similar note, I think all the points in Dormin’s essay Against Dog Ownership apply regardless of whether dogs are conscious — that the meaning dogs look for in life is not found in the submissive and lonely inner-city life that most of them experience. These lay out clear ways to be much kinder to a chicken or dog and to respect their desires.
But there is a step of the argument missing here. I think some people believe there are compelling arguments that it’s worth engaging in game theory with chickens even if I think they’re only as real as characters in Undertale; but I have not read an argument for this that I find compelling.
The idea is that if we suppose chickens are indeed only as real as Undertale characters, I might still care about them because we have shared goals or something. Here’s a very concrete story where that would be the case: if someone made a human-level AI with Sans’ personality, and he was working to build a universe kind of like the universe I want to live in with things like LessWrong and Sea Shanties and Dostoyevsky in it, then I would go out of my way to – say – right injustices against him; and I hope he would do the same for me, because I want everyone to know that such agents will be defended by each other.
I think some people believe that humans and chickens have similar goals in this way in the extreme, but I don’t agree. I don’t think I would have much of a place in a chicken utopia, nor do I expect to find much of value in it.
Btw, coming at it from a different angle: Jessicata raises the hypothesis (in her recent post) that people put so much weight on ‘consciousness’ as a determinant of moral weight because it is relatively illegible and they believe it to be outside the realm of things that civilization currently has a scientific understanding of, so that they can talk about it more freely and without the incredibly high level of undercutting and scrutiny that is applied to scientific hypotheses. Quote:
Consciousness is related to moral patiency (in that e.g. animal consciousness is regarded as an argument in favor of treating animals as moral patients), and is notoriously difficult to discuss. I hypothesize that a lot of what is going on here is that:
1. There are many beliefs/representations that are used in different contexts to make decisions or say things.
2. The scientific method has criteria for discarding beliefs/representations, e.g. in cases of unfalsifiability, falsification by evidence, or complexity that is too high.
3. A scientific worldview will, therefore, contain a subset of the set of all beliefs had by someone.
4. It is unclear how to find the rest of the beliefs in the scientific worldview, since many have been discarded.
5. There is, therefore, a desire to be able to refer to beliefs/representations that didn’t make it into the scientific worldview, but which are still used to make decisions or say things; “consciousness” is a way of referring to beliefs/representations in a way inclusive of non-scientific beliefs.
I don’t think that was my point exactly. Rather, my point is that not all representations used by minds to process information make it into the scientific worldview, so there is a leftover component that is still cared about. That doesn’t mean people will think consciousness is more important than scientific information, and indeed scientific theories are conscious to at least some people.
Separately, many people have a desire to increase the importance of illegible things to reduce constraint, which is your hypothesis; I think this is an important factor but it wasn’t what I was saying.
Eliezer later states that he is referring to qualia specifically, which for me are (within a rounding error) totally equivalent to moral relevance.
Why is that? You’re still tying moral relevance to a subjective experience?
Basically yes, I care about the subjective experiences of entities. I’m curious about the use of the word “still” here. This implies you used to have a similar view to mine but changed it; if so, what made you change your mind? Or have I just missed out on some massive shift in the discourse surrounding consciousness and moral weight? If the latter is the case (which it might be, since I’m not plugged into a huge number of moral philosophy sources), that might explain some of my confusion.
People already implicitly consider your example to be acceptable, given that people in vegetative states are held in conditions of isolation that would be considered torture if they were counterfactually conscious, and many people support being allowed to kill/euthanize them in cases such as Terri Schiavo’s.
I’ve often thought about this, and this is the conclusion I’ve reached.
There would need to be some criterion that separates the moral from the immoral. Given that, consciousness (i.e. self-modelling) seems like the best criterion given our current knowledge. Obviously, there are gaps (like the comatose patient you mention), but we currently do not have a better metric to latch on to.
Why wouldn’t the ability to suffer be the criterion? Isn’t that built into the concept of sentience? “Sentient” literally means “having senses” but is often used as a synonym for “moral patient”.