Typically in questions of ethics, I factor the problem into two sub-questions:
Game theory: ought I care about other agents’ values because we have the potential to affect each other?
Ben’s preferences: do I personally care about this agent and them having their desires satisfied?
For the second, it’s on the table whether I care directly about chickens. I think at minimum I care about them the way I care about characters in like Undertale or something, where they’re not real but I imbue meaning into them and their lives.
That said, it’s also on the table to me that a lot of my deeply felt feelings about why it’s horrible to be cruel to chickens are similar to my deeply felt feelings of being terrified when I am standing on a glass bridge and looking down. I feel nauseous and like running and a bit like screaming for fear of falling; and yet there is nothing actually to be afraid of.
If I imagine someone repeatedly playing Undertale to kill all the characters in ways that make the characters maximally ‘upset’, this seems tasteless and a touch cruel, but not because the characters are conscious. Relatedly, if I found out that someone had built a profitable business that somehow required incidentally running massive numbers of simulations of the worst endings for all the characters in Undertale (e.g. some part of their very complex computer systems had hit an equilibrium of repeatedly computing this, and the waste wasn’t a big enough economic bottleneck for changing it to be worth the time/money cost), this would again seem kind of distasteful, but in the present world it would not be very high on my list of things to fix; it would not make the top 1000.
For the first, suppose I do want to engage in game theory with chickens. Then I think all your (excellent) points about consciousness are directly applicable. You’re quite right that suffering doesn’t need to be conscious: I have often become aware of ways that I had been averse to thinking about a subject, or scared of a person for no good reason, that were major impediments to having a great career and great relationships, in ways that were “outside” my conscious experience. (Being more ‘braindead’.) I would have immensely appreciated someone helping me realize and fix these things about myself that were outside my conscious awareness.
Insofar as the chickens are having their wings clipped and are kept in cages, it’s very clear that their intentions and desires are being stunted. On a similar note, I think all the points in Dormin’s essay Against Dog Ownership apply regardless of whether dogs are conscious: the meaning dogs look for in life is not found in the submissive and lonely inner-city life that most of them experience. Both of these lay out clear ways to be much kinder to a chicken or dog and to respect their desires.
But there is a step of the argument missing here. I think some people believe arguments claiming it’s worth engaging in game theory with chickens even if I think they’re only as real as characters in Undertale, but I have not read an argument that I find compelling.
The idea is that even if we suppose chickens are indeed only as real as Undertale characters, I might still care about them because we have shared goals or something. Here’s a very concrete story where that would be the case: if someone made a human-level AI with Sans’ personality, and he was working to build a universe kind of like the universe I want to live in, with things like LessWrong and Sea Shanties and Dostoyevsky in it, then I would go out of my way to, say, right injustices against him; and I hope he would do the same for me, because I want everyone to know that such agents will be defended by each other.
I think some people believe that, in the extreme, humans and chickens have similar goals in this way, but I don’t agree. I don’t think I would have much of a place in a chicken utopia, nor do I expect to find much of value in it.