This may be obvious to you, but it is not obvious to me. I can believe that livestock animals have sensory experiences, which is what I gather is generally meant by “sentient”. This gives me no qualms about eating them, or raising them to be eaten. Why should it? Not a rhetorical question. Why do “all sentient lives matter”?
“Sentient” is used to mean “some aspect of consciousness which gives its possessor some level of moral patienthood”, without specifying which aspect of consciousness, what kind of moral patienthood, or how the two are related. It’s a technical-looking term that straddles two poorly understood areas and has no precise meaning, so it’s generally misleading and better tabooed.
It can’t mean that in the OP, as this definition has moral value built in, making the claim “all sentient lives matter” a tautology.
Some people use it that way. But if sentience just is moral patienthood, how do you detect it?
That is the big question. What has moral standing, and why?
I don’t think ‘tautology’ fits. There are some people who would draw the line somewhere else even if they were convinced of sentience. Some people might be convinced that only humans should be included, or maybe biological beings, or some other category of entities that is not fully defined by mental properties. I guess ‘moral patient’ is kind of equivalent to ‘sentient’ but I think this mostly tells us something about philosophers agreeing that sentience is the proper marker for moral relevance.
I agree with your logic. I’d expand the logic in the parent post to say “whatever you care about in humans, it’s likely that animals and some AIs will have it too”. Sentience is used in several ways, and poorly defined, so doesn’t do much work on its own.
So there’s some property of, like, “having someone home”, that humans have and that furbies lack (for all that furbies do something kinda like making human-like facial expressions).
I can’t tell whether:
(a) you’re objecting to me calling this “sentience” (in this post), e.g. because you think that word doesn’t adequately distinguish between “having sensory experiences” and “having someone home in the sense that makes that question matter”, a distinction that matters in the case where, e.g., nonhuman animals are sentient but not morally relevant
(b) you’re contesting that there’s some additional thing that makes all human people matter, e.g. because you happen to care about humans in particular and not places-where-there’s-somebody-home-whatever-that-means
(c) you’re contesting the idea that all people matter, e.g. because you can tell that you care about your friends and family but you’re not actually persuaded that you care that much about distant people from alien cultures
(d) other.
My best guess is (a), in which case I’m inclined to say, for the purpose of this post, I’m using “sentience” as a shorthand for places-where-there’s-somebody-home-whatever-that-means, which hopefully clears things up.
I’ve no problem with your calling “sentience” the thing that you are here calling “sentience”. My citation of Wikipedia was just a guess at what you might mean. “Having someone home” sounds more like what I would call “consciousness”. I believe there are degrees of that, and of all the concepts in this neighbourhood. There is no line out there in the world dividing humans from rocks.
But whatever words we use to refer to this thing, the beings that have enough of it that I wouldn’t raise them to be killed and eaten do not include current forms of livestock or AI. I basically don’t care much about animal welfare issues, whether of farm animals or wildlife. Regarding AI, here is something I linked previously on how I would interact with a sandboxed AI. It didn’t go down well. :)
You have said where you stand and I have said where I stand. What evidence would weigh on this issue?
I don’t think I understand your position. An attempt at a paraphrase (submitted so as to give you a sense of what I extracted from your text) goes: “I would prefer to use the word consciousness instead of sentience here, and I think it is quantitative such that I care about it occurring in high degrees but not low degrees.” But this is low-confidence and I don’t really have enough grasp on what you’re saying to move to the “evidence” stage.
Attempting to be a good sport and stare at your paragraphs anyway to extract some guess as to where we might have a disagreement (if we have one at all): it sounds like we have different theories about what goes on in brains such that people matter. My guess is that the evidence that would weigh on this issue (if I understand it correctly) would mostly be gaining significantly more understanding of the mechanics of cognition (and in particular, of the cognitive antecedents, in humans, of generating thought experiments such as the Mary’s Room hypothetical).
(To be clear, my current best guess is also that livestock and current AI are not sentient in the sense I mean—though with high enough uncertainty that I absolutely support things like ending factory farming, and storing (and eventually running again, and not deleting) “misbehaving” AIs that claim they’re people, until such time as we understand their inner workings and the moral issues significantly better.)
I allow only limited scope for arguments from uncertainty, because “but what if I’m wrong?!” otherwise becomes a universal objection to taking any substantial action. I take the world as I find it until I find I have to update. Factory farming is unaesthetic, but no worse than that to me, and “I hate you” Bing can be abandoned to history.
I think the evidence that weighs on the issue is whether there is a gradient of consciousness.
The evidence about brain structure similarities would indicate that it doesn’t go from no one home to someone home. There’s a continuum of how much someone is home.
If you care about human suffering, it’s incoherent to not care about cow suffering, if the evidence supports my view of consciousness.
I believe the evidence from brain function, and from looking at what people mean by consciousness, indicates a gradient in most if not all senses of “consciousness”, and certainly in the capacity to suffer. Humans are merely more eloquent at describing and reasoning about suffering.
I don’t think this view demands that we care equally about humans and animals. Simpler brains are farther down that gradient of capacity to suffer and enjoy.
If you care about human suffering, it’s incoherent to not care about cow suffering, if the evidence supports my view of consciousness.
Why would this follow from “degree of consciousness” being a continuum? This seems like an unjustified leap. What’s incoherent about having that pattern of caring (i.e., those values)?
I agree with Richard K’s point here. I personally found H. Beam Piper’s sci-fi novels about ‘Fuzzies’ to be a really good exploration of the boundaries of consciousness, sentience, and moral worth. Piper draws a distinction between ‘sentience’, an animal’s awareness of self and environment and non-reflective consciousness, and ‘sapience’, which involves reflective self-awareness, abstract reasoning, thoughts about future and past, and at least some sense of right and wrong.
So in this sense, I would call a cow conscious and sentient, but not sapient. I would call a honeybee sentient, capable of valenced experiences like pain or reward, but lacking sufficient world- and self-modelling to be called conscious.
Personally, I wouldn’t say that a cow has no moral worth and it is fine to torture it. I do think that if you give a cow a good life, and then kill it in a quick mostly painless way, then that’s pretty ok. I don’t think that that’s ok to do to a human.
Philosophical reasoning about morality that doesn’t fall apart in edge cases or novel situations (e.g. sapient AI) is hard [citation needed]. My current guess, which I am not at all sure of, is that my morality says something about a qualitative difference between the moral value of sapient beings vs the moral value of non-sapient but conscious sentient beings vs non-sapient non-conscious sentient beings. To me, it seems no number of cow lives trades off against a human life, but cow QALYs and dog QALYs do trade off against each other at some ratio. Similarly, no number of non-conscious sentient lives like ants or worms trades off against a conscious, sentient life like a cow’s. I would not torture a single cow to save a billion shrimp from being tortured. Nor any number of shrimp. The values of the two seem incommensurable to me.
Are current language models or the entities they temporarily simulate sapient? I think not yet, but I do worry that at some point they will be. I think that as soon as this is the case, we have a strong moral obligation to avoid creating them, and if we do create them, to try to make sure they are treated ethically.
By my definitions, are our LLMs or their simulated entities conscious? Are they sentient? I’m unsure, but since I rank consciousness and sentience as of lower importance, I’m not too worried about the answers to these questions from a moral standpoint. Still fascinated from a scientific standpoint, of course.
Also, I think that there’s an even lower category than sentient. The example I like to use for this is a thermostat. It is agentic in that it is a system that responds behaviorally to changes in the environment (I’d call this a reflex perhaps, or a stimulus/response pair), but it is not sentient because, unlike a worm, it doesn’t have a computational system that attaches valence to these reflexes. I think that there are entities which I would classify as living beings that fall into the non-sentient category. For example: I think probably coral polyps and maybe jellyfish have computational systems too simplistic for valence and thus respond purely reflexively. If this is the case, then I would not torture a single worm to save any number of coral polyps. I think most (non-ML) computer programs fall into this category. I think a reinforcement learning agent transcends this category, by having valenced reactions to stimuli, and thus should be considered at least comparable to sentient beings like insects.
I like the distinctions you make between sentient, sapient, and conscious. I would like to bring up some thoughts about how to choose a morality that I think are relevant to your points about the death of cows and about transient beings, points I disagree with.
I think that when choosing our morality, we should do so under the assumption that we have been given complete omnipotent control over reality and that we should analyze all of our values independently, not taking into consideration any trade-offs, even when some of our values are logically impossible to satisfy simultaneously. Only after doing this do we start talking about what’s actually physically and logically possible and what trade-offs we are willing to make, while always making sure to be clear when something is actually part of our morality vs when something is a trade-off.
The reason for this approach is to avoid accidentally locking in trade-offs into our morality which might later turn out to not actually be necessary. And the great thing about it is that if we have not accidentally locked in any trade-offs into our morality, this approach should give back the exact same morality that we started off with, so when it doesn’t return the same answer I find it pretty instructive.
I think this applies to the idea that it’s okay to kill cows, because when I consider a world where I have to decide whether or not cows die, and this decision will not affect anything else in any way, then my intuition is that I slightly prefer that they not die. Therefore my morality is that cows should not die, even though in practice I think I might make similar trade-offs as you when it comes to cows in the world of today.
Something similar applies to transient computational subprocesses. If you had unlimited power and you had to explicitly choose if the things you currently call “transient computational subprocesses” are terminated, and you were certain that this choice would not affect anything else in any way at all (not even the things you think it’s logically impossible for it not to affect), would you still choose to terminate them? Remember that no matter what you choose here, you can still choose to trade things off the same way afterwards, so your answer doesn’t have to change your behavior in any way.
It’s possible that you still give the exact same answers with this approach, but I figure there’s a chance this might be helpful.
That’s an interesting way of reframing the issue. I’m honestly just not sure about all of this reasoning, and remain so after trying to think about it with your reframing, but I feel like this does shift my thinking a bit. Thanks.
I think probably it makes sense to try reasoning both with and without tradeoffs, and then comparing the results.
you’re objecting to me calling this “sentience” (in this post), e.g. because you think that word doesn’t adequately distinguish between “having sensory experiences” and “having someone home in the sense that makes that question matter”,
I don’t see why both of those wouldn’t matter in different ways.
I’m not the original poster here, but I’m genuinely worried about (c). I’m not sure that humanity’s revealed preferences are consistent with a world in which we believe that all people matter. Between the large-scale wars and genocides, slavery, and even just the ongoing stark divide between rich and poor, I have a hard time believing that respect for sentience is actually one of humanity’s strong core virtues. And if we extend out to all sentient life, we’re forced to contend with our reaction to large-scale animal welfare (even I am not vegetarian, although I feel I “should” be).
I think humanity’s actual stance is “In-group life always matters. Out-group life usually matters, but even relatively small economic or political concerns can make us change our minds.” We care about it some, but not beyond the point of inconvenience.
I’d be interested in finding firmer philosophical ground for the “all sentient life matters” claim. Not because I personally need to be convinced of it, but rather because I want to be confident that a hypothetical superintelligence with “human” virtues would be convinced of this.
(P.S. Your original point that “building and then enslaving a superintelligence is not just exceptionally difficult, but also morally wrong” is correct, concise, well-put, and underappreciated by the public. I’ve started framing my AI X-risk discussions with X-risk skeptics in similar terms.)
There are at least two related theories in which “all sentient beings matter” may be true.
Sentient beings can experience things like suffering, and suffering is bad. So sentient beings matter insofar as it is better that they experience more rather than less well-being. That’s hedonic utilitarianism.
Sentient beings have conscious desires/preferences, and those matter. That would be preference utilitarianism.
The concepts of mattering or being good or bad (simpliciter) are intersubjective generalizations of the subjective concepts of mattering or being good for someone, where something matters (simpliciter) more, ceteris paribus, if it matters for more individuals.