We don’t know what the neural or cognitive correlates of ‘sentience’ are, but that doesn’t mean there is no such thing. And, sure, the process of learning what the correlates are may involve at least some revision to our concept of ‘sentience’; but this too doesn’t imply nihilism about our sentience-related moral judgments, because our moral judgments were always pointing at a vague empirical cluster rather than being predicated upon a specific set of exact necessary and sufficient conditions.
“Empirical cluster” is a good way to look at it[1]. The way I model this conversation so far is:
Rob’s point of view: X (sentience / personhood / whatever empathy is “trying” to detect) is an empirical cluster which obviously includes humans and doesn’t include rocks. A priori, we don’t know about cats: they are not in the “training set”, so to speak, requiring generalization. Vanessa is saying that cats, like humans, evoke empathy, therefore cats are in X. But, this is unsound! We don’t know that empathy is a sufficient condition! Cats and humans have important cognitive differences! Someday we’ll find a really good gears model that fits the data points we have (which include humans as a positive example and rocks as a negative example, but not cats) and only then we can decide whether cats are in X.
Vanessa’s point of view: X is an empirical cluster which obviously includes humans and cats, and doesn’t include rocks. Cats are totally inside the training set! Saying that “cats and humans have cognitive differences, therefore we need a gears model to decide whether X contains cats” makes as much sense as “women and men have cognitive differences, therefore we need a gears model to decide whether X contains [the other sex]”.
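To make the two framings above concrete, here is a minimal toy sketch in Python. It is not from the original exchange: the Entity features, the thresholds, and both classifier functions are invented purely to illustrate how the same cat can look in-distribution under one view and out-of-distribution under the other.

```python
# Toy sketch of the "empirical cluster / training set" framing.
# All features, numbers, and thresholds are invented for illustration only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Entity:
    name: str
    evokes_empathy: float        # 0..1: how strongly the entity evokes empathy
    cognitive_similarity: float  # 0..1: rough similarity to human cognition

# The agreed-upon "training set": humans are a positive example of X, rocks a negative one.
TRAINING_SET = [
    (Entity("human", evokes_empathy=1.0, cognitive_similarity=1.0), True),
    (Entity("rock", evokes_empathy=0.0, cognitive_similarity=0.0), False),
]

cat = Entity("cat", evokes_empathy=0.9, cognitive_similarity=0.5)

def vanessa_is_in_X(e: Entity) -> bool:
    # Empathy is treated as the most direct evidence available,
    # so cats fall squarely inside the training distribution.
    return e.evokes_empathy > 0.5

def rob_is_in_X(e: Entity) -> Optional[bool]:
    # Entities far from both labeled examples are left undecided until
    # a gears-level model says which features actually matter.
    if e.cognitive_similarity > 0.9:
        return True
    if e.cognitive_similarity < 0.1:
        return False
    return None  # requires generalization; wait for a better model

print(vanessa_is_in_X(cat))  # True
print(rob_is_in_X(cat))      # None
```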
This doesn’t really explain where those different assumptions are coming from, though. For me, empathy is essentially the feeling that I care about something in the caring-about-people sense, so it’s almost tautologically the most direct evidence there is. Yes, finding out more facts can change how much empathy I feel towards something, but my current level of empathy is still the obvious baseline for how much empathy I’ll feel in the future.
On the other hand, Rob… I’m guessing that Rob is trying to get something which looks more like “objective morality” (even if not fully subscribing to moral objectivism) and therefore appealing to some kind of cognitive science seems overwhelmingly better to him than trusting emotions, even when we barely understand the relevant cognitive science? But, I’m not sure.
Although, like I said, I’m not talking about moral judgement here (which I see as referring to social norms / reputation systems or attempts to influence social norms / reputation systems), just about individual preferences.
Vanessa is saying that cats, like humans, evoke empathy, therefore cats are in X. But, this is unsound! We don’t know that empathy is a sufficient condition! Cats and humans have important cognitive differences! Someday we’ll find a really good gears model that fits the data points we have (which include humans as a positive example and rocks as a negative example, but not cats) and only then we can decide whether cats are in X.
Another way of seeing why this view is correct is to note that empathy can be evoked by fictional characters, by entities in dreams, etc. If I read a book or view a painting that makes me empathize with the fictional character, this does not make the fictional character sentient.
(It might be evidence that if the fictional character were real, it would be sentient. But that’s not sufficient for a strong ‘reduce everything to empathy’ view. Once you allow that empathy routinely misfires in this way—indeed, that empathy can be misfiring even while the empathizing person realizes this and is not inclined to treat the fictional character as a true moral patient in reality—you lose a lot of the original reason to think ‘it’s all about empathy’ in the first place.)
I’m guessing that Rob is trying to get something which looks more like “objective morality” (even if not fully subscribing to moral objectivism) and therefore appealing to some kind of cognitive science seems overwhelmingly better to him than trusting emotions, even when we barely understand the relevant cognitive science? But, I’m not sure.
I’m saying that insofar as feelings like ‘I should treat my cat well’ assume things about the world, they’re assuming things like “cats exist”, “cats have minds”, “cats’ minds can be in particular states that are relevantly similar to positively and negatively valenced experience in my own mind”, “the cat’s mind is affected by sensory information it acquires from the environment”, “my actions can affect which sensory information the cat acquires”...
The concept “mind” (insofar as it’s contentful and refers to anything at all) refers to various states or processes of brains. So there’s a straight line from ‘caring about cats’ welfare’ to ‘caring about cats’ minds’ to ‘caring about which states the cat’s brain is in’. If you already get off the train somewhere on that straight line, then I’m not sure why.
Anger is a state of mind, and therefore (in some sense) a state of brains. It would be a mistake to say ‘anger is just a matter of angry-seeming behaviors; it’s the behaviors that matter, not the brain state’. The behaviors are typically useful evidence about the brain state, but it’s still the brain state that we’re primarily discussing, and that we primarily care about.
(At least, ‘is this person’s brain actually angry?’ is the thing we mostly care about if it’s a friend we’re thinking about, or if we’re thinking about someone whose welfare and happiness matters to us. If we’re instead worried about someone physically attacking us, then sure, ‘are they going to exhibit angry-seeming behaviors?’ matters more in the moment than ‘are they really and truly angry in their heart of hearts?’.)
I expect some conceptual revision to be required to find the closest neural/cognitive correlate of ‘sentience’. But the same is plausibly true for ‘anger’, partly because anger is itself a thing that people typically think of as a sentient/conscious state!
One crude way of thinking about ‘sentience’ is that it’s just the disjunction of all the specific conscious states: anger, experiencing the color red, experiencing a sour taste, suffering, boredom...
Just as we can be uncertain about whether someone’s brain is ‘really’ angry, we can be uncertain about whether it’s experiencing any of the conscious states on the long list of candidates.
It would be obviously silly to say ‘we know with certainty that cats truly instantiate human-style anger in their brains, since, after all, my cat sometimes makes loud vocalizations and hisses at things’.
It would be even sillier to say ‘whether cats are angry purely consists in whether they exhibit loud vocalizations, hiss at things, etc.; there’s no further important question about how their brains work, even though brain state obviously matters in the case of humans, when we ascribe “anger” to a human’!
It isn’t any less silly to do those things in the case of the more general and abstract category than in the case of a concrete instance like ‘anger’.
Another way of seeing why this view is correct is to note that empathy can be evoked by fictional characters, by entities in dreams, etc. If I read a book or view a painting that makes me empathize with the fictional character, this does not make the fictional character sentient.
(It might be evidence that if the fictional character were real, it would be sentient. But that’s not sufficient for a strong ‘reduce everything to empathy’ view. Once you allow that empathy routinely misfires in this way—indeed, that empathy can be misfiring even while the empathizing person realizes this and is not inclined to treat the fictional character as a true moral patient in reality—you lose a lot of the original reason to think ‘it’s all about empathy’ in the first place.)
Good point! I agree that “I feel empathy towards X” is only sufficient to strongly[1] motivate me to help X if I also believe that X is “real”. But, I also believe that my interactions with cats are strong evidence that cats are “real”, despite my ignorance about the inner workings of cat brains. In exactly the same way, my interactions with humans are strong evidence that humans are “real”, despite my ignorance about human brains. And, people justifiably knew that other people are “real” even before it was discovered that the brain is responsible for cognition.
The concept “mind” (insofar as it’s contentful and refers to anything at all) refers to various states or processes of brains. So there’s a straight line from ‘caring about cats’ welfare’ to ‘caring about cats’ minds’ to ‘caring about which states the cat’s brain is in’. If you already get off the train somewhere on that straight line, then I’m not sure why.
I agree that there’s a straight line[2]. But, the reason we know brains are relevant is by observing that brain states are correlated with behavior. If, instead of discovering that cognition runs on brains, we had discovered that it runs on transistor circuits, or is somehow computed inside the liver, we would care about those transistor circuits / livers instead. So, your objection that “we don’t know enough about cat brains” is weak, since I do know that cat-brains produce cat-behavior, and given that correlation-with-behavior is the only reason we’re looking at brains in the first place, this knowledge counts for a lot, even if it’s far from a perfect picture of how cat brains work. I also don’t have a perfect picture of how human brains work, but I know enough (from observing behavior!) to conclude that I care about humans.
I actually do feel some preference that fictional stories in which too-horrible things happen not exist, even if I’m not consuming those stories, but that’s probably tangential.
I’m not sure I agree with “the concept of mind refers to various states or processes of brains”. We know that, for animals, there is a correspondence between minds and brains. But e.g. an AI can have a mind without having a brain. I guess you’re talking about “brains” which are not necessarily biological? But then are “mind” and “brain” just synonyms? Or does “brain” refer to some kind of strong reductionism? But, I can also imagine a different universe in which minds are ontologically fundamental ingredients of physics.
But you can still use behaviour/empathy to determine a lower cutoff of mind-similarity when you translate your utility function from its native ontology to real mind-states. Caring about everything that made you sad before doesn’t sound horrible, the way not caring about anything that didn’t make you sad would.
Not sure about Rob’s view, but I think a lot of people start out on this question from a quasi-dualistic perspective: some entities have “internal experiences”, “what-it’s-like-to-be-them”, basically some sort of invisible canvas on which internal experiences, including pleasure and pain, are projected. Then later, it comes to seem that basically everything is physical. So then they reason like “well, everything else in reality has eventually been reduced to physical things, so I’m not sure how, but eventually we will find a way to reduce the invisible canvases as well”. Then in principle, once we know how that reduction works, it could turn out that humans do have something corresponding to an invisible canvas but cats don’t.
As you might guess, I think this view of consciousness is somewhat confused, but it’s a sensible enough starting point in the absence of a reductionist theory of consciousness. I think the actual reduction looks more like an unbundling of the various functions that the ‘invisible canvas’ served in our previous models. So it seems likely that cats have states they find aversive and try to avoid, that they take in sensory input to build a local model of the world, that they perhaps have a global neuronal workspace, etc., all of which inclines me to have a certain amount of sympathy with them. What they probably don’t have is the meta-learned machinery which would make them think there is a hard problem of consciousness, but this doesn’t intuitively feel like it should make me care about them less.
I’m an eliminativist about phenomenal consciousness. :) So I’m pretty far from the dualist perspective, as these things go...!
But discovering that there are no souls doesn’t cause me to stop caring about human welfare. In the same way, discovering that there is no phenomenal consciousness doesn’t cause me to stop caring about human welfare.
Nor does it cause me to decide that ‘human welfare’ is purely a matter of ‘whether the human is smiling, whether they say they’re happy, etc.’. If someone trapped a suffering human brain inside a robot or flesh suit that perpetually smiles, and I learned of this fact, I wouldn’t go ‘Oh, well the part I care about is the external behavior, not the brain state’. I’d go ‘holy shit no’ and try to find a way to alleviate the brain’s suffering and give it a better way to communicate.
Smiling, saying you’re happy, etc. matter to me almost entirely because I believe they correlate with particular brain states (e.g., the closest neural correlate for the folk concept of ‘happiness’). I don’t need a full reduction of ‘happiness’ in order to know that it has something to do with the state of brains. Ditto ‘sentience’, to the extent there’s a nearest-recoverable-concept corresponding to the folk notion.