Ah. By ‘sentient’ I mean something that feels, by ‘sapient’ something that thinks.
To be more fine-grained about it, I’d define functional sentience as having affective (and perhaps perceptual) cognitive states (in a sense broad enough that it’s obvious cows have them, and equally obvious tulips don’t), and phenomenal sentience as having a first-person ‘point of view’ (though I’m an eliminativist about phenomenal consciousness, so my overtures to it above can be treated as a sort of extended thought experiment).
Similarly, we might distinguish a low-level kind of sapience (the ability to form and manipulate mental representations of situations, generate expectations and generalizations, and update based on new information) from a higher-level kind closer to human sapience (perhaps involving abstract and/or hyper-productive representations à la language).
Based on those definitions, I’d say it’s obvious cows are functionally sentient and have low-level sapience, extremely unlikely they have high-level sapience, and unclear whether they have phenomenal sentience.
Rob, many thanks for a thoughtful discussion above. But on one point, I’m confused. You say of cows that it’s “unclear whether they have phenomenal sentience.” Are you using the term “sentience” in the standard dictionary sense [“Sentience is the ability to feel, perceive, or be conscious, or to experience subjectivity”: http://en.wikipedia.org/wiki/Sentience ], or are you using the term in some revisionary sense? At least if we discount radical philosophical scepticism about other minds, cows and other nonhuman vertebrates undergo phenomenal pain, anxiety, sadness, happiness, and a whole range of phenomenal sensory experiences. To be sure, cows are barely more sapient than a prelinguistic human toddler (though see e.g. http://www.appliedanimalbehaviour.com/article/S0168-1591(03)00294-6/abstract and http://www.dailymail.co.uk/news/article-2006359/Moo-dini-Cow-unusual-intelligence-opens-farm-gate-tongue-herd-escape-shed.html). But their limited capacity for abstract reasoning is a separate issue.
Neither. I’m claiming that there’s a monstrous ambiguity in all of those definitions, and I’m tabooing ‘sentience’ and replacing it with two clearer terms. These terms may still be problematic, but at least the ways in which they’re problematic are less ambiguous.
I distinguished functional sentience from phenomenal sentience. Functional sentience means having all the standard behaviors and world-tracking states associated with joy, hunger, itchiness, etc.; it’s defined in third-person terms. Phenomenal sentience means having a subjective vantage point on the world; being sentient in that sense means that it feels some way (in a very vague sense) to be such a being, whereas it wouldn’t ‘feel’ any way at all to be, for example, a rock.
To see the distinction, imagine that we built a robot, or encountered an alien species, that could simulate the behaviors of sentient beings in a skillful and dynamic way without actually having any experiences of its own. Would such a being necessarily be sentient? Does consistently crying out and withdrawing from some stimulus require that you actually be in pain, or could you be a mindless automaton? My answer is ‘yes, in the functional sense; and maybe, in the phenomenal sense’. The phenomenal sense is a bit mysterious, in large part because the intuitive idea of it arises from first-person introspection rather than from third-person modeling or description; hence it’s difficult (perhaps impossible!) to find definitive third-person indicators of this first-person class of properties.
I take ‘radical philosophical scepticism about other minds’ to entail that nothing has a mind except me. In other words, you’re claiming that the only way to doubt that there’s something it’s subjectively like to be a cow is also to doubt that there’s something it’s subjectively like to be any human other than myself.
I find this spectacularly implausible. Again, I’m an eliminativist, but I’ll put myself in a phenomenal realist’s shoes. The neural architecture humans share with one another is vast compared to the architecture humans share with cows. And phenomenal consciousness is extremely poorly understood, so we have no idea what evolutionary function it might serve or what mechanisms might need to be in place before it arises in any recognizable form. To that extent, we must also be extremely uncertain about (a) at what point(s) first-person subjectivity arises phylogenetically, and (b) at what point it arises developmentally.
This phylogeny–development analogy is very important. If I doubt that cows are phenomenally conscious, I might also doubt that I myself was conscious as a baby, or relatively late into my fetal development. That’s perhaps a little surprising, but it’s hardly a devastating ‘radical scepticism’; it’s a perfectly tenable hypothesis. By contrast, to doubt that my friends and family members are phenomenally conscious would be like doubting that I myself was phenomenally conscious at age 5, or at age 20, or even last month. (Perhaps my phenomenal memories are confabulations.) Equating these two forms of scepticism will require a pretty devastating argument! What do you have in mind?