why do people equate consciousness & sentience with moral patienthood? your close circle is not more conscious or more sentient than people far away, but you care about your close circle more anyways. unless you are SBF or Gandhi
Ethics is a global concept, not a collection of local ones. The fact that I care more about myself than about people far away from me doesn’t mean that this caring makes an ethical difference.
Re: moral patienthood, I understand the Sam Harris position (paraphrased by him here as “Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe.”) as saying that anything else that supposedly matters, only matters because conscious minds care about it. Like, a painting has no more intrinsic value in the universe than any other random arrangement of atoms like a rock; its value stems purely from conscious minds caring about it. Same with concepts like beauty and virtue and biodiversity and anything else that’s not directly about conscious minds.
And re: caring more about one’s close circle: well, everyone in your close circle has their own close circle they care about, and if you repeat that exercise often enough, the vast majority of people in the world are in someone’s close circle.
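To make that concrete, here’s a toy sketch (my own illustration, with a made-up friendship graph) of how repeatedly expanding close circles ends up covering everyone in the connected part of a social network:

```python
# Toy illustration of "everyone is in someone's close circle": repeatedly
# taking the close circle of everyone already reached (a breadth-first
# search) covers the whole connected component. The graph below is invented.
close_circle = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dan"},
    "carol": {"alice", "erin"},
    "dan":   {"bob"},
    "erin":  {"carol", "frank"},
    "frank": {"erin"},
}

def everyone_reached(start: str) -> set[str]:
    """Expand close circles outward from one person until nothing new is added."""
    seen = {start}
    frontier = {start}
    while frontier:
        frontier = {friend
                    for person in frontier
                    for friend in close_circle.get(person, set())} - seen
        seen |= frontier
    return seen

print(everyone_reached("alice"))  # all six people in this toy graph
```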
Well, presumably because they’re not equating “moral patienthood” with “object of my personal caring”.
Something can be a moral patient, whom you care about to the extent you’re compelled by moral claims, or whose rights you are deontologically prohibited from trampling on, without your caring about that being in particular.
You might make the claim that calling something a moral patient is the same as saying that you care (at least a little bit) about its wellbeing, but not everyone buys that claim.
your close circle is not more conscious or more sentient than people far away, but you care about your close circle more anyways
Or, more specifically, this is a non sequitur with respect to my deontology, which holds regardless of whether I personally like or privately wish for the wellbeing of any particular entity.
One argument I’ve encountered is that sentient creatures are precisely those creatures that we can form cooperative agreements with. (Counter-argument: one might think that e.g. the relationship with a pet is also a cooperative one [perhaps more obviously if you train them to do something important, and you feed them], while also thinking that pets aren’t sentient.)
Another is that some people’s approach to the Prisoner’s Dilemma is to decide “Anyone who’s sufficiently similar to me can be expected to make the same choice as me, and it’s best for all of us if we cooperate, so I’ll cooperate when encountering them”; and some of them may figure that sentience alone is sufficient similarity.
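As a rough sketch of that decision rule (my illustration; the similarity measure and the threshold are invented placeholders, and someone taking the “sentience alone is enough” view would just score any sentient agent as highly similar):

```python
# Toy version of "cooperate with agents similar enough to mirror my choice"
# in a one-shot Prisoner's Dilemma. similarity() and the 0.9 threshold are
# hypothetical stand-ins, not anyone's actual proposal.

# Standard payoff matrix: years in prison for (me, them); lower is better for me.
PAYOFFS = {
    ("C", "C"): (1, 1),
    ("C", "D"): (3, 0),
    ("D", "C"): (0, 3),
    ("D", "D"): (2, 2),
}

def similarity(other: str) -> float:
    """Hypothetical similarity score; here, sentience alone counts as similar."""
    return 1.0 if other == "sentient being" else 0.0

def my_move(other: str, threshold: float = 0.9) -> str:
    if similarity(other) >= threshold:
        # Expect them to mirror my choice, so only (C, C) and (D, D) are live
        # options; cooperate iff mutual cooperation is better for me.
        return "C" if PAYOFFS[("C", "C")][0] < PAYOFFS[("D", "D")][0] else "D"
    # Against an agent with no such correlation, fall back to the dominant move.
    return "D"

print(my_move("sentient being"))  # -> C
print(my_move("rock"))            # -> D
```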
We need better, more specific terms to break up the horrible mishmash of correlated-but-not-truly-identical ideas bound up in words like “consciousness” and “sentience”.
At the very least, let’s distinguish sentient vs sapient. All animals are sentient; only smart ones are sapient (maybe only humans, depending on how strict your definition is).
https://english.stackexchange.com/questions/594810/is-there-a-word-meaning-both-sentient-and-sapient
Some other terms needing disambiguation…
Current LLMs are very knowledgeable, somewhat but not very intelligent, somewhat creative, and lack coherence. They have some self-awareness, but it seems to lack some aspects that animals have around “feeling self state”; some researchers are working on adding these aspects to experimental architectures.
What a mess our words based on observing ourselves make of trying to divide reality at the joints when we analyze non-human entities like animals and AI!