Why do people equate consciousness & sentience with moral patienthood? Your close circle is not more conscious or more sentient than people far away, but you care about your close circle more anyways. Unless you are SBF or Gandhi.
Ethics is a global concept, not many local ones. That I care more about myself than about people far away from me doesn’t mean that this makes an ethical difference.
I didn’t use the word “ethics” in my comment, so are you making a definitional statement to distinguish between [universal value system] and [subjective value system], or are you just authoritatively saying that I’m wrong?
Are you claiming moral realism? I don’t really believe that. If “ethics” is global, why should I care about “ethics”? Sorry if that sounds callous, I do actually care about the world, just trying to pin down what you mean.
Yeah definitional. I think “I should do x” means about the same as “It’s ethical to do x”. In the latter the indexical “I” has disappeared, indicating that it’s a global statement, not a local one, objective rather than subjective. But “I care about doing x” is local/subjective because it doesn’t contain words like “should”, “ethical”, or “moral patienthood”.
Re: moral patienthood, I understand the Sam Harris position (paraphrased by him here as “Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe.”) as saying that anything else that supposedly matters, only matters because conscious minds care about it. Like, a painting has no more intrinsic value in the universe than any other random arrangement of atoms like a rock; its value stems purely from conscious minds caring about it. Same with concepts like beauty and virtue and biodiversity and anything else that’s not directly about conscious minds.
And re: caring more about one’s close circle: well, everyone in your close circle has their own close circle they care about, and if you repeat that exercise often enough, the vast majority of people in the world are in someone’s close circle.
I definitely think that if I were not conscious then I would not coherently want things. But the fact that conscious minds are the only things that can truly care does not mean that conscious minds are the only things we should terminally care about.
The close-circle composition isn’t enough to justify Singerian altruism from egoist assumptions, because of the value falloff: with each degree of connection, I love the stranger less.
Well, if there were no minds to care about things, what would it even mean that something should be terminally cared about?
Re: value falloff: sure, but if you start with your close circle, and then aggregate the preferences of that close circle (whose members have close circles of their own), and rinse and repeat, then this falloff for any individual becomes comparatively much less significant for society as a whole.
Uh, there are minds. I think you and I both agree on this. Not really sure what the “what if no one existed” thought experiment is supposed to gesture at. I am very happy that I exist and that I experience things. I agree that if I didn’t exist then I wouldn’t care about things.
I think your method double-counts the utility. In the absurd case, if I care about you and you care about me, and I care about you caring about me caring about you… then two people who like each other enough have infinite value, unless the repeating sum converges (a toy version of this sum is sketched after this comment). How likely is it that the converging sum comes out exactly right, such that a selfish person should love all humans equally? Also, even if it were balanced, if two well-connected socialites in Latin America break up, then this would significantly change the moral calculus for millions of people!
Being real for a moment, I think my friends (degree 1) are happier if I am friends with their friends (degree 2), want us to be at least on good terms, and would be sad if I fought with them. But my friends don’t care that much how I feel about the friends of their friends (degree 3).
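A minimal numerical sketch of that “repeating sum”, under assumptions of my own choosing (five people arranged on a line, each weighting an adjacent person’s total welfare at a = 0.4): the recursion “I value my own welfare plus a fraction of your total valuation, which includes your valuation of me…” converges whenever the weights are small enough, but the effective concern it produces still falls off steeply with social distance, so convergence alone doesn’t get a selfish person to loving everyone equally.

```python
# Toy model (assumed numbers): 5 people on a line, each caring about an
# adjacent person's total welfare with weight a = 0.4.
import numpy as np

n, a = 5, 0.4
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = a      # symmetric caring between neighbours

# "I value my own welfare plus a times your total valuation, which in turn
# includes your valuation of me..." is V = u + A @ V, i.e. V = (I - A)^-1 u.
assert max(abs(np.linalg.eigvals(A))) < 1, "weights too large: the sum diverges"
effective = np.linalg.inv(np.eye(n) - A)

# Row 0: how much person 0 ends up weighting each person's direct welfare.
print(np.round(effective[0], 3))       # finite, but far from equal
```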
Apologies if I gave the impression that “a selfish person should love all humans equally”; while I’m sympathetic to arguments from e.g. Parfit’s book Reasons and Persons[1], I don’t go anywhere near that far. I was making a weaker and (I think) uncontroversial claim, something closer to Adam Smith’s invisible hand: that aggregating over every individual’s selfish focus on close family ties results, overall, in moral concerns becoming relatively more spread out, because the close circles of your close circle aren’t exactly identical to your own.
For example, that distances in time and space are similar. If you imagine people in the distant past having the choice of a better life in their own time, in exchange for there being no people in the far future, then you’d wish they cared about more than just their own present. A similar logic argues against applying a very high discount rate to your moral concern for beings that are very distant from you in e.g. space, close ties, etc.
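As a bit of illustrative arithmetic on the “very high discount rate” point (the 5% figure is an assumption of mine, not something from the thread): standard exponential discounting at 5% per year weights a life 200 years away at well under a ten-thousandth of a present life, which is the kind of steep falloff being argued against here.

```python
# Illustrative only: exponential discounting (1 + r)^-t with an assumed
# r = 5% per year shows how quickly temporally distant lives shrink to
# near-zero weight.
r = 0.05
for t in (0, 10, 50, 200):
    print(f"{t:>3} years away -> weight {(1 + r) ** -t:.6f}")
```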
One argument I’ve encountered is that sentient creatures are precisely those creatures that we can form cooperative agreements with. (Counter-argument: one might think that e.g. the relationship with a pet is also a cooperative one [perhaps more obviously if you train them to do something important, and you feed them], while also thinking that pets aren’t sentient.)
Another is that some people’s approach to the Prisoner’s Dilemma is to decide “Anyone who’s sufficiently similar to me can be expected to make the same choice as me, and it’s best for all of us if we cooperate, so I’ll cooperate when encountering them”; and some of them may figure that sentience alone is sufficient similarity.
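A small sketch of that decision rule (the payoff numbers and the similarity threshold are assumptions of mine, not anything from the comment): an agent that expects a sufficiently similar opponent to mirror its move only compares the two symmetric outcomes, and so cooperates; against everyone else, defection dominates.

```python
# Row player's payoffs in a standard one-shot Prisoner's Dilemma
# (T=5 > R=3 > P=1 > S=0); the specific numbers are an assumption.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def choose(similarity: float, threshold: float = 0.9) -> str:
    if similarity >= threshold:
        # A sufficiently similar opponent is expected to mirror my move,
        # so the only live options are (C, C) vs (D, D).
        return "C" if PAYOFF[("C", "C")] > PAYOFF[("D", "D")] else "D"
    # Against an independent opponent, D strictly dominates C.
    return "D"

print(choose(0.95), choose(0.2))   # -> C D
```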
We completely dominate dogs. Society treats them well because enough humans love dogs.
I do think that cooperation between people is the origin of religion and its moral rulesets, which create tiny little societies that can hunt stags.
We need better, more specific terms to break up the horrible mishmash of correlated-but-not-truly-identical ideas bound up in words like consciousness and sentient.
At the very least, let’s distinguish sentient vs sapient. All animals are sentient; only smart ones are sapient (maybe only humans, depending on how strict your definition is).
https://english.stackexchange.com/questions/594810/is-there-a-word-meaning-both-sentient-and-sapient
Some other terms needing disambiguation…
Current LLMs are very knowledgeable, somewhat but not very intelligent, somewhat creative, and lack coherence. They have some self-awareness, but it seems to lack some aspects that animals have around “feeling self state”; some researchers are working on adding these aspects to experimental architectures.
What a mess our words, based on observing ourselves, make of dividing reality at the joints when we try to analyze non-human entities like animals and AI!
Well presumably because they’re not equating “moral patienthood” with “object of my personal caring”.
Something can be a moral patient, whom you care about to the extent you’re compelled by moral claims, or whose rights you are deontologically prohibited from trampling on, without your caring about that being in particular.
You might make the claim that calling something a moral patient is the same as saying that you care (at least a little bit) about its wellbeing, but not everyone buys that claim.
“your close circle is not more conscious or more sentient than people far away, but you care about your close circle more anyways”
Or, more specifically, this is a non sequitur to my deontology, which holds regardless of whether I personally like or privately wish for the wellbeing of any particular entity.