“Self-aware” is one soul-free interpretation of sentient/sapient, often experimentally measured by the mirror test. By that metric, humans are not sentient until well into the second year, and most species we would consider non-sentient fail it. Of course, treating non-self-aware human babies as non-sentient animals is quite problematic. Peter Singer is one of the few people brave enough to tread into this topic.
The mirror test is interesting for sure, especially in a cross-species context. However, I’m far from convinced about the straightforward reading of “the expected response indicates the subject has an internal map of oneself.” Since you read the Wikipedia article down that far, you could also scroll down to the “Criticisms” section and see a variety of objections to that.
Moreover, when asked to choose between the interpretation that the test isn’t sufficient for its stated purpose and the interpretation that six-year-olds in Fiji aren’t self-aware, I rather suspect the former is more likely.
Besides all that, even if we assume self-awareness is the thing you seem to be making of it, I’m not clear how that would draw the moral-worth line so neatly between humans (or some humans) and literally everything else. From a consequentialist perspective, if I assume that dogs or rats can experience pain and suffering, it seems weird to exclude them from my utility function on the basis that they don’t jump through that particular (ambiguous, methodologically questionable) experimental hoop.
Oh, I agree that the mirror test is quite imperfect. The practical issue is how to draw a Schelling fence somewhere sensible. Clearly mosquitoes can be treated as non-sentient, clearly most humans cannot be. Treating human fetuses and some mammals as non-sentient is rather controversial. Just “experiencing pain” is probably too wide a net for moral worth, as nociceptors are present in most animals, including the aforementioned mosquito. Suffering is probably a more restrictive term, but I am not aware of a measurable definition of it. It is also probably sometimes too narrow, as most of us would find it immoral to harm people who do not experience suffering due to a mental or a physical issue, like pain insensitivity or asymbolia.
Clearly mosquitoes can be treated as non-sentient,
Disagree that it’s clear. I’ve had interactions with insects that I could only parse as “interaction between two sentient beings, although there’s a wide gulf of expectation and sensation and emotion and so forth which pushes it right up to the edges of that category.” I’ve not had many interactions with mosquitoes beyond “You try to suck my blood because you’re hungry and I’m a warm, CO2-breathing blood source in your vicinity”, but I assume that there’s something it feels like to be a mosquito, that it has a little mosquito mind that might not be very flexible or impressive when weighed against a human one, but it’s there, it’s what the mosquito uses to navigate its environment and organize its behavior intelligibly, and all of its searching for mates and blood and a nice place to lay eggs is felt as a drive… that in short it’s not just a tiny little bloodsucking p-zombie. That doesn’t mean I accord it much moral weight either; I won’t shed any tears over it if I should smash it while reflexively brushing it aside, even though I’m aware arthropods have nociception and, complex capacity for emotional suffering or not, they still feel pain and I prefer not to inflict that needlessly (or without a safeword).
But I couldn’t agree it isn’t sentient, that it’s just squishy clockwork.
Just “experiencing pain” is probably too wide a net for moral worth, as nociceptors are present in most animals, including the aforementioned mosquito.
It seems to me that the problem you’re really trying to solve is how to sort the world into neat piles marked “okay to inflict my desires on regardless of consequences” and “not okay to do that to.” Which is probably me just stating the obvious, but the reason I call attention to it is I literally don’t get that. The universe just is not so tidy; personhood or whatever word you wish to use is not just one thing, and the things that make it up seem to behave such that the question is less like “Is this a car or not?” and more like “Is this car worth 50,000 dollars, to me, at this time?”
Suffering is probably a more restrictive term, but I am not aware of a measurable definition of it.
That is ever the problem: you can’t even technically demonstrate, without lots of inference, that your best friend or your mother really suffers. This is why I don’t like drawing binary boundaries on that basis.
It is also probably sometimes too narrow, as most of us would find it immoral to harm people who do not experience suffering due to a mental or a physical issue, like pain insensitivity or asymbolia.
Though strangely enough, plenty of LWers seem to consider many disorders with similarly pervasive consequences for experience to result in “lives barely worth living...”
My concern with all this (though not necessarily yours) is a version of the repugnant conclusion: if you assign some moral worth to mosquitoes or bacteria, and you allow for non-asymptotic accumulation based on the number of specimens, then there is some number of bacteria whose moral worth is at least that of one human. If you don’t allow for accumulation, then there is no difference between killing one mosquito and 3^^^3 of them. If you impose asymptotic accumulation (no number of mosquitoes has moral worth equal to that of one human, or one cat), then the goalpost simply shifts to a different lifeform (how many cats are worth a human?). Imposing an artificial Schelling fence at least provides some solution, though far from universal. Thus I’m OK with ignoring the suffering or moral worth of some lifeforms. I would not approve of needlessly torturing them, but mostly because of the anguish it causes humans like you.
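To spell out the accumulation point a bit more formally (a rough sketch in notation of my own choosing: $w$ is the worth of one organism, $W(n)$ the combined worth of $n$ of them, $W_h$ the worth of one human):

$$
\begin{aligned}
\text{linear accumulation:} \quad & W(n) = n\,w, \text{ so } W(n) \ge W_h \text{ whenever } n \ge W_h / w \text{ (for any } w > 0\text{)};\\
\text{no accumulation:} \quad & W(n) = w \text{ for all } n \ge 1, \text{ so one mosquito and } 3\uparrow\uparrow\uparrow 3 \text{ of them weigh the same};\\
\text{asymptotic accumulation:} \quad & W(n) = W_{\max}\bigl(1 - e^{-kn}\bigr) \text{ with } W_{\max} < W_h, \text{ so } W(n) < W_h \text{ for every } n.
\end{aligned}
$$

The exponential curve in the last line is just one convenient bounded form; any increasing $W(n)$ that stays below some $W_{\max} < W_h$ behaves the same way, and the question of how the mosquito asymptote compares to the cat asymptote is exactly where the goalposts shift.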
You seem to suggest that there is more than one dimension to moral worth, but, just like with a utility function or with deontological ethics, eventually it comes down to making a decision, and all your dimensions converge into one.
My concern with all this (though not necessarily yours) is a version of the repugnant conclusion: if you assign some moral worth to mosquitoes or bacteria, and you allow for non-asymptotic accumulation based on the number of specimens, then there is some number of bacteria whose moral worth is at least that of one human.
Sure, that registers: if there were a thriving microbial ecosystem on Mars, I’d consider it immoral to wipe it out utterly simply for the sake of one human being. Though I think my function-per-individual is more complicated than that; wiping it out because that one human is a hypochondriac is more-wrong in my perception than wiping it out because, let’s say, that one human is an astronaut stranded in some sort of weird microbial mat, and the only way to release them before they die is to let loose an earthly extremophile which will, as a consequence, propagate across Mars and destroy all remaining holdouts of the local biosphere. That latter case is much more of a tossup, such that I don’t view other humans going ‘Duh, save the human!’ as exactly committing an atrocity or compounding the wrong. Sometimes reality just presents you with situations that are not ideal, or where there is no good choice. No-win situations happen, unsatisfying resolutions and all. That doesn’t mean do nothing; it just means that trying to set up my ethical and moral framework to make such situations impossible feels silly.
Imposing an artificial Schelling fence at least provides some solution, though far from universal.
To be honest, that’s all this debate really seems to be to me—where do we set that fence? And I’m convinced that the decision point is more cultural and personal than anything, such that the resulting discussion does not usefully generalize.
You seem to suggest that there is more than one dimension to moral worth, but, just like with a utility function or with deontological ethics, eventually it comes down to making a decision, and all your dimensions converge into one.
And once I do, even if my decision is as rational as it can be under the circumstances and I’ve identified a set of priorities most folks would applaud in principle, there’s still the potential for regrets and no-win situations. While a moral system that genuinely solved that problem would please me greatly, I see no sign that you’ve stumbled upon it here.
I’ve had interactions with insects that I could only parse as “interaction between two sentient beings, although there’s a wide gulf of expectation and sensation and emotion and so forth which pushes it right up to the edges of that category.”
Why stop there? Humans have also had interactions with lightning that they could only parse as interactions between two sentient beings!