Bizarre. In the absence of a reply from Eliezer himself clarifying things, I am left to understand he thinks that some portion of humans otherwise possessing the structural and anatomical prerequisites for sensation don’t experience anything even when all their sense organs are working fine, and that animals in general are basically just meat-automata with no inner life at all. Even when they’re communicating about those inner states, have the same structural correlates of various sensations we’d expect to see, and react in ways that sure look like expressions of sensation or emotion (even if you sometimes need to be familiar with their particular body language).
That feels a lot more like a strawman than anything, because it’s just so obviously bollocks. If I step on my cat’s tail by mistake, she doesn’t yowl and run from me because “Nociceptor activation threshold met; initiate yowl-and-run subroutine.” She does it because it’s painful and it startled her. I know there are people who honestly believe something like that about nonhuman life across the board, but I hadn’t gotten the impression Eliezer was one.
Someone clear this up for me?
Sentient vs. Sapient is one of the most common word confusions in the English language. If someone says “sentient,” but the context appears to suggest “sapient,” they probably mean sapient.
The bit that’s bothering me is that “sapient” is a term of art—it’s science fiction shorthand employed with a purpose (it denotes personhood for the reader, in a field where blatantly nonhuman but unambiguously personlike entities are common). It divides the field of hypothetical entities into two neat, clean categories: people, no matter what their substrate, appearance, anatomy or drives, and everything else, from animals of every sort to plants and grains of sand.
It just seems like a weird way of dividing up the world, and more of a cultural artefact than anything; a marker on the map which corresponds to nothing in the territory.
People often use ‘sentient’ to mean ‘sapient’, and it may be that Eliezer intends the latter. It’s at least pretty plausible that animals and very young infants are not sapient, namely not capable of judgement, and that this capacity is what would endow one with a certain autonomy.
“Soul”, gotcha. Binary personhood marker. Reified concept not sufficiently unpacked. Okie.
That’s a rather uncharitable misinterpretation of what hen wrote, caused by anger and frustration, I’m guessing.
No, just the expressed befuddlement.
I respectfully disagree: sapience is an acquired, subjective quality, and therefore trivial to disregard. Sentience, on the other hand, is orders of magnitude more complex. I was going to say it’s “inherent” to the species, but is it? And this is supposed to be “the easy problem”; go figure.
1) “Nociceptor activation threshold met; initiate yowl-and-run subroutine.”
2) She does it because it’s painful and it startled her.
What’s the difference between 1 and 2?
1 presumes that minimalist descriptions of superficially-visible output are all you need to reconstruct the actual drivers behind the behavior. 2 presumes that the evolutionarily-shared neural architecture and its basic components of perception, cognition and so forth are not separated by a barrier of magical reality fluid.
Ah. If you’re saying that 1) implies lesser internal machinery than 2), and that the internal machinery (cognition and so forth) is what’s important, then I agree.
The problem, I think, is just that they both sound to me like perfectly reasonable (if vague) descriptions of complex, sentient human pain. It seemed like you were saying nociceptors and subroutines were incapable of producing pain and startlement.
1 sounds to me like an attempt to capture output in the form of a flowchart. It’s like trying to describe the flocking behavior of birds by reference to the Boids simulation—and insisting not that there are similar principles at work in how the birds go about solving the problem of flocking, but that birds literally run an instance of Boids in their heads and that’s all there is to their flocking behavior.
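(For anyone who hasn’t met the reference: Boids steers each simulated bird with just three local rules: separation, alignment, and cohesion. Below is a minimal illustrative sketch; the weights, neighbor radius, and time step are arbitrary values chosen for illustration, not taken from any particular implementation. The point is how little machinery the output-level model itself contains.)

```python
import math

# Minimal sketch of the three classic Boids steering rules. All weights,
# the neighbor radius, and the time step are arbitrary illustrative values.
def boids_step(boids, radius=5.0, w_sep=1.5, w_ali=1.0, w_coh=1.0, dt=0.1):
    """boids: list of dicts with keys 'x', 'y', 'vx', 'vy'; updated in place."""
    accelerations = []
    for b in boids:
        near = [o for o in boids if o is not b
                and math.hypot(o['x'] - b['x'], o['y'] - b['y']) < radius]
        if not near:
            accelerations.append((0.0, 0.0))
            continue
        n = len(near)
        # Cohesion: steer toward the local center of mass.
        coh = (sum(o['x'] for o in near) / n - b['x'],
               sum(o['y'] for o in near) / n - b['y'])
        # Alignment: steer toward the average neighbor velocity.
        ali = (sum(o['vx'] for o in near) / n - b['vx'],
               sum(o['vy'] for o in near) / n - b['vy'])
        # Separation: steer away from nearby neighbors.
        sep = (sum(b['x'] - o['x'] for o in near) / n,
               sum(b['y'] - o['y'] for o in near) / n)
        accelerations.append((w_coh * coh[0] + w_ali * ali[0] + w_sep * sep[0],
                              w_coh * coh[1] + w_ali * ali[1] + w_sep * sep[1]))
    for b, (ax, ay) in zip(boids, accelerations):
        b['vx'] += ax * dt
        b['vy'] += ay * dt
        b['x'] += b['vx'] * dt
        b['y'] += b['vy'] * dt
```

That is essentially the whole model; the disagreement above is about whether something like only this is what’s running inside an actual bird.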
I agree that “Eliezer believes animals and nonpathological infants are just meat-automata who don’t actually possess the mental states they’re communicating about” is a strawman. I’m not really sure what remains to be cleared up. Can you clarify the question?
Basically what I asked Eliezer: What sense of the word “sentient” is he using, such that babies plausibly don’t qualify? My de facto read of the term and a little digging around Google show two basic senses:
-Possessing sensory experiences (I’m pretty sure insects and even worms do that)
-SF/F writer’s term for “assume this fictional entity is a person” (akin to “sapient”; it’s a binary personhood marker, or a secularized soul—it tells the reader to react accordingly to this character’s experiences and behavior)
The latter, applied to the real world, sounds rather more like “soul” than anything coherent and obvious. The former, denied in babies, sounds bizarre and obviously untrue. So...I’m missing something, and I’d like to know what it is.
Maybe the best way to approach this question is backwards. I assume you believe that people (at least) have some moral worth such that they ought not be owned, whimsically destroyed, etc. I also assume you believe that stones (at least) have no moral worth and can be owned, whimsically destroyed, etc. without any immediate moral consequences. So 1) tell me where you think the line is (even if it’s a very fuzzy, circumstantial one) and 2) tell me in virtue of what something has or lacks such moral worth.
...or 3) toss out my questions and tell me how you think it goes on your own terms.
Essentially. I don’t consider it a fact-about-the-world per se, but that captures my alief pretty well.
Eh. Actually I have some squick about cavalier destruction or disruption of inanimate objects, but they don’t register as the same thing. So we’ll go with that.
To what extent does an entity respond dynamically to both present and historical conditions, in terms of impacts on its health, well-being, emotional and perceptual experiences, social interactions and so on? To what extent is it capable of experiencing pain and suffering? To what extent does modifying my behavior in response to these things constitute a negative burden on myself or others? To what extent do present circumstances bear on all of those things?
Those aren’t so much terms in an equation as independent axes of variance. There are probably some I haven’t listed. They define the shape of the space; the actual answer to your question is lurking somewhere in there.
Thanks, that’s helpful. Given what you’ve said, I doubt you and EY would disagree on much. EY says in his metaethics sequence that moral facts and categories like ‘moral worth’ or ‘autonomy’ are derived properties. In other words, they don’t refer to anything fundamental about the world, but supervene on some complex set of fundamental facts. Given that that’s his view, I think he was just using ‘sentience’ as a shorthand for something like what you’ve written: note that many of the considerations you describe are importantly related to a capacity for complex experiences.
Except I’ve interacted with bugs in ways that satisfied that criterion (and that did parse out as morally-good), so clearly the devil’s in the details. If Eliezer suspects young children may reliably not qualify, and I suspect that insects may at least occasionally qualify, we’re clearly drawing very different lines and have very different underlying assumptions about reality.
What makes you think there’s a line? I care more about killing (or torturing) a dog than a stone, but less so than a human. Pulling the wings off flies provokes a similar, if weaker, reaction. A continuum might complicate the math slightly, but …
“Self-aware” is one soul-free interpretation of sentient/sapient, often experimentally measured by the mirror test. By that metric, humans are not sentient until well into the second year, and most species we would consider non-sentient fail it. Of course, treating non-self-aware human babies as non-sentient animals is quite problematic. Peter Singer is one of the few people brave enough to tread into this topic.
The mirror test is interesting for sure, especially in a cross-species context. However, I’m far from convinced about the straightforward reading of “the expected response indicates the subject has an internal map of oneself.” Since you read the Wikipedia article down that far, you could also scroll down to the “Criticisms” section and see a variety of objections to that.
Moreover, when asked to choose between the interpretation that the test isn’t sufficient for its stated purpose and the interpretation that six-year-olds in Fiji aren’t self-aware, I rather suspect the former is more likely.
Besides all that, even if we assume self-awareness is the thing you seem to be making of it, I’m not clear how that would draw the moral-worth line so neatly between humans (or some humans) and literally everything else. From a consequentialist perspective, if I assume that dogs or rats can experience pain and suffering, it seems weird to exclude them from my utility function on the basis that they don’t jump through that particular (ambiguous, methodologically-questionable) experimental hoop.
Oh, I agree that the mirror test is quite imperfect. The practical issue is how to draw a Schelling fence somewhere sensible. Clearly mosquitoes can be treated as non-sentient; clearly most humans cannot be. Treating human fetuses and some mammals as non-sentient is rather controversial. Just “experiencing pain” is probably too wide a net for moral worth, as nociceptors are present in most animals, including the aforementioned mosquito. Suffering is probably a more restrictive term, but I am not aware of a measurable definition of it. It is also probably sometimes too narrow, as most of us would find it immoral to harm people who do not experience suffering due to a mental or a physical issue, like pain insensitivity or pain asymbolia.
Disagree that it’s clear. I’ve had interactions with insects that I could only parse as “interaction between two sentient beings, although there’s a wide gulf of expectation and sensation and emotion and so forth which pushes it right up to the edges of that category.” I’ve not had many interactions with mosquitoes beyond “You try to suck my blood because you’re hungry and I’m a warm, CO2-breathing blood source in your vicinity”, but I assume that there’s something it feels like to be a mosquito, that it has a little mosquito mind that might not be very flexible or impressive when weighed against a human one, but it’s there, it’s what the mosquito uses to navigate its environment and organize its behavior intelligibly, and all of its searching for mates and blood and a nice place to lay eggs is felt as a drive… that in short it’s not just a tiny little bloodsucking p-zombie. That doesn’t mean I accord it much moral weight either—I won’t shed any tears over it if I should smash it while reflexively brushing it aside, even though I’m aware arthropods have nociception and, complex capacity for emotional suffering or not, they still feel pain and I prefer not to inflict that needlessly (or without a safeword).
But I couldn’t agree it isn’t sentient, that it’s just squishy clockwork.
It seems to me that the problem you’re really trying to solve is how to sort the world into neat piles marked “okay to inflict my desires on regardless of consequences” and “not okay to do that to.” Which is probably me just stating the obvious, but the reason I call attention to it is I literally don’t get that. The universe just is not so tidy; personhood or whatever word you wish to use is not just one thing, and the things that make it up seem to behave such that the question is less like “Is this a car or not?” and more like “Is this car worth 50,000 dollars, to me, at this time?”
That is ever the problem—you can’t even technically demonstrate without lots of inference that your best friend or your mother really suffer. This is why I don’t like drawing binary boundaries on that basis.
Though strangely enough, plenty of LWers seem to consider many disorders with similarly pervasive consequences for experience to result in “lives barely worth living...”
My (but not necessarily yours) concern with all this is a version of the repugnant conclusion: if you assign some moral worth to mosquitoes or bacteria, and you allow for non-asymptotic accumulation based on the number of specimens, then there is some number of bacteria whose moral worth is at least that of one human. If you don’t allow for accumulation, then there is no difference between killing one mosquito and 3^^^3 of them. If you impose asymptotic accumulation (no number of mosquitoes has moral worth equal to that of one human, or one cat), then the goalpost simply shifts to a different lifeform (how many cats are worth a human?). Imposing an artificial Schelling fence at least provides some solution, though far from universal. Thus I’m OK with ignoring the suffering or moral worth of some lifeforms. I would not approve of needlessly torturing them, but mostly because of the anguish it causes humans like you.
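(To make the three accumulation schemes concrete, here is a toy numerical sketch. Every number in it is invented purely for illustration; nothing here is a claim about anyone’s actual moral weights.)

```python
import math

W_HUMAN = 1.0        # made-up reference weight for one human
W_MOSQUITO = 1e-9    # made-up per-mosquito weight

def linear_worth(n):
    # Non-asymptotic accumulation: total worth grows without bound, so some
    # finite number of mosquitoes eventually outweighs one human.
    return n * W_MOSQUITO

def no_accumulation_worth(n):
    # No accumulation: killing one mosquito and killing 3^^^3 of them
    # register identically.
    return W_MOSQUITO if n > 0 else 0.0

def asymptotic_worth(n, cap=0.01):
    # Asymptotic accumulation: total worth saturates below `cap`, so no
    # number of mosquitoes ever equals one human; the same question then
    # just recurs for whichever lifeform is assigned a higher cap.
    return cap * (1.0 - math.exp(-n * W_MOSQUITO / cap))

# Under linear accumulation, the break-even count is simply:
break_even = math.ceil(W_HUMAN / W_MOSQUITO)  # about 1e9, on these made-up numbers
```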
You seem to suggest that there is more than one dimension to moral worth, but, just as with a utility function or with deontological ethics, eventually it comes down to making a decision, and all your dimensions converge into one.
Sure, that registers—if there were a thriving microbial ecosystem on Mars, I’d consider it immoral to wipe it out utterly simply for the sake of one human being. Though I think my function-per-individual is more complicated than that; wiping it out because that one human is a hypochondriac is more-wrong in my perception than wiping it out because, let’s say, that one human is an astronaut stranded in some sort of weird microbial mat, and the only way to release them before they die is to let loose an earthly extremophile which will, as a consequence, propagate across Mars and destroy all remaining holdouts of the local biosphere. The latter is very much more of a tossup, such that I don’t view other humans going ‘Duh, save the human!’ as exactly committing an atrocity or compounding the wrong. Sometimes reality just presents you with situations that are not ideal, or where there is no good choice. No-win situations happen, unsatisfying resolutions and all. That doesn’t mean do nothing; it just means that trying to set up my ethical and moral framework to make such situations impossible feels silly.
To be honest, that’s all this debate really seems to be to me—where do we set that fence? And I’m convinced that the decision point is more cultural and personal than anything, such that the resulting discussion does not usefully generalize.
And once I do, even if my decision was as rational as it can be under the circumstances and I’ve identified a set of priorities most folks would applaud in principle, there’s still the potential for regrets and no-win situations. While a moral system that genuinely solved that problem would please me greatly, I see no sign that you’ve stumbled upon it here.
Why stop there? Humans have also had interactions with lightning that they could only parse as interactions between two sentient beings!
Are you claiming that insects and worms possess functioning sense-organs, or that they possess subjective experience of the resulting sense-data? I find the latter somewhat unlikely wrt insects and worms. Regarding babies, it doesn’t seem “obviously untrue” to me that babies lack subjective experience. Though, nor does it seem obviously true.
I’m trying to figure out why you think there’s a difference between the two, at least when dealing with anything possessing a nervous system.
A nervous system is just a lump of matter, the same as any other. Another object with functioning sense-organs is my laptop, yet I wouldn’t say my laptop possesses subjective experience.
So you will have no objection to me replacing your brain with an intricately-carved wooden replica, then?
How would you know if it did?
If you don’t think a nervous system is relevant there, I’m curious to know what you think is behind you having subjective experiences, and if you believe in p-zombies. Your laptop doesn’t organize that sense input and integrate it into a complex system. But even simple organisms do that.
Your response suggests you do understand the distinction between possessing sensory information and subjective experience of the same. As such, I suppose my job here is complete. But nevertheless:
The important thing is not the composition of an object, but its functionality. An intricately-carved wooden machine that correctly carried out the functionality of my brain would be a fine replacement, even if it lacks the élan vital neural matter supposedly has.
My laptop doesn’t have subjective experience. You do. An elephant most likely does. What about Watson? The robots in those robot soccer competitions? Or BigDog?
My opinion on zombies is the standard LW one.
How would you know if it did?
Ah! I understand, now. Thanks for clarifying.
I mostly understand “sentient,” as most people use the term, in the second sense. Eliezer in particular seems to use “sentient” and “person” pretty much interchangeably here, for example, without really defining either, so I understand him to use the word similarly.
Were I inclined to turn this assertion into a question, it would probably be something like “what properties does a typical adult have that a typical 1-year-old lacks which makes it more OK to kill the latter than the former?”
Is that the question you’re asking?
More or less, yeah.
I realize you seem to have deleted your account, but: this.