This is mostly just arguing over semantics. Just replace “philosophical zombie” with whatever your preferred term is for a physical human who lacks any qualia.
If an argument is about semantics, "it's just semantics" is not a good response. That is:
An important part of normal human conversations is error correction. Suppose I say "three, as an even number, …"; the typical thing to do is to silently think "probably he meant odd instead of even; I will simply edit my memory of the sentence accordingly and continue to listen." But in technical contexts, this is often a mistake: if I write a proof that hinges on the evenness of three, that proof is wrong, and it's worth flagging the discrepancy rather than silently correcting it.
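To make the "flag, don't silently correct" point concrete, here is a minimal sketch in Lean 4 with Mathlib; the formalism is my illustration, not something the original argument depends on. A proof that hinges on the evenness of three simply cannot be completed, while the corrected premise goes through:

```lean
import Mathlib

-- A proof step that hinges on "three is even" fails to check; the
-- proof assistant flags the discrepancy instead of silently correcting it.
example : ¬ Even 3 := by
  rintro ⟨r, hr⟩  -- `Even 3` unfolds to `∃ r, (3 : ℕ) = r + r`
  omega           -- no natural number r satisfies 3 = r + r

-- The corrected premise goes through: `Odd 3` unfolds to `∃ m, 3 = 2 * m + 1`.
example : Odd 3 := ⟨1, rfl⟩
```

The point is not the formalism itself, but that a technical medium surfaces the error instead of letting charitable reading paper over it.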
Technical contexts also benefit from specificity of language. If I have a term used to refer to the belief that “three is even,” using that term to also refer to the belief that “three is odd” will be the source of no end of confusion. (“Threevenism is false!” “What do you mean? Of course Threevenism is true.”) So if there is a technical concept that specifically refers to X, using it to refer to Y will lead to the same sort of confusion; use a different word!
That is, on the object level: it is not at all sensible to think that philosophical zombies are useful as a concept; the idea is deeply confused. Separately, it seems highly possible that people vary in their internal experience, such that some people experience 'qualia' and other people don't. If the main reason we think people have qualia is that they say that they do, and Dennett says that he doesn't, then the standard argument doesn't go through for him. Whether that difference will end up being deep and meaningful or merely cosmetic seems unclear, and is more likely to be discerned through psychological study of multiple humans, in much the same way that the question of mental imagery was best attacked by a survey.
This variability suggests that qualia are a questionable foundation for other theories. For example, it seems to me that it would be unfortunate if someone thought it was fine to torture some humans and not others because "only the qualia of being tortured is bad"; torturing humans seems likely to be bad for reasons that don't depend on qualia.
Suppose you made a human-level AI. Suppose there was some doubt about whether it was genuinely conscious. Wouldn’t that amount to the question of whether or not it was a zombie?
Or it’s terminological confusion.
No. There are a few places this doubt could be localized, but it won't be in 'whether or not zombies are possible.' By definition we can't get physical evidence about whether or not it's a zombie (a zombie is in all physical respects identical to a non-zombie, except that non-zombies beam their experience to a universe causally downstream of us, where it becomes "what it is like to be a non-zombie," and zombies don't), in exactly the same way we can't get physical evidence about whether or not we're zombies. In trying to differentiate between different physical outcomes, only physicalist theories are useful.
The doubt will likely be localized in ‘what it means to be conscious’ or ‘how to measure whether or not something is conscious’ or ‘how to manufacture consciousness’, where one hopes that answers to one question inform the others.
Perhaps instead the doubt is localized in ‘what decisions are motivated by facts about consciousness.’ If there is ‘something it’s like to be Alexa,’ what does that mean about the behavior of Amazon or its customers? In a similar way, it seems highly likely that the inner lives of non-human animals parallel ours in specific ways (and don’t in others), and even if we agree exactly on what their inner lives are like we might disagree on what that implies about how humans should treat them.