That is, on the object level: it is not at all sensible to think that philosophical zombies are useful as a concept; the idea is deeply confused.
Separately, it seems highly possible that people vary in their internal experience, such that some people experience ‘qualia’ and other people don’t. If the main reason we think people have qualia is that they say that they do, and Dennett says that he doesn’t, then the standard argument doesn’t go through for him.

Suppose you made a human-level AI. Suppose there was some doubt about whether it was genuinely conscious. Wouldn’t that amount to the question of whether or not it was a zombie?
No. There are a few places this doubt could be localized, but it won’t be in ‘whether or not zombies are possible.’ By definition we can’t get physical evidence about whether or not it’s a zombie (a zombie is identical to a non-zombie in all physical respects, except that non-zombies beam their experience to a universe causally downstream of us, where it becomes “what it is like to be a non-zombie,” and zombies don’t), in exactly the same way we can’t get physical evidence about whether or not we’re zombies. In trying to differentiate between different physical outcomes, only physicalist theories are useful.
The doubt will likely be localized in ‘what it means to be conscious’ or ‘how to measure whether or not something is conscious’ or ‘how to manufacture consciousness’, where one hopes that answers to one question inform the others.
Perhaps instead the doubt is localized in ‘what decisions are motivated by facts about consciousness.’ If there is ‘something it’s like to be Alexa,’ what does that mean about the behavior of Amazon or its customers? In a similar way, it seems highly likely that the inner lives of non-human animals parallel ours in specific ways (and don’t in others), and even if we agree exactly on what their inner lives are like we might disagree on what that implies about how humans should treat them.
Or perhaps the doubt is just terminological confusion.