Unknown, I’m surprised at you. The AI could easily say “I know that …” while neither being nor claiming to be conscious. When a human speaks in the first person, we understand them to be referring to a conscious self, but an unconscious AI could very well use a similar pattern of words merely as a user-friendly (Friendly?) convenience of communication, like Clippy. (Interestingly, the linked article divulges that Clippy is apparently a Bayesian. The reader is invited to make up her own “paperclip maximizer” joke.)
Furthermore, I don’t think the anti-zombie argument, properly understood, really says that no unconscious entity could claim to be conscious in conversation. I thought the conclusion was that any entity that is physically identical (or identical enough, per the GAZP) to a conscious being, is also conscious. Maybe a really good unconscious chatbot could pass a Turing test, but it would necessarily have a different internal structure from a conscious being: presumably given a sufficiently advanced cognitive science, we could look at its inner workings and say whether it’s conscious.
Hell, I can write a Python script in five minutes that says it knows all those things. In a few weeks, I could write one that solves Peano arithmetic and generates statements like that ad infinitum. But will it be conscious? Not a chance.
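To make the point concrete, here is a minimal sketch of the kind of five-minute script I mean: it emits an unbounded stream of true arithmetic statements phrased as first-person knowledge claims, with nothing remotely conscious behind them. (The successor-arithmetic statements stand in for the fuller Peano-derived output; the function name is just illustrative.)

```python
def knowledge_claims():
    """Yield an unbounded stream of trivially true arithmetic
    statements, each phrased as a first-person knowledge claim."""
    n = 0
    while True:
        # Each claim is true, stated in the first person, and
        # generated by a process with no inner life whatsoever.
        yield f"I know that {n} + 1 = {n + 1}."
        n += 1

claims = knowledge_claims()
for _ in range(3):
    print(next(claims))
# Prints:
# I know that 0 + 1 = 1.
# I know that 1 + 1 = 2.
# I know that 2 + 1 = 3.
```

The script will happily assert its “knowledge” forever; the anti-zombie argument, as I read it, is untroubled by this, because the script’s internal structure is nothing like that of a conscious being.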