“After all, the only thing I know that the AI has no way of knowing, is that I am a conscious being, and not a p-zombie or an actor from outside the simulation. This gives me some evidence, that the AI can’t access, that we are not exactly in the type of simulation I propose building, as I probably wouldn’t create conscious humans.”
Assuming for the sake of argument that p-zombies could exist, you do not have special access to the knowledge that you are truly conscious and not a p-zombie.
(As a human convinced I’m currently experiencing consciousness, I agree this claim intuitively seems absurd.)
Imagine a generally intelligent, agentic program which can only interact with and learn facts about the physical world by making calls to a limited, high-level interface or by reading and writing to a small scratchpad. It has no way to directly read its own source code.
The program wishes to learn some fact about the physical server rack it is instantiated on. It knows the rack has been painted either red or blue.
Conveniently, the interface it accesses has the function get_rack_color(). The program records to its memory that every time it runs this function, it has received “blue”.
It postulates the existence of programs similar to itself that have been physically instantiated on red server racks but consistently receive incorrect color information when they attempt to check.
Can the program confirm the color of its server rack?
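A minimal sketch of this setup, purely for illustration (the class names and overall structure are my own stand-ins; only get_rack_color() comes from the thought experiment as described):

```python
# Illustrative sketch: the true color is fixed at instantiation, but the program
# can only observe it through an interface that reports "blue" no matter what
# (by stipulation, red-rack programs receive incorrect information when they check).

class RackInterface:
    def __init__(self, true_color: str):
        self._true_color = true_color  # physical fact, inaccessible to the program

    def get_rack_color(self) -> str:
        return "blue"  # reported color is the same whether the rack is red or blue


class AgenticProgram:
    def __init__(self, interface: RackInterface):
        self.interface = interface
        self.scratchpad = []  # the program's only memory

    def check_color(self) -> str:
        observed = self.interface.get_rack_color()
        self.scratchpad.append(observed)
        return observed


# Two instantiations with different true colors accumulate identical evidence.
blue_program = AgenticProgram(RackInterface("blue"))
red_program = AgenticProgram(RackInterface("red"))
for _ in range(10):
    blue_program.check_color()
    red_program.check_color()

print(blue_program.scratchpad == red_program.scratchpad)  # True
```

However many times it checks, the program’s recorded observations are the same in both cases, so nothing in its memory distinguishes a blue rack from a red one.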
You are a meat-computer with limited access to your internals, but every time you try to determine whether you are conscious, you conclude that you feel you are. You believe it is possible for variant meat-computers to exist who are not conscious, but who always conclude they are when attempting to check.
You cannot conclude which type of meat-computer you are.
You have no special access to the knowledge that you aren’t a p-zombie, although it feels like you do.
Strongly agree with this. How I frame the issue: If people want to say that they identify as an “experiencer” who is necessarily conscious, and don’t identify with any nonconscious instances of their cognition, then they’re free to do that from an egoistic perspective. But from an impartial perspective, what matters is how your cognition influences the world. Your cognition has no direct access to information about whether it’s conscious such that it could condition on this and give different outputs when instantiated as conscious vs. nonconscious.
Note that in the case where some simulator deliberately creates a behavioural replica of a (possibly nonexistent) conscious agent, consciousness does enter into the chain of logical causality for why the behavioural replica says things about its conscious experience. Specifically, the role it plays is to explain what sort of behaviour the simulator is motivated to replicate. So many (or even all) non-counterfactual instances of your cognition being nonconscious doesn’t seem to violate any Follow the Improbability heuristic.
This is incorrect—in a p-zombie, the information processing isn’t accompanied by any first-person experience. So if p-zombies are possible, we both do the information processing, but only I am conscious. The p-zombie doesn’t believe it’s conscious, it only acts that way.
You correctly believe that having the correct information processing always goes hand in hand with believing in consciousness, but that’s because p-zombies are impossible. If they were possible, this wouldn’t be the case, and we would have special access to the truth that p-zombies lack.
I am concerned our disagreement here is primarily semantic or based on a simple misunderstanding of each other’s position. I hope to better understand your objection.
“The p-zombie doesn’t believe it’s conscious, it only acts that way.”
One of us is mistaken and using a non-traditional definition of p-zombie, or we have different definitions of “belief”.
My understanding is that p-zombies are physically identical to regular humans. Their brains contain the same physical patterns that encode their model of the world. That seems, to me, a sufficient physical condition for having identical beliefs.
If your p-zombies are only “acting” like they’re conscious, but do not believe it, then they are not physically identical to humans. The existence of p-zombies, as you have described them, wouldn’t refute physicalism.
This resource indicates that the way you understand the term p-zombie may be mistaken: https://plato.stanford.edu/entries/zombies/
“but that’s because p-zombies are impossible”
The main post that I responded to, specifically the section that I directly quoted, assumes it is possible for p-zombies to exist.
My comment begins “Assuming for the sake of argument that p-zombies could exist” but this is distinct from a claim that p-zombies actually exist.
“If they were possible, this wouldn’t be the case, and we would have special access to the truth that p-zombies lack.”
I do not find this convincing, because it simply asserts that my conclusion is incorrect without engaging with the arguments I made to reach that conclusion.
I look forward to continuing this discussion.
Either we define “belief” as a computational state encoding a model of the world containing some specific data, or we define “belief” as a first-person mental state.
Under the first definition, both we and p-zombies believe we have consciousness. So we can’t use our belief that we have consciousness to know we’re not p-zombies.
Under the second definition, only we believe we have consciousness; p-zombies have no beliefs at all. So under the second definition, we can use our belief that we have consciousness to know we’re not p-zombies.
Since we have a belief in the existence of our consciousness according to both definitions, but p-zombies have one only according to the first definition, we can know we’re not p-zombies.
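One way to see the shape of this disagreement in code (a rough illustrative sketch only; the names and the toy representation of a “world model” are my own, and the code does not settle the conclusion either way):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PhysicalState:
    # The brain's physical patterns encoding a model of the world; by the
    # standard definition, a human and its p-zombie twin share this exactly.
    world_model: frozenset

def believes_conscious_def1(state: PhysicalState) -> bool:
    # Definition 1: "belief" is a computational fact about the encoded world model.
    return "I am conscious" in state.world_model

def believes_conscious_def2(state: PhysicalState, has_first_person_experience: bool) -> bool:
    # Definition 2: "belief" is a first-person mental state. Whether it obtains
    # depends on a further fact that is not a function of the physical state alone.
    return has_first_person_experience and believes_conscious_def1(state)

shared_state = PhysicalState(world_model=frozenset({"I am conscious"}))

# Under definition 1, the check returns the same answer for human and p-zombie:
print(believes_conscious_def1(shared_state))  # True for both

# Under definition 2, the answers differ, but only because of the extra argument,
# which is exactly the fact whose accessibility is in dispute.
print(believes_conscious_def2(shared_state, has_first_person_experience=True))   # the human
print(believes_conscious_def2(shared_state, has_first_person_experience=False))  # the p-zombie
```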