Well, the externally visible outcome is [preserved]
Yes, I’m specifically focused on the behaviour of an honest self-report
What does “encapsulates” mean? Are you saying that fine-grained information gets lost? Note that the basic fact of running on the metal is not lost.
Fine-grained information becomes irrelevant implementation detail. If the neuron still fires, or doesn’t, smaller noise doesn’t matter. The only reason I point this out is specifically as it applies to the behaviour of a self-report (which we will circle back to in a moment). If something doesn’t materially affect the output powerfully enough to alter that final outcome, then it is not responsible for outward behaviour.
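To make the threshold point concrete, here is a minimal sketch in Python. The threshold and noise values are arbitrary assumptions for illustration, not claims about real neurons; the point is only that sub-threshold perturbations leave the downstream-visible output unchanged:

```python
# Illustrative sketch: a binary threshold unit. Noise that does not push
# the summed input across the threshold leaves the output, and hence
# everything downstream of it, unchanged. THRESHOLD and the input values
# are hypothetical numbers chosen for illustration.

THRESHOLD = 1.0  # hypothetical firing threshold

def fires(summed_input: float) -> bool:
    """The only fact visible downstream: did the neuron fire or not?"""
    return summed_input >= THRESHOLD

base_input = 1.3   # well above threshold
tiny_noise = 0.05  # fine-grained perturbation

assert fires(base_input) == fires(base_input + tiny_noise)  # same outcome
assert fires(base_input) == fires(base_input - tiny_noise)  # same outcome

# Only a perturbation large enough to cross the threshold changes the
# output -- and only then can it matter to outward behaviour.
assert fires(base_input) != fires(base_input - 0.4)
```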
Yes. That doesn’t mean the experience is, because a computational zombie will produce the same outputs even if it lacks consciousness, and not coincidentally.
A computational duplicate of a believer in consciousness and qualia will continue to state that it has them, whether it does or not, because it’s a computational duplicate, so it produces the same output in response to the same input.
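A minimal sketch of that point: two implementations that differ internally but share one input→output mapping, so no test on outputs alone can tell them apart. The function names and report strings here are made up for illustration:

```python
# Illustrative sketch: two "agents" with different internals but the same
# input->output behaviour. All names and strings are hypothetical.

def original(prompt: str) -> str:
    # Stands in for the believer: whatever its inner life is like,
    # this is the output it produces.
    if prompt == "Are you conscious?":
        return "Yes, I have qualia."
    return "..."

def duplicate(prompt: str) -> str:
    # A different implementation with the identical input->output mapping.
    responses = {"Are you conscious?": "Yes, I have qualia."}
    return responses.get(prompt, "...")

# Behaviourally indistinguishable on every probe we try:
for p in ["Are you conscious?", "What is 2+2?"]:
    assert original(p) == duplicate(p)
```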
You haven’t eliminated the possibility that something lacking conscious experience could still be a functional duplicate.
I’m saying that we have ruled out that a functional duplicate could lack conscious experience because:
we have established that conscious experience is part of the causal chain that runs from feeling something to outputting a description, through voice or typing, based on that feeling. If conscious experience is part of that causal chain, and the causal chain consists purely of neuron firings, then conscious experience is contained in that functionality.
We can’t invoke the idea that details smaller than neuron firings are where consciousness manifests, because unless those smaller details affect neuronal firing patterns enough to cause the subject to speak about what it feels like to be sentient, they are not part of that causal chain, and sentience must be part of that chain.