The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence a being that has not been deliberately programmed to reproduce talk about consciousness has no reason to produce it unless it is conscious. It is a corollary of this that a zombie, which is physically identical, and therefore not deliberately programmed to imitate talk of consciousness but must still reproduce it, must talk about consciousness for the same reason we do. That is, zombies must be conscious.
A faithful synaptic-level silicon WBE, if it independently starts talking about consciousness at all, must be talking about it for the same reason as us (i.e. consciousness), since it hasn’t been deliberately programmed to fake consciousness-talk. Or, something extremely unlikely has happened.
Note that supposing that how the synapses are implemented could matter for consciousness, even while the macro-scale behaviour of the brain is identical, is equivalent to supposing that consciousness doesn’t actually play any role in our consciousness-talk, since David Chalmers would write just as many papers on the Hard Problem regardless of whether we flipped the “consciousness” bit in every synapse in his brain.
But isn’t it still possible that a simulation that lost its consciousness would still retain memories about consciousness that were sufficient, even without access to real consciousness, to generate potentially even ‘novel’ content about consciousness?
That’s possible, although then the consciousness-related utterances would be of the form “oh my, I seem to have suddenly stopped being conscious” or the like (if you believe that consciousness plays a causal role in human utterances such as “yep, I introspected on my consciousness and it’s still there”), implying that such a simulation would not have been a faithful synaptic-level WBE, since its macro-level behaviour would clearly differ.
> The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence a being that has not been deliberately programmed to reproduce talk about consciousness has no reason to produce it unless it is conscious.
A functional duplicate will talk the same way as whomever it is a duplicate of.
> A faithful synaptic-level silicon WBE, if it independently starts talking about consciousness at all, must be talking about it for the same reason as us (i.e. consciousness),
A WBE of a specific person will respond to the same stimuli in the same way as that person. Logically, that will be because it is a duplicate. Physically, the “reason”, or ultimate cause, could be quite different, since the WBE is physically different.
> since it hasn’t been deliberately programmed to fake consciousness-talk.
It has been programmed to be a functional duplicate of a specific individual.
> Or, something extremely unlikely has happened.
Something unlikely to happen naturally has happened. A WBE is an artificial construct which is exactly the same as a person in some ways, and radically different in others.
> Note that supposing that how the synapses are implemented could matter for consciousness, even while the macro-scale behaviour of the brain is identical, is equivalent to supposing that consciousness doesn’t actually play any role in our consciousness-talk,
Actually it isn’t, for reasons that are widely misunderstood: kidney dialysis machines don’t need nephrons, but that doesn’t mean nephrons are causally idle in kidneys.