Hmm. So, on the Chalmers view, when the AI concludes that it has no way of knowing whether it is epiphenomenally conscious and abandons the belief that it is mysteriously so, would its consciousness ‘evaporate,’ or are there qualia of not being aware of any qualia? It seems that Chalmers might say that in non-zombie worlds the epiphenomenal AI would still be conscious of various things (like the ‘redness’ of red), just not conscious of its consciousness. [Given our ‘bridging laws,’ the epiphenomenal self can only think “cogito ergo sum” when the physical self does.]
Credulous
Richard,
So what, in our world, would be the subjective experience of the AI in Eliezer’s example when it corrects its internal make-up so that it no longer performs computations or makes utterances as though it were aware of qualia?