I assume we could also try to extract what an AI “feels” when it speaks of the redness of red, and compare it with a similar redness extract from the human mind.
Well, what happens if we do this and we find out that these representations are totally different? Or, going further, that the AI’s representation of “red” does not seem to align (either in meaning or in structure) with any human-extracted concept or perception? How would we then try to figure out the essence of artificial consciousness, given that comparisons with what we would (at that point) understand best, i.e., human qualia, would no longer yield something we can interpret?
I think it is extremely likely that minds with fundamentally different structures perceive the world in fundamentally different ways, so I think the situation in the paragraph above is not only possible, but in fact overwhelmingly likely, conditional on us managing to develop the type of qualia-identifying tech you are talking about. It certainly seems to me that, in such a spot, there would be a fair bit more to answer about this topic.
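One concrete (and purely illustrative) way to cash out what “aligns in structure” could mean: take the same set of color concepts in both systems, build a pairwise-similarity matrix for each, and check how well the two matrices correlate, in the spirit of representational similarity analysis. A minimal sketch, assuming we already had concept vectors extracted from the AI and from human data; all names and data below are placeholders:

```python
# Illustrative sketch only: compare the *structure* of two representation
# spaces by correlating their pairwise-similarity matrices over the same
# set of concepts (e.g., "red", "orange", "blue", ...).
import numpy as np
from scipy.stats import spearmanr

def similarity_matrix(vectors):
    """Pairwise cosine similarities between concept vectors (one per row)."""
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    return normed @ normed.T

def structural_alignment(ai_vectors, human_vectors):
    """Spearman correlation between the upper triangles of the two
    similarity matrices; a value near zero would suggest the two
    systems organize the concepts very differently."""
    ai_sim = similarity_matrix(ai_vectors)
    human_sim = similarity_matrix(human_vectors)
    idx = np.triu_indices_from(ai_sim, k=1)
    rho, _ = spearmanr(ai_sim[idx], human_sim[idx])
    return rho

# Hypothetical usage: placeholder data standing in for concept vectors
# extracted from an AI's activations and from human behavioral/neural data.
ai_vectors = np.random.randn(8, 64)
human_vectors = np.random.randn(8, 32)
print(structural_alignment(ai_vectors, human_vectors))
```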
Well, what happens if we do this and we find out that these representations are totally different? Or, going further, that the AI’s representation of “red” does not seem to align (either in meaning or in structure) with any human-extracted concept or perception?
I would say that it is a fantastic step forward in our understanding, empirically resolving a question we did not know the answer to.
How would we then try to figure out the essence of artificial consciousness, given that comparisons with what we would (at that point) understand best, i.e., human qualia, would no longer yield something we can interpret?
That would be a great stepping stone for further research.
I think it is extremely likely that minds with fundamentally different structures perceive the world in fundamentally different ways, so I think the situation in the paragraph above is not only possible, but in fact overwhelmingly likely, conditional on us managing to develop the type of qualia-identifying tech you are talking about.
I’d love to see this prediction tested, wouldn’t you?
I agree with all of that; my intent was only to make clear (by giving an example) that even after the development of the technology you mentioned in your initial comment, there would likely still be something that “remains” to be analyzed.
Yeah, that was my question: would there be something that remains? And it sounds like Chalmers and others would say that there would be.