I think you’re missing (or I am) the distinction between feeling and reporting a feeling. Comparing reports is clearly insufficient across humans or LLMs.
Hmm, I am probably missing something. I thought if a human honestly reports a feeling, we kind of trust them that they felt it? So if an AI reports a feeling, and there is a conduit through which the distillate of that feeling is transmitted to a human, who then reports the same feeling, wouldn't that go some way toward accepting that the AI had qualia? I think you are saying that this does not address Chalmers’ point.
I thought if a human honestly reports a feeling, we kind of trust them that they felt it?
Out of politeness, sure, but not rigorously. The “hard problem of consciousness” is that we don’t know whether what they felt is the same as what we take their report to mean.