"Suppose we understand the brain well enough to figure out what makes one experience specific qualia"

I think this restates the hard problem, rather than reducing it. We first have to define and detect qualia. As long as qualia are only self-reported, there is no way to know whether two people's qualia are similar, nor how to test a "qualia transducer".

The testing seems easy: one person feels the quale, the other reports the feeling, they compare. What am I missing?
I think you’re missing (or I am) the distinction between feeling and reporting a feeling. Comparing reports is clearly insufficient across humans or LLMs.
Hmm, I am probably missing something. I thought if a human honestly reports a feeling, we kind of trust them that they felt it? So if an AI reports a feeling, and there is a conduit through which the distillate of that feeling is transmitted to a human, who then reports the same feeling, would that go some way toward accepting that the AI had qualia? I think you are saying that this does not address Chalmers' point.
"I thought if a human honestly reports a feeling, we kind of trust them that they felt it?"
Out of politeness, sure, but not rigorously. The “hard problem of consciousness” is that we don’t know if what they felt is the same as what we interpret their report to be.