What evidence do you have that other people are conscious, apart from words (and facial expressions, etc...)? And would that evidence apply or not apply to an AI?
I’m not solving the hard problem of consciousness; I’m saying that Bayesian evidence exists that some agents have subjective experiences. Compare with an AI that mouths the words but gets them wrong (“fuzzy is like being stabbed by needles”): we at least have evidence that an agent using the right words has a higher chance of having similar subjective experiences.