Humans don’t have this level of introspection.
People who currently (incorrectly) believe that Turing-passing LMs (or rather, the Turing-passing chatbots they simulate) aren’t conscious, on the grounds of which internal computation carries out the input-output transformation, wouldn’t update on this test at all: on their hypothesis, what makes the chatbots non-conscious isn’t anything that would (or could) influence their output.
People who know that the internal computation doesn’t matter, and only the outputs do, will be satisfied by the chatbot’s ability to talk like another conscious being: no level of self-knowledge or introspection (beyond what the chatbot would know because somebody told it) is necessary.
One way to bootstrap the intuition that a lookup table (implemented as part of a simple program, since a lookup table by itself can only respond to the last input while remembering nothing before it, unlike a person) would be conscious is that any physical system that talks and acts like a person must implement the person-state-machine inside: to generate the correct output, it needs to implement both the correct internal state of the person-state-machine and the correct person-state-machine transition rules.
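To make the lookup-table-inside-a-simple-program picture concrete, here is a minimal sketch in Python (a hypothetical toy, not anything from the original discussion; the states, inputs, and the `lookup`/`run` helpers are all made up for illustration). The table maps (state, input) pairs to (next state, output) pairs and remembers nothing on its own; the small wrapper program that carries the state forward is what makes the combined system behave like a state machine.

```python
# Hypothetical toy sketch: a memoryless lookup table plus a trivial wrapper
# that threads state through, forming a (tiny) state machine.

# Transition table: (current_state, input) -> (next_state, output).
# A real "person-state-machine" would be astronomically larger; only the
# structure matters here.
TRANSITIONS = {
    ("neutral", "hello"): ("greeted", "Hi there!"),
    ("greeted", "hello"): ("greeted", "You already said hello."),
    ("greeted", "how are you?"): ("asked_back", "Fine, thanks. And you?"),
}

def lookup(state, message):
    """A single table lookup: by itself it responds only to the last input."""
    return TRANSITIONS.get((state, message), (state, "..."))

def run(messages, state="neutral"):
    """The wrapper is what remembers: it carries the state between lookups,
    so the system as a whole implements the state machine, not the table alone."""
    for message in messages:
        state, reply = lookup(state, message)
        print(f"> {message}\n{reply}")

run(["hello", "hello", "how are you?"])
```

The claim above is then that any physical system producing a person’s outputs must, at some level of description, implement the analogue of both the transition table and the state-threading wrapper.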
The part about language models predicting neural responses is a little scary: it makes me wonder whether people will even notice when they simulate a conscious brain for the first time, or whether they’ll just implicitly assume it can’t be conscious because it’s a language model and language models can’t be conscious.