Yes, the Turing Test is entirely unsatisfactory for determining consciousness. We would need interpretability tools for that.
This is another reason to stop training new models and instead use what we already have to construct complex LMAs. If we stick to explicit scaffolding rules, LMAs are not going to be conscious, unless LLMs already are.