I don’t think that in the example you give you’re making a token-predicting transformer out of a human emulation; you’re making one out of a virtual system that has a human emulation as a component. Within that system, the words “what’s your earliest memory?” appearing on the paper will trigger all sorts of interesting (emulated) neural mechanisms that eventually lead to a verbal response, but the token predictor doesn’t necessarily need to emulate any of that. In fact, if the emulation is deterministic, the predictor can simply memorize whatever response is given. Maybe gradient descent is likely to make the LLM conscious as the most efficient way to memorize the outputs of a partly conscious system, but that’s not obvious.
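To make the memorization point concrete, here’s a minimal Python sketch (all names hypothetical, and the emulation reduced to a stub): a deterministic system can be matched on its input/output behavior by a pure lookup table that reproduces none of its internals.

```python
# Hypothetical toy: a deterministic "emulation" matched by pure memorization.

def emulate_brain(prompt: str) -> str:
    """Stub for an expensive, possibly-conscious deterministic emulation."""
    # ...billions of emulated neural events would happen here...
    return "My earliest memory is flying a red kite."

memorized: dict[str, str] = {}

def predictor(prompt: str) -> str:
    """Matches the emulation's outputs without emulating anything."""
    if prompt not in memorized:
        # "Training": observe the emulation's output once and store it.
        memorized[prompt] = emulate_brain(prompt)
    return memorized[prompt]  # afterwards: a plain dictionary lookup

assert predictor("what's your earliest memory?") == emulate_brain(
    "what's your earliest memory?"
)
```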
If you have a brain emulation, the best way to get a conscious LLM seems to me to be to find a way to tokenize the emulation’s states and train the LLM on those.
Not necessarily: a lot of information is discarded when you only look at the paper/verbal output. As an extreme example, if the emulated brain had been instructed (or had the memory of being instructed) to say the number of characters written on the paper and nothing else, the computational properties of the system as a whole would be much simpler than those of the emulation itself.
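A minimal sketch of that extreme case (hypothetical names, the emulation again reduced to a stub): however elaborate the internal process, the system’s input/output map collapses to counting characters, which is all a predictor of its outputs would ever need to learn.

```python
# Hypothetical toy: the whole system's I/O behavior collapses to len().

def emulated_system(paper_text: str) -> str:
    """Internally, a full brain emulation reads the paper, recalls its
    instruction, deliberates... but by construction it only ever reports
    the character count."""
    return str(len(paper_text))

def trivial_predictor(paper_text: str) -> str:
    """A predictor matching the verbal output needs none of those internals."""
    return str(len(paper_text))

for text in ["what's your earliest memory?", "hello"]:
    assert emulated_system(text) == trivial_predictor(text)
```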
I might be missing the point. I agree with you that an architecture that predicts tokens isn’t necessarily non-conscious. I just don’t think the fact that a system predicts tokens generated by a conscious process is reason to suspect that the system itself is conscious, absent some further argument.