Since consciousness seems useful for all these different species, in a convergent-evolution pattern even across very different brain architectures (mammals vs. birds), I believe we should expect it to be useful in our hominid-simulator-trained model. If so, we should be able to measure this difference relative to a next-token predictor trained on an equivalent number of tokens from a dataset of, for instance, math problems.
What do you mean by difference here? An increase in performance due to consciousness, or differences in functions?
I’m not sure we could measure this difference. It seems very likely to me that consciousness evolved before, say, language and complex agency. But complex language and complex agency might not require consciousness, and may capture all of the benefits that would be captured by consciousness, so consciousness wouldn’t result in greater performance.
However, it could be that:
- humans do not consistently have complex language and complex agency, and humans with agency are fallible as agents, so consciousness in most humans is still useful to us as a species (or to our genes);
- building complex language and complex agency on top of consciousness is the locally cheapest way to build them, so consciousness would still be useful to us; or
- we reached a local maximum in terms of genetic fitness, or evolutionary pressures on us are now too weak, and it's not really possible to evolve away consciousness while preserving complex language and complex agency, so consciousness isn't useful to us but can't practically be gotten rid of without a loss in fitness.
Some other possibilities:
- The adaptive value of consciousness is really just to give us certain motivations, e.g. finding our internal processing mysterious, nonphysical or interesting makes it seem special to us, and this makes us
  - value sensations for their own sake, and so seek sensations and engage in sensory play, which may help us learn more about ourselves or the world (according to Nicholas Humphrey, as discussed here, here and here),
  - value our lives more and work harder to prevent early death, and/or
  - develop spiritual or moral beliefs and associated adaptive practices.
- Consciousness is just the illusion of the phenomenality of what's introspectively accessible to us. Furthermore, we might incorrectly believe in its phenomenality just because much of the processing we have introspective access to is wired in and its causes are not introspectively accessible, but instead cognitively impenetrable. The full illusion could be a special case of humans incorrectly reaching for supernatural explanations for unexplained but interesting and subjectively important or profound phenomena.