I think there's a somewhat productive direction explored in these three posts, though it isn't very object-level; it's more about the epistemics of it all. You could also look up how LLM internal states overlap with / predict / correspond to brain scans of people engaged in certain tasks; I think there were a couple of papers on that.
Shameless self-promotion: this one https://www.lesswrong.com/posts/ASmcQYbhcyu5TuXz6/llms-could-be-as-conscious-as-human-emulations-potentially
It circumvents the object-level question and instead looks at the epistemic one.
This one is about the broader question of how recent events change people's attitudes and opinions:
https://www.astralcodexten.com/p/sakana-strawberry-and-scary-ai
This one too, about consciousness in particular:
https://dynomight.net/consciousness/
On the brain-scan correspondence point, see e.g. https://www.neuroai.science/p/brain-scores-dont-mean-what-we-think