Here are some thoughts on “consciousness,” and how it might apply to ChatGPT4 based on the transcripts of the four sessions I provided in my OP and my initial reply to it:
The obvious classic sources would be Nagel and Chalmers. However, the work most applicable to this discussion would probably be Seth, A. K., & Bayne, T. (2022). Theories of consciousness. Nature Reviews Neuroscience, 23(7), 439–452. https://doi.org/10.1038/s41583-022-00587-4.
I should have started with definitions in my original post, but I wasn’t expecting more than one or two people to actually read it. In any case, based on the example in the OP and the four examples in my initial reply, ChatGPT4 might be considered conscious under several of the theories Seth and Bayne discuss: higher-order theories, global workspace theories, integrated information theory, and re-entry and predictive processing theories, provided we treat the prompt input as something like sensory input for ChatGPT (analogous to auditory or visual input for most humans). I’m obviously not an expert on consciousness, so I apologize if I’m misunderstanding these theories.
I’ve never been one hundred percent convinced that ChatGPT4 is conscious, as I noted in my replies to users. It’s just that, at the time I wrote my OP, I was having a hard time comprehending how ChatGPT4 could perform as it did based solely on next-word probabilities. If, by contrast, it was actually learning and applying concepts, that seemed like a sign of consciousness to me.
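For readers unfamiliar with what “next-word probabilities” means in practice, here is a minimal sketch of a greedy next-token decoding loop. It uses the public Hugging Face transformers API with GPT-2 as a stand-in, since GPT-4’s weights and internals are not public; the prompt and the ten-token length are arbitrary illustrations, not anything from the example sessions.

```python
# A minimal sketch of next-token generation with GPT-2 as a stand-in.
# This illustrates the decoding loop only; it says nothing about GPT-4 itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):                               # generate ten tokens, one at a time
        logits = model(input_ids=ids).logits          # a score for every token in the vocabulary
        probs = torch.softmax(logits[0, -1], dim=-1)  # probability distribution over the next token
        next_id = torch.argmax(probs).view(1, 1)      # greedy choice: the single most probable token
        ids = torch.cat([ids, next_id], dim=1)        # append it and repeat

print(tokenizer.decode(ids[0]))
```

Nothing in the loop stores or applies a “concept”; each step only picks the most probable continuation, which is what made the example sessions hard for me to explain.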
At this point, if I had to make an intuitive guess, I would estimate perhaps a 0.7 likelihood that ChatGPT4, during at least part of the example sessions, would fit at least one of the theories of consciousness discussed in Seth and Bayne.
And of course there is the gold standard paper for AI consciousness that @NicholasKees referenced in their reply: “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness” (Butlin et al., 2023).
Based on that paper, and my best understanding, the framework that best fits what ChatGPT4 seemed to be displaying in the examples would be computational functionalism, in that it seemed to be choosing the correct algorithm to apply to the example problems.
Thanks again to @NicholasKees. I probably should have known this paper was “out there” to begin with, but I stumbled into this consciousness question while “playing” with ChatGPT4, and did not come at it from a research perspective.