Experimenting with beta.character.ai
Transcript Link: https://imgur.com/a/EnRLfLc (I am deadzone in this conversation).
This transcript convinced me that this large language model was definitely more conscious than a monkey, and not far from human-level consciousness. I believe that with slightly more parameters we could get something indistinguishable from a human, one that could contribute to open source, discover new theorems, and so on.
To me, it just feels like we are 5–6 years from AGI. To be more concrete: I think a model with 100x GPT-3's parameter count will easily be superhuman at all tasks.
What is your definition of “conscious”, other than “I know it when I see it”?
I think it is a modelling insight: when your model of reality is sufficiently accurate, you also model yourself as a part of that reality and become conscious. This chatbot had a very good model of what I was saying, which is how it generated appropriate responses.
Some control systems include models of themselves. Does that make them conscious?
My computer can give me a detailed report about its hardware and software components. Does that make it conscious?
I say no to both of these.
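For concreteness, here is a toy sketch (entirely hypothetical, names my own) of what a control system containing a model of itself can look like: it acts, records what it did in its self-model, and can report on its own behaviour, and none of that suggests consciousness.

```python
# Toy sketch: a thermostat that keeps an explicit model of itself.
class SelfModelingThermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.heater_on = False
        # The controller's model of itself: its own rule and last action.
        self.self_model = {"rule": "heater on iff temp < setpoint",
                           "last_action": None}

    def step(self, temp: float) -> bool:
        # Act on the world.
        self.heater_on = temp < self.setpoint
        # Update the self-model to reflect what the controller just did.
        self.self_model["last_action"] = "on" if self.heater_on else "off"
        return self.heater_on

    def report_self(self) -> dict:
        # The system can describe itself, much like a hardware report.
        return dict(self.self_model)


thermostat = SelfModelingThermostat(setpoint=20.0)
thermostat.step(18.5)
print(thermostat.report_self())
# {'rule': 'heater on iff temp < setpoint', 'last_action': 'on'}
```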
Yes, but those control systems do not have an accurate model of reality, in the sense that they cannot model my mind at all, and my mind is a part of reality.
I took your earlier comment to be talking specifically about modelling oneself, and claiming that this is the attainment of consciousness: “you also model yourself as a part of that reality and become conscious.” Modelling other people did not appear to come into it.
The systems I mentioned model themselves. Yet they are not conscious. Therefore modelling oneself is insufficient for consciousness. (I doubt that it is necessary either.)