[Question] Would AI experts ever agree that AGI systems have attained “consciousness”?
If you asked a dog whether humans were conscious, would it say yes? I think probably not. Asked whether a human is “conscious”, a dog might mention things like:
Humans seem somewhat aware of their surroundings, but they aren’t really conscious beings.
They are up and awake so often, yet spend almost all their time blankly staring at objects in their hands or on tabletops, or randomly moving objects around the den.
They have very little understanding of the importance of the pack, hierarchy, safety in numbers, and so on.
They have almost no grasp of life’s most important dangers, like strange smells and sounds, or uniformed strangers approaching the den.
In the same way, many (perhaps most?) AI experts might never agree that LLMs or AGI systems have achieved “consciousness”. After all, “consciousness” is just a word describing how a particular kind of being thinks, observes, prioritizes, and acts in the world, whether human, other animal, or AI.
“Consciousness” operates very differently in different kinds of beings. Unless a being can genuinely expand its understanding of intelligence, awareness, and perception, it may never agree that any other kind of being or entity, beyond its own kind, is conscious.