I’m not sure why your Venn diagram doesn’t have “sentient” lying entirely within “conscious”
Noggin-scratcher made a much improved version of my diagram that does exactly what you suggest: https://i.imgur.com/5yAhnJg.png
I prefer that one!
Why is sapience without sentience or self-awareness marked “impossible”?
In my opinion, a sapient entity is one that possesses something like “general-purpose intelligence”. Wouldn’t such an entity quickly realize that it’s a distinct entity? That would mean it’s self-aware.
Maybe it doesn’t realize this at first, but once you communicate with it, you should be able to “make it self-aware” quite quickly. If it’s unable to become self-aware, I wouldn’t ascribe “human-level sapience” to the entity.
I do agree that I can imagine a very intelligent machine that is completely unable to gather information about its own status. Edge case!
It may be that sentience is literally just the intersection of self-awareness and consciousness
Using the definition proposed in the article, I can easily imagine an entity with an “internal observer” that has full knowledge of its own conditions, yet still lacks the ability to feel pleasure or pain.
Consciousness may be nothing more or less than self-awareness.
I do agree that I can imagine a very intelligent machine that is completely unable to gather information about its own status. Edge case!
I think it’s less edge than it might at first seem. Even far more complex and powerful GPT-like models may be incapable of self-awareness, and we may deliberately create such systems for AI safety reasons.
Using the definition proposed in the article, I can easily imagine an entity with an “internal observer” that has full knowledge of its own conditions, yet still lacks the ability to feel pleasure or pain.
I think it would have to never have any preferences for its experiences (or anticipated future experiences), not just pleasure or pain. It is conceivable though, so I agree that my supposition misses the mark.
How would you define self-awareness?
As in the post, self-awareness to me seems vaguely defined. I tend to think of both self-awareness and sapience as being in principle more like scales of many factors than binary categories.
In practice we can easily identify capabilities in reasoning, communicating, and modelling that separate us from animals, and proudly proclaim ourselves to be sapient where all the rest aren’t. This seems pretty clear-cut, but various computer systems seem to be crossing the animal-human gap in some aspects of “sapience” and have already surpassed humans in others.
Self-awareness is harder to draw a clear boundary for, and scales really do seem more appropriate here. There are various behavioural clues to “self-awareness”, and they seem to cover a range of different capabilities. In general I’d define it as an entity being able to model its own actions, experiences, and capabilities in some manner, and to update those models.
Cats seem to be above whatever subjective “boundary” I have in mind, even though most of them fail the mirror test. I suspect the mirror test sits further toward “sapience”: passing it requires forming a new model that predicts the actions of the reflection perfectly in terms of the animal’s own actions as viewed from the outside. I suspect cats do have a reasonably detailed self-model, but mostly can’t perform that last, rather complex transformation to match the outside view up with their internal model.