So my theory is that I can perceive myself as a human mind mostly because the self-reflecting model—which is me—has been trained to perceive other human minds so well that it learned to generalize to itself.
What’s your theory for why consciousness is actually your ability to perceive yourself as a human mind? From your explanation it seems to be:
You think (and say) you have consciousness.
When you examine why you think it, you use your ability to perceive yourself as a human mind.
Therefore consciousness is your ability to perceive yourself as a human mind.
You are basically saying that the consciousness detector in the brain is an “algorithm of awareness” detector (and an algorithm of awareness can work as an “algorithm of awareness” detector). But what are the actual reasons to believe it? Only that if it is awareness, then it explains why you can detect it? It certainly is not a perfect detector, because some people will explicitly say “no, my definition of consciousness is not about awareness”. And it doesn’t automatically fit into “If you have a conscious mind subjectively perceiving anything about the outside world, it has to feel like something” if you just replace “conscious” with “able to perceive itself”.
Those are all great points. Regarding your first question, no, that’s not the reasoning I have. I think consciousness is the ability to reflect on myself firstly because it feels like the ability to reflect on myself. It’s kind of like the reason I believe I can see: when I open my eyes I start seeing things, and if I interact with those things they really are mostly where I see them; nothing more sophisticated than that. There are a bunch of longer, more theoretical arguments I can bring for this point, but I never thought I should, because I was kind of taking it as a given. It may well be me falling into the typical mind fallacy, if you say some people say otherwise. So if you have different intuitions about consciousness, can you tell me:
How do you subjectively, from the first-person view, know that you are conscious?
Can you genuinely imagine being conscious but not self-aware, from the first-person view?
If you get to talk to, and interact with, an alien or an AI of unknown power and architecture, how would you go about finding out whether they are conscious?
And it doesn’t automatically fit into “If you have a conscious mind subjectively perceiving anything about the outside world, it has to feel like something” if you just replace “conscious” with “able to perceive itself”.
Well, no, it doesn’t fit quite that simply, but overall I think it works out. If you have an agent able to reflect on itself and model itself perceiving something, it’s going to reflect on the fact that it perceives something. I.e. it’s going to have some mental representation both for the perception and for itself perceiving it. It will be able to reason about itself perceiving things, and if it can communicate it will probably also talk about it. Different perceptions will stand in relation to each other (e.g. the sky is not the same color as grass, and the color of grass is associated with summer, warmth and so on). And, perhaps most importantly, it will have models of other such agents perceiving things, and it will represent, at a high abstract level, that they have the same perceptions in them. But it will only have access to the lower-level data for such perceptions from its own sensory inputs, not others’, so it won’t be able to tell for sure what it “feels like” to them, because it won’t be getting their stream of low-level sensory inputs.
In short, I think—and please do correct me if you have a counterexample—that we have reasons to expect such an agent to make any claim humans make (given similar circumstances and training examples), and we can make any testable claim about such an agent that we can make about a human.
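To make that a bit more concrete, here is a minimal toy sketch in Python (purely illustrative; the class and field names are mine, invented for this comment) of the kind of representational setup I have in mind: the agent stores a percept, a representation of itself having that percept, and models of other agents as having percepts of the same abstract kind, with low-level data available only for its own.

```python
# Toy sketch only: an agent that represents both a perception and itself
# perceiving it, and that models other agents as having the same abstract
# kind of percepts while lacking their low-level sensory data.

from dataclasses import dataclass, field


@dataclass
class Percept:
    label: str        # high-level, abstract description ("green grass")
    raw: list = None  # low-level sensory data; only available for the agent itself


@dataclass
class AgentModel:
    name: str
    percepts: list = field(default_factory=list)  # what this agent is modeled as perceiving


class ReflectiveAgent:
    def __init__(self, name):
        self.name = name
        self.self_model = AgentModel(name)  # a model of itself, same shape as its models of others
        self.other_models = {}

    def perceive(self, label, raw):
        # The agent stores not just the percept but the fact that *it* perceives it.
        self.self_model.percepts.append(Percept(label, raw))

    def model_other(self, other_name, label):
        # It attributes the same abstract kind of percept to another agent,
        # but has no access to that agent's low-level data (raw stays None).
        model = self.other_models.setdefault(other_name, AgentModel(other_name))
        model.percepts.append(Percept(label, raw=None))

    def report(self):
        for p in self.self_model.percepts:
            print(f"I perceive {p.label} (low-level data: {p.raw is not None})")
        for other in self.other_models.values():
            for p in other.percepts:
                print(f"I model {other.name} as perceiving {p.label} (low-level data: {p.raw is not None})")


agent = ReflectiveAgent("A")
agent.perceive("green grass", raw=[0.1, 0.8, 0.2])
agent.model_other("B", "green grass")
agent.report()
```

The asymmetry in the output is the point: the agent can attribute the same abstract percept to another agent, but the raw field is only ever populated for its own perceptions.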
To me it looks like the defining feature of the consciousness intuition is one’s certainty in having it, so I define consciousness as the only thing one can be certain about, and then I know I am conscious by executing “cogito ergo sum”.
I can imagine disabling specific features associated with awareness, starting with memory: seeing something without remembering it feels like seeing something and then forgetting about it. Usually when you don’t remember seeing something recent it means your perception wasn’t conscious, but you have certainly forgotten some conscious moments in the past.
Then I can imagine not having any thoughts. It is harder to do for long periods of time, but I can create short stretches of just seeing that, as far as I remember, are not associated with any thoughts.
At that point it becomes harder to describe this process as self-awareness. You could argue that if there is a representation of the lower level somewhere in the higher level, then it is still modeling. But there is no more reason to consider these levels parts of the same system than to consider any sender-receiver pair a self-modeling system.
I don’t know. It’s all ethics, so I’ll probably just check for some arbitrary similarity-to-human-mind metric.
we have reasons to expect such an agent to make any claim humans make
Depending on the detailed definitions of “reflect on itself” and “model itself perceiving”, I think you can make an agent that wouldn’t claim to be perfectly certain of its own consciousness. For example, I don’t see a reason why some simple Cartesian agent with direct read-only access to its own code would think in terms of consciousness.
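To illustrate what I mean, here is a deliberately simple toy sketch (not a serious model of such an agent; Python’s inspect.getsource is just standing in for “direct read-only access to its own code”):

```python
# Toy sketch: an agent whose only self-access is reading its own source text.
# (inspect.getsource needs the code to live in a file.)
import inspect


class CartesianAgent:
    def introspect(self):
        source = inspect.getsource(CartesianAgent)  # read-only access to its own code
        return {
            "lines_of_code": len(source.splitlines()),
            "methods": [name for name, _ in inspect.getmembers(CartesianAgent, inspect.isfunction)],
        }


agent = CartesianAgent()
print(agent.introspect())
```

It can answer questions about itself by inspecting its code, but nothing in this setup pushes it toward a claim of certainty about its own consciousness.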
But it will only have access to the lower-level data for such perceptions from its own sensory inputs, not others’, so it won’t be able to tell for sure what it “feels like” to them, because it won’t be getting their stream of low-level sensory inputs.
That’s nothing new; it’s the intuition that the Mary thought experiment is designed to address.