I believe it depends on one’s preferences. Wait, you think it doesn’t? By “ability to do recursion” I meant “ability to predict its own state altered by receiving the signal”, or whatever the difference of the top level is supposed to be. I assumed that in your model whoever doesn’t implement it doesn’t have qualia and therefore doesn’t feel pain, because there is no one to feel it. And for those interested in the Hard Problem, the question would be “why does this specific physical arrangement, interpreted as recursive modeling, feel so different from when the pain didn’t propagate to the top level?”.
I don’t think qualia—to the degree it is at all a useful term—has much to do with the ability to feel pain, or to feel anything. In my understanding all definitions of qualia assume it is a different thing from purely neurological perceptions (which is what I’d understand by “feelings”), more specifically that the perceptions can generate qualia sometimes in some creatures, but they don’t do so automatically.
Otherwise you’d have to argue one of the two:
Either even the most primitive animals, like worms which you can literally simulate neuron by neuron, have qualia as long as they have some senses and neurons.
…or “feel pain” and e.g. “feel warmth” are somehow fundamentally different, where the first necessarily requires/produces a quale and the second may or may not produce one.
Both sound rather indefensible to me, so it follows that an animal can feel pain without experiencing a quale of it, just like a scallop can see the light without experiencing a quale of it. But two caveats on this. First, I don’t have a really good grasp on what a quale is, and as Wikipedia attests, neither do the experts. I feel there’s some core of truth that people are trying to get at with this concept (something along the lines of what you said in your first comment), but it’s also very often used as a rug for people to hide their confusion under, so I’m always skeptical about using this term. Second, whether or not one should ascribe any moral worth to agents without consciousness/qualia is decidedly not a part of what I’m saying here. I personally do, but as you say it depends on one’s preferences, and so it’s largely orthogonal to the question of how consciousness works.
In my understanding all definitions of qualia assume it is a different thing from purely neurological perceptions (which is what I’d understand by “feelings”), more specifically that the perceptions can generate qualia sometimes in some creatures, but they don’t do so automatically.
Of course, the minimal definition of “qualia” I have been using doesn’t have that implication.

Your “definition” (which really isn’t a definition, just three examples) has almost no implications at all; that’s my only issue with it.

That’s a feature, since it begs the minimal number of questions.
Ok, by these definitions what I was saying is “why does not having the ability to do recursion stop you from having pain-qualia?”. Just feeling like there is a core of truth to qualia (“conceivability” in zombie language) is enough to ask your world-model to provide a reason why everything, including recursively self-modeling systems, doesn’t just consist of qualia-less feelings—why is recursive self-modeling not just another kind of reaction and perception?
Ah, I see. My take on this question would be that we should focus on the word “you” rather than “qualia”. If you have a conscious mind subjectively perceiving anything about the outside world (or its own internal workings), it has to feel like something, almost by definition. Like, if you went to get your covid shot and it hurt, you’d say “it felt like something”. If and only if somehow you didn’t even feel the needle piercing your skin, you’d say “I didn’t feel anything”. There have been experiments showing that people can react to a stimulus they are not subjectively aware of (mostly visual stimuli), but I’m pretty sure in all those cases they’d say they didn’t see anything—basically that’s how we know they were not subjectively aware of it. What would it even mean for a conscious mind to be aware of a stimulus without it “feeling like something”? It must have some representation in the consciousness; that’s basically what we mean by “being aware of X” or “consciously experiencing X”.
So I’d say that, given a consciousness experiencing stuff, you necessarily have conscious experiences (aka qualia); that’s basically a tautology. So the question becomes why some things have consciousness, or, to narrow it down to your question—why are (certain) recursively self-modeling systems conscious? And that’s kind of what I was trying to explain in part 4 of the post, and approximately the same idea, just from another perspective, is much better covered in this book review and this article.
But if I tried to put it in one paragraph, I’d start with: how do I know that I’m conscious, and why do I think I know it? And the answer would be a ramble along the lines of: well, when I look into my mind I can see me, i.e. some guy who thinks and makes decisions and is aware of things, and has emotions and memories and so on and so forth. And at the same time as I see this guy, I also am this guy! I can have different thoughts whenever I choose to (to a degree), I can do different things whenever I choose to (to a still more limited degree), and at the same time I can reflect on the choice process. So my theory is that I can perceive myself as a human mind mostly because the self-reflecting model—which is me—has trained to perceive other human minds so well that it learned to generalize to itself (see the whole post for the details). Although Graziano, in the article and book I linked, provides a very convincing explanation as to why this self-modeling would also be very helpful for general reasoning ability—something I was unsuccessfully trying to figure out in part 5.
So my theory is that I can perceive myself as a human mind mostly because the self-reflecting model—which is me—has trained to perceive other human minds so well that it learned to generalize to itself.
What’s your theory for why consciousness is actually your ability to perceive yourself as a human mind? From your explanation it seems to be:
You think (and say) you have consciousness.
When you examine why you think it, you use your ability to perceive yourself as a human mind.
Therefore consciousness is your ability to perceive yourself as a human mind.
You are basically saying that the consciousness detector in the brain is an “algorithm of awareness” detector (and the algorithm of awareness can work as an “algorithm of awareness” detector). But what are the actual reasons to believe it? Only that if it is awareness, then that explains why you can detect it? It certainly is not a perfect detector, because some people will explicitly say “no, my definition of consciousness is not about awareness”. And it doesn’t automatically fit into “If you have a conscious mind subjectively perceiving anything about the outside world, it has to feel like something” if you just replace “conscious” with “able to perceive itself”.
Those are all great points. Regarding your first question, no, that’s not the reasoning I have. I think consciousness is the ability to reflect on myself firstly because it feels like the ability to reflect on myself. Kind of like how the reason I believe I can see is that when I open my eyes I start seeing things, and if I interact with those things they really are mostly where I see them, nothing more sophisticated than that. There’s a bunch of longer, more theoretical arguments I can bring for this point, but I never thought I should, because I was kind of taking it as a given. It may well be me falling into the typical mind fallacy, if you say some people say otherwise. So if you have different intuitions about consciousness, can you tell me:
How do you subjectively, from the first-person view, know that you are conscious?
Can you genuinely imagine being conscious but not self-aware, from the first-person view?
If you get to talk to and interact with an alien or an AI of unknown power and architecture, how would you go about finding out whether they are conscious?
And it doesn’t automatically fit into “If you have a conscious mind subjectively perceiving anything about the outside world, it has to feel like something” if you just replace “conscious” with “able to perceive itself”.
Well, no, it doesn’t fit quite that simply, but overall I think it works out. If you have an agent able to reflect on itself and model itself perceiving something, it’s going to reflect on the fact that it perceives something. I.e. it’s going to have some mental representation both for the perception and for itself perceiving it. It will be able to reason about itself perceiving things, and if it can communicate it will probably also talk about it. Different perceptions will stand in relations to each other (e.g. the sky is not the same color as grass, and grass color is associated with summer and warmth and so on). And, perhaps most importantly, it will have models of other such agents perceiving things, and it will represent, on a high abstract level, that they have the same perceptions in them. But it will only have access to the lower-level data for such perceptions from its own sensory inputs, not others’, so it won’t be able to tell for sure what it “feels like” to them, because it won’t be getting their stream of low-level sensory inputs.
In short, I think—and please do correct me if you have a counterexample—that we have reasons to expect such an agent to make any claim humans make (given similar circumstances and training examples), and we can make any testable claim about such an agent that we can make about a human.
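If it helps, here is a minimal toy sketch of the asymmetry I have in mind (nothing from the post; every class and field name below is invented purely for illustration): the agent’s model of itself is grounded in both its own raw sensory stream and a high-level description of itself perceiving things, while its models of other agents only ever hold high-level attributions.

```python
# Toy sketch only, not code from the post; all names here are made up.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Percept:
    """High-level, communicable description of a perception ("blue sky")."""
    label: str


@dataclass
class OtherAgentModel:
    """Model of another agent: only abstract attributions, no raw data."""
    name: str
    attributed: list[Percept] = field(default_factory=list)


class SelfModelingAgent:
    def __init__(self) -> None:
        self.raw_stream: list[float] = []        # own low-level sensory input
        self.self_percepts: list[Percept] = []   # representations of "me perceiving X"
        self.others: dict[str, OtherAgentModel] = {}

    def sense(self, raw: list[float], label: str) -> None:
        # The agent gets both the raw data and a representation of itself
        # perceiving it ("I am seeing a blue sky").
        self.raw_stream.extend(raw)
        self.self_percepts.append(Percept(label))

    def model_other(self, name: str, label: str) -> None:
        # For another agent it can only attribute the high-level description;
        # the other agent's raw stream never reaches it.
        model = self.others.setdefault(name, OtherAgentModel(name))
        model.attributed.append(Percept(label))

    def report(self, name: str | None = None) -> str:
        if name is None:
            # First-person report, grounded in both levels.
            labels = [p.label for p in self.self_percepts]
            return f"I perceive {labels}, backed by {len(self.raw_stream)} raw samples."
        other = self.others[name]
        # Third-person report: only the abstract level is available.
        labels = [p.label for p in other.attributed]
        return f"{name} presumably perceives {labels}, but I have none of their raw samples."


agent = SelfModelingAgent()
agent.sense([0.2, 0.8, 0.4], "blue sky")
agent.model_other("Bob", "blue sky")
print(agent.report())       # grounded in the agent's own raw stream
print(agent.report("Bob"))  # attribution only, no low-level access
```

Obviously this is just data plumbing, but that’s the shape of the claim: the first-person and third-person reports come from the same machinery and differ only in which level of data happens to back them.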
To me it looks like the defining feature of the consciousness intuition is one’s certainty in having it, so I define consciousness as the only thing one can be certain about, and then I know I am conscious by executing “cogito ergo sum”.
I can imagine disabling specific features associated with awareness, starting with memory: seeing something without remembering it feels like seeing something and then forgetting about it. Usually when you don’t remember seeing something recent it means your perception wasn’t conscious, but you have certainly forgotten some conscious moments in the past.
Then I can imagine not having any thoughts. It is harder for long periods of time, but I can create short stretches of just seeing that, as far as I remember, are not associated with any thoughts.
At that point it becomes harder to describe this process as self-awareness. You could argue that if there is a representation of the lower level somewhere in the higher level, then it is still modeling. But there is no more reason to consider these levels parts of the same system than to consider any sender-receiver pair a self-modeling system.
I don’t know. It’s all ethics, so I’ll probably just check for some arbitrary similarity-to-human-mind metric.
we have reasons to expect such an agent to make any claim humans make
Depending on the detailed definitions of “reflect on itself” and “model itself perceiving”, I think you can make an agent that wouldn’t claim to be perfectly certain of its own consciousness. For example, I don’t see a reason why some simple Cartesian agent with direct read-only access to its own code would think in terms of consciousness.
But it will only have access to the lower-level data for such perceptions from its own sensory inputs, not others’, so it won’t be able to tell for sure what it “feels like” to them, because it won’t be getting their stream of low-level sensory inputs.
That’s nothing new; it’s the intuition that the Mary’s Room thought experiment is designed to address.