I can retort: Yes, what is that thing that experiences itself in humans? You don’t seem to have an answer.
I’m not sure what you’re asking for, when you ask what it “is”. I call that thing “consciousness”, but I don’t know how it works. I have no physical explanation for it. I have never seen such an explanation and I cannot even say what such an explanation might look like. No-one has an explanation, for all that some fancy that they do, or that they have a demonstration that there is no such thing. Nothing else we know about the universe leaves room for there to even be such a thing. But here I am, conscious anyway, knowing this from my own experience, from the very fact of having experience. This is no more circular than it would be for a robot with eyes to use them to report what it looks like, even if it knows nothing of how it was made or how it works.
Those who have such experience can judge this of themselves. Those (if any) who do not have this experience must find this talk incomprehensible — they cannot find the terrain I am speaking of on their own maps, and will “sanewash” my words by insisting that I must be talking about something else, such as my externally visible behaviour. Well, I am not. But none can show their inner experience to another. Each of us is shut up in an unbreakable box exactly the shape of ourselves. We can only speculate on what lies inside anyone else’s box on the basis of shared outwardly observable properties, properties which are not, however, the thing sought.
Sam Altman once mentioned a test: Don’t train an LLM (or other AI system) on any text about consciousness and see if the system will still report having inner experiences unprompted. I would predict a normal LLM would not. At least if we are careful to remove all implied consciousness, which excludes most texts by humans. But if we have a system that can interact with some environment, have some hidden state, observe some of its own hidden state, and can maybe interact with other such systems (or maybe humans, such as in a game), and train with self-play, then I wouldn’t be surprised if it would report inner experiences.
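A minimal sketch of the kind of system described above: an agent with persistent hidden state, partial observation of its own state, and an environment loop. All names and the toy dynamics are invented for illustration; a real experiment would train such a system (e.g. with self-play) rather than hand-code it.

```python
# Toy sketch of the proposed setup: an agent with persistent hidden state,
# partial observation of that state, and an environment it interacts with.
# All names and dynamics are invented for illustration; a real experiment
# would train such a system (e.g. with self-play), not hand-code it.

class Agent:
    def __init__(self):
        self.hidden = [0.0] * 8           # internal state, persists across steps

    def observe_self(self):
        return self.hidden[:2]            # the agent sees only part of its own state

    def act(self, observation):
        # Update the hidden state from the environment plus self-observation.
        signal = sum(observation) + sum(self.observe_self())
        self.hidden = [0.9 * h + 0.1 * signal for h in self.hidden]
        return 1 if signal > 0 else 0

def run_episode(steps=10):
    agent = Agent()
    env_state = 1.0
    for _ in range(steps):
        action = agent.act([env_state])
        env_state = env_state * 0.5 + action   # the environment reacts in turn
    return agent.hidden

final_hidden = run_episode()
```

The point of the sketch is only the structure: the agent’s behaviour depends on state that no outside observer sees directly, and the agent itself observes only part of it.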
Experiments along these lines would be worth doing, although assembling a corpus of text containing no examples of people talking about their inner worlds could be difficult.
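On the difficulty of assembling such a corpus: a naive keyword filter is easy to write, but it illustrates the problem rather than solving it, since implied consciousness carries no reliable surface vocabulary. The term list and the sample corpus below are invented for illustration.

```python
# Naive first pass at the corpus problem: drop any document that explicitly
# mentions consciousness or inner experience. The term list and the corpus
# are invented for illustration; this catches explicit discourse only, and
# does nothing about consciousness that is merely implied by the text.
import re

EXPLICIT_TERMS = [
    "conscious", "qualia", "inner experience", "subjective experience",
    "what it is like", "sentien", "self-aware", "phenomenal",
]
PATTERN = re.compile("|".join(re.escape(t) for t in EXPLICIT_TERMS), re.IGNORECASE)

def keep_document(text: str) -> bool:
    """Keep a document only if it contains none of the explicit terms."""
    return PATTERN.search(text) is None

corpus = [
    "The capital of France is Paris.",
    "I am conscious of a faint warmth.",
    "Nagel asked what it is like to be a bat.",
]
filtered = [doc for doc in corpus if keep_document(doc)]
# Only the first document survives the filter.
```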
Sam Altman once mentioned a test: Don’t train an LLM (or other AI system) on any text about consciousness and see if the system will still report having inner experiences unprompted. I would predict a normal LLM would not. At least if we are careful to remove all implied consciousness, which excludes most texts by humans.
I second this prediction, and would go further in saying that just removing explicit discourse about consciousness is sufficient.
With a sufficiently strong LLM, I think you could still elicit reports of inner dialogs if you prompt lightly, such as “put yourself into the shoes of...”. That’s because inner monologs are implied in many reasoning processes, even if they are not explicitly mentioned as such.
What do you mean by “I” here—what physical thing does the knowing?
As I said, I don’t know. Nobody does. But here I am, and here we are, fellow conscious beings. How things are is unaffected by whether we can explain how they are.
I mean, we know how knowing works—you do not experience knowing. For you to know how things are, you have to be connected to those things. And independently of consciousness we also know how “you” works—identity is just an ethical construct over something physical, like a brain. So you can at least imagine what an explanation of you knowing might look like, right?
No, I didn’t understand any of that. I don’t know what you mean by most of these keywords.
I’m just asking what you mean by “knowing” in “But here I am, conscious anyway, knowing this from my own experience, from the very fact of having experience.” If you don’t know what you mean, and nobody does, then why are you using “knowing”?
“Know” is an ordinary word of English that every English speaker knows (at least until they start philosophizing about it, but you can cultivate mystery about anything by staring hard enough that it disappears). I am using the word in this ordinary, everyday sense. I do not know what sort of answer you are looking for.
We have non-ordinary theories about many things that ordinary words are about, like light. What I want is for you to consider the implications of some proper theory of knowledge for your claim about knowing for a fact that you are conscious. Not “theory of knowledge” as some complicated philosophical construction—just non-controversial facts, like that you have to interact with something to know about it.
I have no theory to present. Theorising comes after. I know my own consciousness the way I know the sun, the way everyone has known the sun since before we knew how it shone: by our senses of sight and warmth for the sun, by our inner senses for consciousness.
You are asking me for a solution to the Hard Problem of consciousness. No-one has one, yet. That is what makes it Hard.
No, I’m asking you to constrain the space of solutions using the theory we have. For example, if you know your consciousness the way you know the sun’s warmth, then we know you can in principle be wrong about being conscious—because you can think that you are feeling warmth when your thoughts about it were actually generated by electrodes in your brain. Agree?
I can be mistaken about the cause of a sensation of warmth, but not about the fact of having such a sensation. In the case of consciousness, to speculate about some part not being what it seems is still to be conscious in making that speculation. There is no way to catch one’s own tail here.
I can be mistaken about the cause of a sensation of warmth, but not about the fact of having such a sensation.
That’s incorrect, unless you make it an axiom. You do at least agree that you can be mistaken about having had a sensation in the past? But that implies that a sensation must actually modify your memory for you to be right about it. You also obviously can be mistaken about which sensation you are having—you can initially think that you are seeing 0x0000ff, but after a second conclude that no, it’s actually 0x0000fe. And I’m not talking about the external causes of your sensations; I’m talking about you inspecting the sensations themselves.
In the case of consciousness, to speculate about some part not being what it seems is still to be conscious in making that speculation. There is no way to catch one’s own tail here.
You can speculate unconsciously. Like, if we isolate some part of your brain that makes you think “I can’t be wrong about being conscious, therefore I’m conscious”, put you in a coma, and run just that thought, would you say you are not mistaken in that moment, even though you are in a coma?
Such thought experiments are just a game of But What If, where the proposer’s beliefs are baked into the presuppositions. I don’t find them useful.
They are supposed to test the consistency of beliefs. I mean, if you think some part of the experiment is impossible, like separating your thoughts from your experiences, say so. I just want to know what your beliefs are.
And the part about memory or colors is not a thought experiment but just an observation about reality? You do agree about that part, that whatever sensation you name, you can be wrong about having it, right?
You do agree about that part, that whatever sensation you name, you can be wrong about having it, right?
Can you give me a concrete example of this? I can be wrong about what is happening to produce some sensation, but what in concrete terms would be an example of being wrong about having the sensation itself? No speculations about magic electrodes in the brain please.
I can be wrong about what is happening to produce some sensation
And about having had it in the past, and about which sensation you are having. To calibrate you on how unsurprising this should be.
Well, it’s hard to give impressive examples in normal conditions—it’s like asking to demonstrate a nuclear reaction with two sticks—the brain tries not to be wrong about stuff. Non-impressive examples include lying to yourself—deliberately thinking “I’m feeling warmth” and so on, when you know that you don’t. Or answering “Yes” to “Are you feeling warm?” when you are distracted, and then realizing that, no, you weren’t really tracking your feelings at that moment. But something persistent that survives you actually querying the relevant parts of the brain, and without externally spoofing this connection… Something like reading that you are supposed to feel warmth when looking at kittens, believing it, but not actually feeling it?
I guess I’ll go look at what people did with actual electrodes, if “you can misidentify a sensation, and you can be wrong about it being present at any point in time, but you can’t be wrong about having it now” still seems likely to you.
This does not describe any experience of mine.
I don’t think that will help.
Sure, I’m not saying you are usually wrong about your sensations, but it still means there are physical conditions on your thoughts being right—when you are right about your sensation, you are right because that sensation influenced your thoughts. Otherwise being wrong about a past sensation wouldn’t work. And if there are conditions, then they can be violated.
I agree! I don’t think consciousness can be further analyzed or broken down into constituent parts. It’s just a fundamental property of the universe. That doesn’t mean, however, that human consciousness has no explanation. (An explanation for human consciousness would be nice, because otherwise we have two kinds of things in the world, the physical and the mental, and neither would be explicable in terms of the other, except maybe via solipsism.) Human consciousness, along with everything physical, is well explained by Christian theism, according to which God created the material world, which is inert and wholly subject to Him, and then created mankind in His image. Man belongs both to the physical and the mental world and (s)he can be described as a consciousness made in the likeness of the Creator. Humans have/are a consciousness because God desired a personal relationship with them; for this reason they are not inert substances, but have free will.
Clearly, a process of experiencing is going on in humans. I don’t dispute that. But that is strictly a different argument.
No, the process of experiencing is the main thing that distinguishes the mental (consciousness) from the physical. In fact, one way to define the mental is this (R. Swinburne): mental events are those that cannot happen without being experienced/observed. Mental events are not fully determined by the physical events, e.g. in the physical world there are no colors, only wavelengths of light. It is only in our consciousness that wavelengths of light acquire the quality of being a certain color, and even that may differ between one individual and another (what you see as green I might see as red).