The brain and the physical world in general are sufficient to explain consciousness
The problem is that they aren’t, as Richard explains here.
1) In context, we’re talking about consciousness as in beings which are behaviorally aware, not about subjective-experience qualia, right?
I don’t understand the distinction you’re making there. As I use the words, consciousness is awareness, which is experience. These are just different ways of pointing at the same thing.
2) Richard says he doesn’t know, not that they aren’t.
The problem is worse than merely not knowing, in the sense in which we do not know a cure for cancer. We can imagine that eventually, we will discover enough about the mechanism of cancer to devise an intervention as effective as penicillin for bacterial infection. But we cannot see, in terms of what we understand of physics and the brain, how there could be any such thing as consciousness, despite people giving the matter a great deal of thought and experimental work. That’s a strong argument that they are not sufficient to explain it.
There’s a tautologous sense in which they are sufficient, by taking “the physical world” to include whatever the real explanation turns out to be, but in a discussion of this issue “the physical world” means the world as understood in terms of the sort of physical theories we currently have.
On the other hand, the very close observable connection between brain states and consciousness is a strong argument that the brain and the physical world are sufficient to explain consciousness.
In short, we are faced with a gigantic problem:
1. We are conscious.
2. Consciousness is very closely connected with the brain.
3. We cannot see how there could be any such phenomenon.
4. Go to 1.
A lot of discussion on the subject consists of people writing their conclusion in different words and using it as an argument for their conclusion.
I don’t understand the distinction you’re making there
“Behaviorally aware” is a term I’m using to talk about consciousness without invoking the “hard problem of consciousness”.
The brain is a structure which takes various inputs, does a bunch of operations on them, and produces various outputs. We can see how that works, and to some extent we can make machines that do the same.
When someone’s “unconscious”, it means they are no longer responding to the environment (taking inputs, producing outputs) appropriately. A “conscious” being is responding appropriately to the environment. Its various internal parts are interacting with each other in an organized way, and they are also interacting with the external world in an organized way. Behaviorally speaking, they are aware and reacting.
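To make that concrete, here is a toy Python sketch of the input→operations→output sense I mean (the stimuli, the responses, and the “appropriateness” criterion are all invented for illustration; this is not anyone’s actual model of the brain):

```python
# Toy illustration of "behaviorally aware": a system that maps
# environmental inputs to appropriate outputs. All names and values
# here are made up for the sake of the example.

def respond(stimulus):
    """The 'operations': map an input to a behavioral output."""
    behavior = {
        "heat": "withdraw",
        "food": "approach",
        "threat": "flee",
    }
    return behavior.get(stimulus, "do nothing")

def behaviorally_aware(agent, expectations):
    """'Conscious' in this thin sense: the agent responds appropriately
    to the environment. 'Unconscious': it fails to."""
    return all(agent(stimulus) == appropriate
               for stimulus, appropriate in expectations)

print(behaviorally_aware(respond, [("heat", "withdraw"),
                                   ("food", "approach")]))  # True
```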
None of the above has yet involved qualia, subjective experience, or the hard problem of consciousness. We are using the word “conscious” to mean two separate things—“aware-in-the-information-processing-sense” and “subjectively-experiencing-qualia”.
So you can talk about whether or not there are conscious (information-processing) structures which continue on after we die without tackling subjective experience or qualia. And when you talk about these structures, you still have to use parsimony...just because the information-processing structure is no longer in the “observable-physical world” or whatever doesn’t mean it’s not still a complex, rule-following mathematical structure.
Which is why I say, in the context of this conversation, there is no need to invoke The Hard Problem.
A lot of discussion on the subject consists of people writing their conclusion in different words and using it as an argument for their conclusion.
I think that’s because this is primarily a matter of definition. The “answer” to the “hard problem” is decidedly not empirical but purely philosophical. All non-empirical statements are tautological in nature. Arguments aiming to dissolve the question rely upon altering definitions and are necessarily tautological.
“Behaviorally aware” is a term I’m using to talk about consciousness without invoking the “hard problem of consciousness”.
The brain is a structure which takes various inputs, does a bunch of operations on them, and produces various outputs. We can see how that works, and to some extent we can make machines that do the same.
Why call this “consciousness”? Pretty much any machine that we make takes various inputs, does a bunch of operations to them, and produces various outputs. Is my computer (my real computer, not an imaginary one programmed with an imaginary AI) “behaviourally aware”? It even runs tests on itself and reports the results.
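Indeed, a “self-test” is nothing but more input→operations→output. A throwaway sketch (not what any real machine’s diagnostics actually look like):

```python
# A throwaway sketch of a machine "testing itself and reporting the
# results" -- inputs, operations, and outputs again, nothing more.

def self_test():
    checks = {
        "arithmetic": lambda: 2 + 2 == 4,
        "memory": lambda: list(range(3)) == [0, 1, 2],
    }
    return {name: ("pass" if check() else "fail")
            for name, check in checks.items()}

print(self_test())  # {'arithmetic': 'pass', 'memory': 'pass'}
```

By the definition offered above, this program qualifies as “behaviorally aware”.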
I don’t think it useful to broaden the definition of “conscious” to include things that are clearly not “conscious” (in the meaning it normally has). This doesn’t let you talk about consciousness without invoking the “hard problem of consciousness”. It lets you talk about something completely different that you are calling by the same name, without invoking the “hard problem of consciousness”.
A lot of discussion on the subject consists of people writing their conclusion in different words and using it as an argument for their conclusion.
I think that’s because this is primarily a matter of definition. The “answer” to the “hard problem” is decidedly not empirical but purely philosophical.
The problem is clearly an empirical one. We are aware, we seek an explanation, and we have not found one.
Is my computer (my real computer, not an imaginary one programmed with an imaginary AI) “behaviourally aware”? It even runs tests on itself and reports the results.
Yes, actually? To the extent that a worm is aware.
We don’t normally use the word “aware” to describe it, but what it’s doing seems very, very close to the things we do describe with the word awareness.
The problem is clearly an empirical one.
Then I’ve misunderstood your claim. The Hard Problem of Consciousness as popularly understood is that even if we understand all the mechanisms of thought to the point that we can construct brains ourselves, it won’t explain the subjective experience we have. We can understand the universe with mathematical precision down to the last photon and it still wouldn’t explain it. Seems like a non-empirical question to me. That’s why they call it subjective experience.
Is my computer (my real computer, not an imaginary one programmed with an imaginary AI) “behaviourally aware”? It even runs tests on itself and reports the results.
Yes, actually? To the extent that a worm is aware.
Is a worm aware? I don’t know. Is my computer aware? I see no reason to think so, not in the sense of “aware” that we’re discussing. Is a thermostat aware? That too has input and output. Is a rock aware? If the answer to all of these is “yes”, then that is not a useful sense of “aware”. It’s just another route for the mercury blob of thought to escape the finger of logic.
In other contexts, I have no problem with talking about a robot (a real robot really existing in the real world right now, such as Google’s driverless cars) as being “aware” of something, or for that matter my computer running self-tests, but I would also know that I was not imputing consciousness to the devices. If we’re going to talk about consciousness, that is what we must talk about, rather than broadening the word beyond what we are talking about and using the same word to talk about some other thing instead.
The Hard Problem of Consciousness as popularly understood is that even if we understand all the mechanisms of thought to the point that we can construct brains ourselves, it won’t explain the subjective experience we have.
I would say that’s one particular position, or class of positions, on the Hard Problem. The other class of positions are those that hold that if we understood all etc. etc. then it would explain the subjective experience we have.
The Hard Problem, to me, is that each of these positions is both ineluctable and untenable.
That’s why they call it subjective experience.
That we have subjective experience is an objective fact.
Is there no middle ground between “aware” and “not aware” then? This is like asking “Is a boulder a chair?”, “Is a tree stump a chair?”, “Is a stool a chair?” Words are fuzzy like that.
That we have subjective experience is an objective fact.
Rather, that you have it is an objective fact to you. The empirical questions involved here are applied to other minds, not your own.
Is there no middle ground between “aware” and “not aware” then? This is like asking “Is a boulder a chair?”, “Is a tree stump a chair?”, “Is a stool a chair?” Words are fuzzy like that.
Yes, there’s a whole range. Maybe a worm has a microconsciousness, or a nanoconsciousness, or maybe it has none at all, relative to a human. Or maybe it’s like asking about the temperature of a cluster of a few atoms. The concept is indeed fuzzy.
That we have subjective experience is an objective fact.
Rather, that you have it is an objective fact to you. The empirical questions involved here are applied to other minds, not your own.
Other people seem to be the same sorts of thing as me, and they report awareness of things. That’s good enough for me to believe them to have consciousness. When robots get good enough to not sound like spam when they pretend to be people, then that criterion would have to be reexamined.
As Scott Aaronson points out in his discussion of IIT, experiences of oneself and intuitions about other creatures based on their behaviour are all we have to go on at present. If an explanation of consciousness doesn’t more or less match up to those intuitions, it’s a problem for the explanation, not the intuitions.
The Hard Problem of Consciousness as popularly understood is that even if we understand all the mechanisms of thought to the point that we can construct brains ourselves, it won’t explain the subjective experience we have. We can understand the universe with mathematical precision down to the last photon and it still wouldn’t explain it. Seems like a non-empirical question to me.
The common meaning of “empirical” is something based on experience, so it seems that the Hard Problem of Consciousness fits that definition.
No? There is no subjective experience I can have that can distinguish you from a P-zombie (under the (wrong) assumption that the hard problem even makes sense and that there is a meaningful distinction to be made there).