Is my computer (my real computer, not an imaginary one programmed with an imaginary AI) “behaviourally aware”? It even runs tests on itself and reports the results.
Yes, actually? To the extent that a worm is aware.
We don’t normally use the word “aware” to describe it, but what it’s doing seems very, very close to the things we do describe with the word awareness.
The problem is clearly an empirical one.
Then I’ve misunderstood your claim. The Hard Problem of Consciousness as popularly understood is that even if we understand all the mechanisms of thought to the point that we can construct brains ourselves, it won’t explain the subjective experience we have. We can understand the universe with mathematical precision down to the last photon and it still wouldn’t explain it. Seems like a non-empirical question to me. That’s why they call it subjective experience.
Is my computer (my real computer, not an imaginary one programmed with an imaginary AI) “behaviourally aware”? It even runs tests on itself and reports the results.
Yes, actually? To the extent that a worm is aware.
Is a worm aware? I don’t know. Is my computer aware? I see no reason to think so, not in the sense of “aware” that we’re discussing. Is a thermostat aware? That too has input and output. Is a rock aware? If the answer to all of these is “yes”, then that is not a useful sense of “aware”. It’s just another route for the mercury blob of thought to escape the finger of logic.
In other contexts, I have no problem with talking about a robot (a real robot really existing in the real world right now, such as Google’s driverless cars) as being “aware” of something, or for that matter my computer running self-tests, but I would also know that I was not imputing consciousness to the devices. If we’re going to talk about consciousness, that is what we must talk about, instead of broadening the word beyond what we are talking about until it ends up referring to something else entirely.
The Hard Problem of Consciousness as popularly understood is that even if we understand all the mechanisms of thought to the point that we can construct brains ourselves, it won’t explain the subjective experience we have.
I would say that’s one particular position, or class of positions, on the Hard Problem. The other class of positions holds that if we understood all of the above, then it would explain the subjective experience we have.
The Hard Problem, to me, is that each of these positions is both ineluctable and untenable.
That’s why they call it subjective experience.
That we have subjective experience is an objective fact.
Is there no middle ground between “aware” and “not aware” then? This is like asking “Is a boulder a chair?”, “Is a tree stump a chair?”, “Is a stool a chair?” Words are fuzzy like that.
That we have subjective experience is an objective fact.
Rather, that you have it is an objective fact to you. The empirical questions here apply to other minds, not your own.
Is there no middle ground between “aware” and “not aware” then? This is like asking “Is a boulder a chair?”, “Is a tree stump a chair?”, “Is a stool a chair?” Words are fuzzy like that.
Yes, there’s a whole range. Maybe a worm has a microconsciousness, or a nanoconsciousness, or maybe it has none at all, relative to a human. Or maybe it’s like asking about the temperature of a cluster of a few atoms. The concept is indeed fuzzy.
That we have subjective experience is an objective fact.
Rather, that you have it is an objective fact to you. The empirical questions here apply to other minds, not your own.
Other people seem to be the same sort of thing as me, and they report awareness of things. That’s good enough for me to believe them to have consciousness. When robots get good enough not to sound like spam when they pretend to be people, that criterion will have to be reexamined.
As Scott Aaronson points out in his discussion of IIT, experiences of oneself and intuitions about other creatures based on their behaviour are all we have to go on at present. If an explanation of consciousness doesn’t more or less match up to those intuitions, it’s a problem for the explanation, not the intuitions.
The Hard Problem of Consciousness as popularly understood is that even if we understand all the mechanisms of thought to the point that we can construct brains ourselves, it won’t explain the subjective experience we have. We can understand the universe with mathematical precision down to the last photon and it still wouldn’t explain it. Seems like a non-empirical question to me.
The common meaning of “empirical” is something based on experience, so it seems that the Hard Problem of Consciousness fits that definition.
No? There is no subjective experience I can have that can distinguish you from a P-zombie (under the assumption, wrong in my view, that the Hard Problem even makes sense and that there is a meaningful distinction to be made there).