There are in fact some plausible scientific hypotheses that try to isolate particular physical states associated with “qualia”. I won’t give references to those here, obviously; and as I’m sure you’ll all agree, there is no reason to debate the truth of physicalism.
The mentioned approach is probably bogus, and seems to be a rip-off of Marvin Minsky’s older A-brain/B-brain ideas in “The Society of Mind”. I wish I were a “cognitive scientist”; it would be so much easier to publish!
However, needless to say, any such hypothesis must be founded on the correct philosophical explanation, which is pretty much neurophysiological identity theory. I don’t see a need to debate that, either. Debates about dualism etc. are for the weak-minded.
Furthermore, awareness is not quite the same thing as phenomenal consciousness, either. Awareness itself is a quite high cognitive function, but a system could have phenomenal consciousness without any discernible perceptual awareness. I suspect that these theories are not sufficiently informed by neuroscience and philosophy, but neither am I going to offer free clues about the solution to that :) For now, let us just say that it is entirely plausible that small nervous systems (like that of an insect), with no possibility of higher-order representations, may still have subjective experience. There is also a hint of anthropocentrism in the cited approach (we’re conscious because we can make those higher-order representations...), which I usually take as a sign that a theory of mind is false (similar errors are often seen on this site, as well).
Is Dennett to blame here? I hope not :/ Dennett has many excellent ideas, but his approach to consciousness may push people the wrong way (it has some flavor of behaviorism, which is not the most advanced view).
I was looking some things up after you mentioned this, and after reading a bit about it, qualia appears to be extremely similar to sensory memory.
(http://en.wikipedia.org/wiki/Qualia) (http://en.wikipedia.org/wiki/Sensory_memory)
These quotes about them from Wikipedia (with the names removed) seem to do a good job of describing the similarity:
‘The information represented in ### is the “raw data” which provides a snapshot of a person’s overall sensory experience.’ ‘Another way of defining ### is as “raw feels.” A raw feel is a perception in and of itself, considered entirely in isolation from any effect it might have on behavior and behavioral disposition.’
If you think about this in P-zombie terms, and someone attempts to say “A P-zombie is a person who has sensory memory, but not qualia,” I’m not sure what the difference would even be between that and a regular person. Either one can call on their sensory memory to say “I am experiencing redness right now, and now I am experiencing my experiences of redness,” and it would seem to be correct if that is what is in their sensory memory. There doesn’t appear to be anything left for qualia to explain, and it feels a lot like the question is dissolved at that point.
Is this approximately correct, or is there something else that qualia attempts to explain that sensory memory doesn’t that I’m not perceiving?
Subjective experience isn’t limited to sensory experience; a headache, or any feeling (like happiness) without any sensory reason, would also count. The idea is that you can trace most of those to electrical/biochemical states. Might be why some drugs can make you feel happy, and how anesthetics work!
I don’t know what phenomenal consciousness or subjective experience means. Could you please give a reference or explanation for these terms?
That is the hard problem, actually. If we could operationalize those terms, we would be able to study what they refer to with a reductionist lens. Until then, we’re kind of stuck using words to point at experience rather than at structural definitions.
In case you’re honestly not sure what everyone is talking about, though: There’s a difference between red as a certain frequency of light and red as experienced. Yes, we know there’s a strong connection between the two, and we can describe in some fair detail how a certain frequency of light stimulates optic nerves and is processed in the brain and so on. But it’s not at all clear how we get from those mechanical processes to the experience of red. We don’t experience red as a frequency; we experience it as red! That latter bit, the redness of red, is what people refer to as the qualium of red. (“Qualium” is the singular form of “qualia”.)
The reductionist thesis maintains that there must be a way to reduce the connection between physical mechanisms and qualia down to mechanisms. The hard problem of consciousness is that no one seems to be able to come up with even an in-principle plausible way of making that connection. In other words, everyone is confused but doesn’t have a clear way to even start dispelling the confusion. People like Daniel Dennett have made efforts, but many people question whether their efforts even count as progress.
So in short: “phenomenal consciousness” refers to the experience of qualia, although we don’t know what that means aside from pointing at the fact that everyone seems to experience qualia and that mechanisms affect but don’t seem to be qualia. “Subjective experience” usually refers to the same thing, but is often used to emphasize the fact that the experience of qualia seems to depend on the individual; e.g., you don’t experience my experiencing red the way I do and vice versa.
“Quale”, according to Wiktionary, the Stanford Encyclopedia of Philosophy, and my 1993 Random House unabridged dictionary (which gives the pronunciations KWAH-lee, KWAH-lay, and KWAY-lee).
Edit for completeness:
For the plural, “qualia”, the Random House gives the pronunciations KWAH-lee-uh and KWAY-lee-uh.
(The second edition OED pronounces “quale” as KWAY-lee but does not include “qualia” at all.)
Ah, I had been misinformed! I was informed it was the Latin neuter form, which uses “-um” for singular endings and “-a” for plural. Thanks for the correction!
Do you understand the difference between being asleep and being awake?
It seems like a subtle question whose point I could be missing, so I’ll explain my answer instead of just saying “yes”: when awake, someone is generally acting based on their sensory inputs and plan. When asleep, they are in one of several different sleep stages. I don’t know much about these different states, but I’ll say in general that I think they are still (using the HOT terminology) creature-conscious of sensory inputs (that’s how you can wake to the alarm clock), but they are not transitive-conscious (except in the cases when you incorporate these inputs into your dreams).
Let me also add that I’ve been re-reading the wiki and Stanford encyclopedia pages on all these terms and it makes just as much sense as last time I tried to understand what it’s all about (none). I’m a bit worried about people getting angry at me for not “getting it” as fast as they did but hopefully people on LW are more forgiving than what I’m used to.
Chimera writes: I’m a bit worried about people getting angry at me for not “getting it”
You are what? Worried? Worried is a conscious experience. A movie of you being worried does not show someone else being worried; it shows an unconscious image that looks like you being worried. An automaton built to duplicate your behavior when you are worried feels nothing; there is nothing (no consciousness) there to feel anything. But when you are worried, people know, and more importantly, you know how you feel and what it means to feel worried.
Imagine a world filled with Disney animatronic robots, all programmed to behave like real people in our world behave. Unless you think all those singing ghosts in the Haunted Mansion at Disneyland are feeling happy and scared, you can know what is being discussed here by imagining the difference between what images of people feel (nothing) and what actual people feel.
Good luck with this.
I would argue that if someone constructed an automaton that behaved exactly like I would in any given real-world situation—including novel situations, which Disney automatons can’t handle—then that automaton would, for all intents and purposes, be as conscious as I am. Indeed, this automaton would be a copy of me.
Let’s imagine that tonight, while you sleep, evil aliens replace everyone else in your home town (except for yourself, that is) with one of those perfect automatons. Would you be able to tell that this had occurred? If so, how would you determine this?
Perhaps I might not know the difference, but I am not the only observer here. Would the people replaced know the difference?
Fooling you by replacing me is one thing. Fooling me by replacing me is an entirely more difficult thing to do.
Well, presumably, the original people who were replaced would indeed know the difference, as they watch helplessly from within the bubbling storage tanks where the evil aliens / wizards / whomever had put them prior to replacing them with the automatons.
The more interesting question is, would the automatons believe that they were the originals? My claim is that, in order to emulate the originals perfectly with 100% accuracy—which is what this thought experiment requires—the automatons would have to believe that they were, in fact, the originals; and thus they would have to be conscious.
You could probably say, “ah-hah, sure the automatons may believe that they are the originals, but they’re wrong! The originals are back on the mothership inside the storage vats!” This doesn’t sound like a very fruitful objection to me, however, since it doesn’t help you prove that the automatons are not conscious—merely that they aren’t composed of the same atoms as some other conscious beings (the ones inside the vats). So what? Everyone is made of different atoms, you and I included.
You skated past the hard problem of consciousness right there. Why does “acting based on sensory inputs and a plan” correlate with “being awake”?
It’s just that the term “awake” is defined that way, or is that wrong?
It depends on whether your definition of “sensory input” and “acting on a plan” already require the concept of being conscious. Functionalists have definitions of those concepts which are just about relations of causality (sensory input = something outside the nervous system affects something inside the nervous system) and isomorphism (plan = combinatorial structure in nervous system with limited isomorphism to possible future world-states). And the point of the original question is that when you know you’re awake, it’s not because you know that your nervous system currently contains a combinatorial structure possessing certain isomorphisms to the world, that stands in an appropriate causal relation to the actions of your body. In fact, that is something that you deduce from (1) knowing that you are awake (2) having a functionalist theory of consciousness.
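The functionalist definitions quoted in the comment above can be caricatured in code. This is my own toy sketch, not anything from the thread: `ToyAgent`, `sense`, and `make_plan` are hypothetical names, and the point is only that a purely causal-functional description like this one says nothing, one way or the other, about experience.

```python
# Toy sketch (an editor's illustration, not from the thread) of the
# functionalist definitions above: "sensory input" as a bare causal
# relation, and a "plan" as an internal structure with a limited
# isomorphism to possible future world-states.

class ToyAgent:
    def __init__(self):
        self.internal_state = None
        self.plan = []  # intended sequence of future states

    def sense(self, outside_event):
        # "Sensory input": something outside the system causally
        # affects something inside it -- nothing more is assumed.
        self.internal_state = outside_event

    def make_plan(self, goal):
        # "Plan": an internal structure mirroring possible future
        # world-states (here, just a bare list of steps).
        self.plan = [self.internal_state, goal]
        return self.plan

agent = ToyAgent()
agent.sense("alarm ringing")
print(agent.make_plan("get up"))  # ['alarm ringing', 'get up']
```

Notice that nothing in this description mentions what it is like to be the agent, which is exactly the gap the comment is pointing at: knowing you are awake does not feel like knowing that such a structure exists in your nervous system.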
So, when you are awake (or “conscious”), how do you know that you are conscious?
When awake you are not necessarily transitively conscious of it. I think usually we are, but there are times when we ‘zone out’ and only have first-order thoughts.
OK. But it seems (according to your answer) that when I am awake and knowing it, it’s because I’m transitively conscious of something. Transitively conscious of what?
of being awake, as defined above: “I notice that I am taking audio-visual input from my environment and acting on it”. (The quote should be ‘noninferential, nondispositional and assertoric’ but I am not completely sure it is of that nature, if not, my mistake)
i.e. you know you’re awake when you have subjective experience of phenomenal consciousness. :-) Or something very close to this—that may not be the most nuanced, 100% correct way of stating it.
Would you say that only a functionalist can know whether they are awake, because only a functionalist knows what consciousness is? I presume not. But that means that it is possible to name and identify what consciousness is, and to say that I am awake and that I know it, in terms which do not presuppose functionalism. In this we have both the justification for the jargon terms “subjective experience” and “phenomenal consciousness”, and also the reason why the hard problem is a problem. If the existence of consciousness is not logically identical with the existence of a particular causal-functional system, then I can legitimately ask why the existence of that system leads to the existence of an accompanying conscious experience. And that “why” is the hard problem of consciousness.
Thanks for your comment but I don’t understand it.