Do you understand the difference between being asleep and being awake?
It seems like a subtle question whose point I could be missing, so I’ll explain my answer instead of just saying “yes”: when awake, someone is generally acting based on their sensory inputs and plans. When asleep, they are in one of several sleep stages. I don’t know much about these different stages, but in general I think a sleeper is still (using the HOT terminology) creature-conscious of sensory inputs (that’s how an alarm clock can wake you) but not transitively conscious of them (except when those inputs get incorporated into dreams).
Let me also add that I’ve been re-reading the Wikipedia and Stanford Encyclopedia of Philosophy pages on all these terms, and they make just as much sense as the last time I tried to understand what this is all about (none). I’m a bit worried about people getting angry at me for not “getting it” as fast as they did, but hopefully people on LW are more forgiving than what I’m used to.
Chimera writes: “I’m a bit worried about people getting angry at me for not ‘getting it’.”
You are what? Worried? Being worried is a conscious experience. A movie of you being worried does not show someone else being worried; it shows an unconscious image that looks like you being worried. An automaton built to duplicate your behavior when you are worried feels nothing; there is nothing there (no consciousness) to feel anything. But when you are worried, other people can see it, and more importantly, you know how you feel and what it means to feel worried.
Imagine a world filled with Disney animatronic robots, all programmed to behave the way real people in our world behave. Unless you think all those singing ghosts in the Haunted Mansion at Disneyland feel happy and scared, you can grasp what is being discussed here by imagining the difference between what images of people feel (nothing) and what actual people feel.
Good luck with this.
I would argue that if someone constructed an automaton that behaved exactly as I would in any given real-world situation (including novel situations, which Disney animatronics can’t handle), then that automaton would, for all intents and purposes, be as conscious as I am. Such an automaton would, in fact, be a copy of me.
Let’s imagine that tonight, while you sleep, evil aliens replace everyone else in your home town (except for yourself, that is) with one of those perfect automatons. Would you be able to tell that this had occurred? If so, how would you determine this?
I might not know the difference, but I am not the only observer here. Would the people who were replaced know the difference?
Fooling you by replacing me is one thing. Fooling me by replacing me is an entirely more difficult thing to do.
Well, presumably, the original people who were replaced would indeed know the difference, as they watched helplessly from within the bubbling storage tanks where the evil aliens / wizards / whoever had put them prior to replacing them with the automatons.
The more interesting question is: would the automatons believe that they were the originals? My claim is that, in order to emulate the originals perfectly, with 100% accuracy (which is what this thought experiment requires), the automatons would have to believe that they were, in fact, the originals; and thus they would have to be conscious.
You could probably say, “Ah-hah, sure, the automatons may believe that they are the originals, but they’re wrong! The originals are back on the mothership inside the storage vats!” This doesn’t sound like a very fruitful objection to me, however, since it doesn’t help you prove that the automatons are not conscious, merely that they aren’t composed of the same atoms as some other conscious beings (the ones inside the vats). So what? Everyone is made of different atoms, you and I included.
You skated past the hard problem of consciousness right there. Why does “acting based on sensory inputs and a plan” correlate with “being awake”?
Isn’t it just that the term “awake” is defined that way? Or is that wrong?
It depends on whether your definitions of “sensory input” and “acting on a plan” already require the concept of being conscious. Functionalists have definitions of those concepts which are just about relations of causality (sensory input = something outside the nervous system affects something inside the nervous system) and isomorphism (plan = a combinatorial structure in the nervous system with a limited isomorphism to possible future world-states). And the point of the original question is that when you know you’re awake, it’s not because you know that your nervous system currently contains a combinatorial structure possessing certain isomorphisms to the world and standing in an appropriate causal relation to the actions of your body. In fact, that is something you deduce from (1) knowing that you are awake and (2) having a functionalist theory of consciousness.
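To make those functionalist definitions concrete, here is a minimal toy sketch in Python (my own illustration, with invented names like `NervousSystem`, `sense`, and `form_plan`; it is not anyone’s actual theory). It models sensory input as a purely causal relation (an outside event changes inside state) and a plan as a stored structure mirroring possible future states:

```python
from dataclasses import dataclass, field

@dataclass
class NervousSystem:
    state: dict = field(default_factory=dict)  # internal state of the system
    plan: list = field(default_factory=list)   # structure mirroring future world-states

    def sense(self, event):
        # Sensory input, functionally defined: something outside the
        # system causally affects something inside it.
        self.state["last_input"] = event

    def form_plan(self, future_states):
        # Plan, functionally defined: a combinatorial structure with a
        # limited isomorphism to possible future world-states.
        self.plan = list(future_states)

    def act(self):
        # Acting on the plan: behavior caused by the stored structure.
        return self.plan.pop(0) if self.plan else None

agent = NervousSystem()
agent.sense("alarm clock")                  # world -> system (causal relation)
agent.form_plan(["get up", "make coffee"])  # structure isomorphic to future states
print(agent.act())                          # "get up", caused by the plan
```

Note that nothing in this sketch says anything about what it is like to be the agent; the functional relations are all there, and whether any experience accompanies them is exactly the point of the original question.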
So, when you are awake (or “conscious”), how do you know that you are conscious?
When awake, you are not necessarily transitively conscious of it. I think usually we are, but there are times when we ‘zone out’ and only have first-order thoughts.
OK. But it seems (according to your answer) that when I am awake and knowing it, it’s because I’m transitively conscious of something. Transitively conscious of what?
Of being awake, as defined above: “I notice that I am taking audio-visual input from my environment and acting on it”. (That noticing should be ‘noninferential, nondispositional and assertoric’, but I am not completely sure it is of that nature; if not, my mistake.)
I.e., you know you’re awake when you have a subjective experience of phenomenal consciousness. :-) Or something very close to this; that may not be the most nuanced, 100% correct way of stating it.
Would you say that only a functionalist can know whether they are awake, because only a functionalist knows what consciousness is? I presume not. But that means that it is possible to name and identify what consciousness is, and to say that I am awake and that I know it, in terms which do not presuppose functionalism. In this we have both the justification for the jargon terms “subjective experience” and “phenomenal consciousness”, and also the reason why the hard problem is a problem. If the existence of consciousness is not logically identical with the existence of a particular causal-functional system, then I can legitimately ask why the existence of that system leads to the existence of an accompanying conscious experience. And that “why” is the hard problem of consciousness.
Thanks for your comment but I don’t understand it.