Yesterday we saw how replacing terms with definitions could reveal the empirical unproductivity of the classical Aristotelian syllogism:
All [mortal, ~feathers, biped] are mortal; Socrates is a [mortal, ~feathers, biped]; Therefore Socrates is mortal.
But the principle applies much more broadly:
Albert: “A tree falling in a deserted forest makes a sound.” Barry: “A tree falling in a deserted forest does not make a sound.”
Clearly, since one says “sound” and one says “~sound”, we must have a contradiction, right? But suppose that they both dereference their pointers before speaking:
Albert: “A tree falling in a deserted forest matches [membership test: this event generates acoustic vibrations].” Barry: “A tree falling in a deserted forest does not match [membership test: this event generates auditory experiences].”
Now there is no longer an apparent collision—all they had to do was prohibit themselves from using the word sound. If “acoustic vibrations” came into dispute, we would just play Taboo again and say “pressure waves in a material medium”; if necessary we would play Taboo again on the word “wave” and replace it with the wave equation. (Play Taboo on “auditory experience” and you get “That form of sensory processing, within the human brain, which takes as input a linear time series of frequency mixes.”)
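For concreteness, the ‘wave equation’ mentioned above is, in one standard form (a disturbance $u$ propagating at speed $c$; this particular form is my illustration, not part of the original essay):

$$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$$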
To categorize is to throw away information. If you’re told that a falling tree makes a “sound”, you don’t know what the actual sound is; you haven’t actually heard the tree falling. If a coin lands “heads”, you don’t know its radial orientation. A blue egg-shaped thing may be a “blegg”, but what if the exact egg shape varies, or the exact shade of blue? You want to use categories to throw away irrelevant information, to sift gold from dust, but often the standard categorization ends up throwing out relevant information too. And when you end up in that sort of mental trouble, the first and most obvious solution is to play Taboo.
For example: “Play Taboo” is itself a leaky generalization. Hasbro’s version is not the rationalist version; they only list five additional banned words on the card, and that’s not nearly enough coverage to exclude thinking in familiar old words. What rationalists do would count as playing Taboo—it would match against the “play Taboo” concept—but not everything that counts as playing Taboo works to force original seeing. If you just think “play Taboo to force original seeing”, you’ll start thinking that anything that counts as playing Taboo must count as original seeing.
The rationalist version isn’t a game, which means that you can’t win by trying to be clever and stretching the rules. You have to play Taboo with a voluntary handicap: Stop yourself from using synonyms that aren’t on the card. You also have to stop yourself from inventing a new simple word or phrase that functions as an equivalent mental handle to the old one. You are trying to zoom in on your map, not rename the cities; dereference the pointer, not allocate a new pointer; see the events as they happen, not rewrite the cliche in a different wording.[emphasis added]
By visualizing the problem in more detail, you can see the lost purpose: Exactly what do you do when you “play Taboo”? What purpose does each and every part serve?
I will take RobinZ’s good advice to not talk about qualia (for some time, anyway). It is a philosophical term. Consciousness is a different matter: it needs to be discussed and is too important to put in the ‘taboo’ bin. We need consciousness to remember, to learn, and to do the prediction involved in controlling movement. It is a scientific term as well as a philosophical one and an ordinary everyday one.
Controlled movement does not require consciousness, memory, learning, or prediction. This (simulated) machine has none of those things, yet it walks over uneven terrain and searches for (simulated) food. What controlled movement requires is control.
Memory, learning, and prediction do not require consciousness. Mundane machines and software exist that do all of these things without anyone attributing consciousness to them.
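To illustrate that claim, here is a minimal sketch of feedback control in Python. It tracks a target using only the current error: no memory, no learning, no prediction. All names and constants are illustrative, not taken from any real control system.

```python
# A proportional controller: each command depends only on the
# current error, so there is no memory, learning, or prediction.

def p_controller(target, position, gain=0.5):
    """Return a velocity command computed from the current error alone."""
    return gain * (target - position)

# Simulate a point being driven toward a target at 10.0.
position = 0.0
for _ in range(50):
    position += p_controller(10.0, position)

print(round(position, 3))  # converges on 10.0
```

Real motor control is far richer than this, but the sketch shows that ‘controlled movement’ in the bare sense requires only a control loop.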
People may think they are conscious of how they move, but they are not. Unless you have studied human physiology, it is unlikely that you can say which of your muscles are exerted in performing any particular movement. People are conscious of muscular action only at a rather high level of abstraction: “pick up a cup” rather than “activate the abductor pollicis brevis”. Most of the learning that happens when you learn Tai Chi, yoga, dance, or martial arts, is not accessible to consciousness. There are exercises that you can tell people exactly how to do, and demonstrate in front of them, and yet they will go wrong the first time they try. Then the instructor gives the class a metaphor for the required movement, involving, say, an imaginary lead-weighted diving boot on one foot, and suddenly the students get it. Where is consciousness in that process?
I believe there is scientific agreement that an event can be stored in episodic memory only if the event is consciously experienced. No conscious experience = no episodic memory.
A certain type of learning depends on episodic memory, and so on conscious experience.
The fine control of movement depends on the comparison between expectation and result, i.e. error signals. As it appears to be consciousness that gives access across the brain to a near-future prediction, it is needed for fine control. Prediction is only valuable if it is accessible.
I am not saying that memory, learning or fine motor control is ‘done’ in consciousness (or even that in other systems, such as robots, there would not be other ways to do these things). I am only saying that the science implies that in the human brain we need to have conscious experience in order for these processes to work properly.
Yes, consciousness is certainly involved in the way we do some of those things, but I don’t see that as evidence that that is why we have consciousness. Consciousness is involved in many things: modelling other people, solving problems, imagining anticipated situations, and so on. But how did it come about and why?
FWIW, I don’t think anyone has come close to explaining consciousness yet. Every attempt ends up pointing to some physical phenomenon, demonstrated or hypothesised, and saying “that’s consciousness”. But the most they explain is people’s reports of being conscious, not the experience that they are reports of. I don’t have an explanation for the experience either. I don’t even have an idea of what an explanation would look like.
In terms of Eliezer’s metaphor of the Explain/Worship/Ignore dialog box, I don’t worship the ineffable mystery, nor ignore the question by declaring it solved, but I don’t know how to hit the Explain button either. For the time being the dialog will just have to float there unanswered.
Concurred. I want to point out that Julian Jaynes presents a lot of evidence that consciousness plays no role in these and many other processes in his book The Origin of Consciousness in the Breakdown of the Bicameral Mind. (And yes, I know his general thesis is kind of flaky, but he handles this very narrow topic well.)
One of his examples is how people, under experimental conditions and without even knowing it, adjust muscles that can’t be consciously controlled, in order to optimally contain a source of irritation. They never report any conscious recognition of the correlation between that muscle’s flexing and the irritation (which was ensured to exist by the experiment, and which irritation they were aware of).
It may in fact be possible to drive while unconscious, though not very well.
I’m fairly sure a friend of a friend was on a similar insomnia drug and held a long, apparently-coherent phone conversation with her sister, to whom she had not spoken in some time. And then woke up later and thought, “I should call my sister—we haven’t spoken in a long time.”
Let me just say I find the stories more plausible than the newswriters seem to.
I apologize—what I meant wasn’t “drop the subject of consciousness”, but “don’t use the specific word ‘consciousness’”:
Besides the original essay linked and quoted above, there’s elaboration on the value of the exercise here.
Edit: For example, were I to begin to contribute to this conversation, I would probably talk about self-awareness, the internal trace of successive experiences attended to, and the narrative chains of internal monologue or dialogue that we observe and recall on introspection—not “consciousness”.
The “tree falling in a forest” question was posed before people knew that sound was caused by vibrations, or even that sound was a physical phenomenon. It wasn’t asking the same question it’s asking now. It may have been intended to ask, “Is sound a physical phenomenon?”
Confession: I always assumed (until EY’s article, believe it or not!) that the “tree falling in a forest …” philosophical dilemma was asking whether the tree makes vibrations.
That is, I thought the issue it’s trying to address is, “If nothing is around to verify the vibrations, how do you know the vibrations really happen in that circumstance? What keeps you from believing that whenever nobody’s around [nor e.g. any sensor], the vibrations just don’t happen?”
(In yet other words, a question about belief in the implied invisible, or inaudible as the case may be.)
Over what period, exactly, was the question widely accepted to be making a point about the difference between vibrations and auditory experiences, as Eliezer seemed to imply is the common understanding?
I’ve encountered people asking the question with both meanings or sometimes a combination of meanings. Like many of these questions of a similar form, the questions are often so muddled as to be close to useless.
I don’t think that’s correct. The notion that sound is vibrations in air dates back to at least Aristotle. See for example here.
I don’t know, but Aristotle’s writings were not well-known in Europe from the 6th through the end of the 12th centuries. They were re-introduced via the Crusades.
By the way, the modern phrasing of the dilemma is, “If people are in a multiplayer game on Xbox Live, and everyone’s headset is muted, does a whiny 11-year-old still complain about lag?”
Do you have a citation for that? The earliest reference I see is Berkeley.
I don’t. Sorry, I thought the question was medieval, but now can’t remember why I thought that. Probably just from giving the question-asker the benefit of the doubt. If the original asker was Berkeley, then it was just a stupid question.
I take your point, I really do. I will for example avoid ‘qualia’ as a word and use other terms.
But here is my problem. I have been following what the scientists who research it have been saying about consciousness for some years. They call it consciousness. They call it that because the people they know and I know and you know call it that. Now you are nicely suggesting that I call it something else, but there is no other simple word or phrase that describes consciousness.
When I wrote a post I defined as well as I could how I was using the word. I could invent a word like ‘xness’, but I would have to keep saying that ‘xness’ is like consciousness in everything but name. And it would not accomplish much, because it is not the word, or even particular philosophies, that is the source of the problem. It is the how and where and why and when of the brain producing consciousness. If we disagreed about what an electron was, it would not help to change the name. In the same way, if we disagree about what consciousness is, it is not a semantic problem. We know what we are talking about as well as we would if we could point at it; we simply have different views about its nature.
That’s not quite what I meant either (although I actually approve of avoiding the term “qualia”, full stop):
The specific advantage I see of cracking open the black-box of “consciousness” in this conversation is that I expect it to be the fastest way to one of the following useful outcomes:
“But you haven’t talked about fribblety chacocoa opoloba.” “I haven’t talked about what? I don’t think I’ve ever actually observed that.”
“On page 8675309 of I Wrote “Consciousness Explained” Twenty Years Ago Haven’t You Gotten It By Now by Daniel Dennett, he says that fribblety chacocoa opoloba doesn’t exist—here’s the quote.” “Oh, I see the confusion! No, he’s talking about albittiver rikvotil, as you can see from this context, that quote, and this journal paper.”
“On page 8675309 of I Wrote “Consciousness Explained” Twenty Years Ago Haven’t You Gotten It By Now by Daniel Dennett, he says that fribblety chacocoa opoloba doesn’t exist—here’s the quote.” “But that doesn’t exist, according to the four experiments described in these three research papers, and doesn’t have to exist by this philosophical argument.”
Edit: Also, there’s no requirement that you actually solve the problem of what it is—a sufficiently specific and detailed map leading to the thing to be observed suffices.
Ok, it’s my bedtime here in France. I will sleep on this and maybe I can be more positive in the morning. But the likelihood is that I will go back to the occasional lurk.
Your comment does not make a great deal of sense to me, no one appears to be interested in what I am interested in (contrary to what I thought previously), the horrid disagreement about Alicorn’s posting is disturbing, and so was the discussion of asking for a drink. I was not upset at the time by the remarks about my spelling, and I would correct them. But now I think: is there any latitude for a dyslexic? I thought the site was for discussing ideas, not everything but.
Good night.
Good night.
I apologize for making a big deal of this, but my main point is that I want to know I’m talking about the thing you’re interested in, not about something else. I wasn’t even really trying to address what you said—just to make some suggestions to reduce the confusion floating around.
Have a good night—hope I can catch you on the flip side.
Apology accepted. You are not the problem—I would not go away because of one conversation.
I have decided that I will take a less active part in LW for a while. It is very time consuming and I have a lot of actually productive reading and blogging to do. By productive I mean things that add to my understanding. I will look to see what has been posted and will probably read the odd one. I may even write a small comment from time to time. The posting that I was preparing for LW will be abandoned. I would put in too much effort for too little serious productive useful discussion. Better to put the effort elsewhere.
I think what you’re talking about needs a different name. ‘Attention’ might be an informal one and ‘executive control’ a more formal one, or just ‘planning’, if we’re talking AI instead of psychology. ‘Reflection’, if we’re talking about metacognition.
Like RichardKennaway said, the tasks you describe sound like things that existing narrow AI robotic systems can already do, yet it sounds quite odd to describe current-gen robots as conscious. Talking about consciousness here is confusing at least to me.
Outside qualia and Chalmers’ hard problem of consciousness, is the term ‘consciousness’ really necessary for anything that can’t be expressed in more precise terms?
Do we? That would be good news; but I doubt it’s true.
I think I answered this in another sub-thread of this discussion. But, here it is again in outline.
We only remember in episodic memory events that we had conscious awareness of. Some types of learning rely on episodic memory. The remembering and the learning are not necessarily, not even probably, part of the conscious process, but without consciousness we do not have them. The prediction is part of the monitoring and correcting of on-going motor actions. In order to create the prediction and to use it, various parts of the cortex doing different things have to have access to the prediction. This wide-ranging access seems to be one of the hallmarks of consciousness. So does the slight forward projection of the actual conscious awareness—that is, there is a possibility that it is the actual prediction as well as the mode of access.
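The comparator idea in this comment (a prediction compared with the actual result, yielding an error signal that drives correction) can be put in toy form. The linear ‘plant’ and every name below are illustrative assumptions, not a model of the brain:

```python
# Toy comparator: a forward model predicts the outcome of a motor
# command; the mismatch with the actual outcome is the error signal,
# which is then used to correct the internal model.

def forward_model(command, gain_estimate):
    """Predicted sensory outcome of a motor command."""
    return gain_estimate * command

def plant(command, true_gain=0.8):
    """Actual outcome, unknown to the controller."""
    return true_gain * command

gain_estimate = 1.0
for _ in range(100):
    predicted = forward_model(1.0, gain_estimate)
    actual = plant(1.0)
    error = actual - predicted       # the error signal
    gain_estimate += 0.1 * error     # correction driven by the error

print(round(gain_estimate, 3))  # approaches the true gain, 0.8
```

Nothing here settles whether such comparisons require consciousness; it only makes the error-signal mechanism concrete.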
I hope this answers the question of why I said what I said. I don’t wish to continue this discussion at the present time. As I told RobinZ, I currently have other things to do with my time and find LW has been going off-topic in ways that I don’t find useful. However, you have always been willing to seriously debate and stay on topic, so I have answered your comment. I will probably return to LW at some time. Until then, good luck.
Thanks. I know you don’t want to continue discussion; but I note, for others reading this, that in this explanation, you’re using the word “conscious” to mean “at the center of attention”. This is not the same question I’m asking, which is about “consciousness” as “the experience of qualia”.
I made my comment because it’s very important to know whether experiencing qualia is efficient. Is there any reason to expect that future AIs will have qualia; or can they do what they want to do just as well (maybe better) by not having that feature? If experiencing qualia does not confer an advantage to an AI, then we’re headed for a universe devoid of qualia. That’s a big loss for the universe.
Avoiding that common qualia/attention confusion is reason enough not to taboo “qualia”, which is more precise than “consciousness”.
You seem to be missing the point of what it means to taboo a word. In LessWrong speak, this means to expand out what you mean by the term rather than just use the term itself. So for example, if we tabooed “prime number” we’d need to say instead something like “an integer greater than one that has no positive, non-trivial divisors.” This sort of step is very important when discussing something like consciousness, because so many people have different ideas about what the term means.
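To make that concrete, the expanded ‘prime number’ definition can be written out as a membership test that never uses the tabooed label. This is just an illustration; the function name is mine.

```python
# The taboo move as code: the label "prime" never appears, only the
# expanded definition -- an integer greater than one with no
# positive, non-trivial divisors.

def matches_expanded_definition(n):
    """True iff n is an integer > 1 with no divisors in 2..n-1."""
    if n <= 1:
        return False
    return all(n % d != 0 for d in range(2, n))

print([n for n in range(2, 20) if matches_expanded_definition(n)])
# prints [2, 3, 5, 7, 11, 13, 17, 19]
```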