I’m starting Dennett’s “Consciousness Explained”. Dennett says, in the introduction, that he believes he has solved the problem of consciousness. Since several people have referred to his work here with approval, I’m going to give it a go. I’m going to post chapter summaries as I read, for my own selfish benefit, so that you can point out when you disagree with my understanding of it. “D” will stand for Dennett.
If you loathe the C-word, just stop now. That’s what the convenient break just below is for. You are responsible for your own wasted time if you proceed.
Chpt. 1: Prelude: How are Hallucinations Possible?
D describes the brain in a vat, and asks how we can know we aren’t brains in vats. This dismays me: it is one of those questions that distracts people trying to talk about consciousness, yet has nothing to do with the difficult problems of consciousness.
Dennett states, without presenting a single number, that the bandwidth needed to reproduce our sensory experience would be so great that doing so is impossible (his actual word), and that this proves we are not brains in vats. Sigh.
He then asks how hallucinations are possible: “How on earth can a single brain do what teams of scientists and computer animators would find to be almost impossible?” Sigh again. This surprises Dennett because he believes he has just established that the bandwidth needs of consciousness are too great for any computer to supply; yet the brain sometimes (during hallucinations) supplies nearly that much bandwidth itself. D has apparently forgotten that the brain supplies exactly that bandwidth of information to us all the time, by definition.
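Since D gives no numbers, here is a rough back-of-envelope sketch of what a figure might look like. The parameters are my own illustrative assumptions (the commonly cited order of magnitude of about a million optic-nerve fibers per eye, and a guessed per-fiber rate), not anything from the book; the point is only that “impossible” is a quantitative claim that wants an actual estimate.

```python
# Back-of-envelope estimate of visual "bandwidth" into the brain.
# All numbers below are illustrative assumptions, not measurements from the book.

optic_nerve_fibers = 1_000_000   # assumed: roughly 1e6 axons per optic nerve
bits_per_fiber_per_sec = 10      # assumed: order-of-magnitude per-fiber rate
eyes = 2

visual_bits_per_sec = optic_nerve_fibers * bits_per_fiber_per_sec * eyes
print(f"Rough visual input: {visual_bits_per_sec / 1e6:.0f} Mbit/s")
# Roughly 20 Mbit/s under these assumptions: large, but hardly "impossible"
# for an envatting computer, and in any case a bandwidth the brain itself
# handles continuously.
```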
D recounts Descartes’ remarkably prescient discussion of the bellpull as an analogy for how the brain could send us phantom misinformation, but dismisses it, saying, “there is no way the brain as illusionist could store and manipulate enough false information to fool an inquiring mind.” Sigh. Now not only consciousness, but also dreams, are impossible. However, D later returns to dreams, and is aware that they exist and are hallucinations; so one of us is misunderstanding this section.
On p. 12 he suggests something interesting: perception is driven both bottom-up (from the senses) and top-down (from our expectations). A hallucination could happen when the bottom-up channel is cut off. D doesn’t get into data compression at all, but I think a better way to phrase this is that, given arbitrary bottom-up data, the mind decompresses sensory input into the most likely interpretation, given both the data and its knowledge about the world. Internally, we should expect high-bandwidth sensory data to be summarized somewhere in a compressed form. Compressed data necessarily looks more random than the uncompressed original. This means that, somewhere inside the mind, distinguishing true sensory data from random sensory noise should be harder than naive introspection suggests. D suggests an important role for an adjustable sensitivity threshold for accepting or rejecting candidate interpretations of sense data.
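To make this framing concrete, here is a minimal toy sketch (entirely my own construction, not anything D proposes): candidate interpretations are scored by a prior (top-down expectation) plus their fit to whatever bottom-up evidence arrives, and an adjustable acceptance threshold decides whether any interpretation gets “perceived.” Cut the bottom-up channel and relax the threshold, and the prior alone can push an interpretation over the line: a hallucination.

```python
# Toy model of top-down/bottom-up perception with an adjustable threshold.
# Entirely illustrative; the priors, likelihoods, and thresholds are made up.

import math

# Top-down expectations: log-prior for each candidate interpretation.
log_prior = {"face": math.log(0.6), "tree": math.log(0.3), "demon": math.log(0.1)}

def log_likelihood(interpretation, evidence):
    """Bottom-up fit: how well the (possibly absent) sense data supports a hypothesis."""
    if evidence is None:          # bottom-up channel cut off
        return 0.0                # no data, so no constraint from the senses
    return math.log(evidence.get(interpretation, 1e-6))

def perceive(evidence, threshold):
    """Accept the best-scoring interpretation only if it clears the sensitivity threshold."""
    scores = {h: log_prior[h] + log_likelihood(h, evidence) for h in log_prior}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# Normal perception: strong bottom-up evidence for "tree" wins despite the prior.
print(perceive({"tree": 0.9, "face": 0.05, "demon": 0.05}, threshold=math.log(0.2)))  # tree

# Hallucination: no sense data, but a lax threshold lets the prior win on its own.
print(perceive(None, threshold=math.log(0.5)))  # face
```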
D dismisses Freud’s ideas about dreams—that they are stories about our current concerns, hidden under symbolism in order to sneak past our internal censors—by observing that we should not posit homunculi inside our brains who are smarter than we are.
[In summary, this chapter contained some bone-headed howlers, and some interesting things; but on the whole, it makes me doubt that D is going to address the problem of consciousness. He seems, instead, on a trajectory to try to explain how a brain can produce intelligent action. It sounds like he plans to talk about the architecture of human intelligence, although he does promise to address qualia in part III.
Repeatedly on LW, I’ve seen one person (frequently Mitchell Porter) raise the problem of qualia; and seen otherwise-intelligent people reply by saying science has got it covered, consciousness is a property of physical systems, nothing to worry about. For some reason, a lot of very bright people cannot see that consciousness is a big, strange problem. Not intelligence, not even assigning meaning to representations, but consciousness. It is a different problem. (A complete explanation of how intelligence and symbol-grounding take place in humans might concomitantly explain consciousness; it does not follow, as most people seem to think it does, that demonstrating a way to account for non-human intelligence and symbol-grounding therefore accounts for consciousness.)
Part of the problem is their theistic opponents, who hopelessly muddle intelligence, consciousness, and religion: “A computer can never write a symphony. Therefore consciousness is metaphysical; therefore I have a soul; therefore there is life after death.” I think this line of reasoning has been presented to us all so often that many of us have cached it, to the extent that it injects itself into our own reasoning. People on LW who try to elucidate the problem of qualia inevitably get dismissed as quasi-theists, because, historically, all of the people saying things that sound similar were theists.
At this point, I suspect that Dennett has contributed to this confusion, by writing a book about intelligence and claiming not just that it’s about consciousness, but that it has solved the problem. I shall see.]