Suggestion: stop reading, and, before you continue, write (perhaps in a comment to Mitchell’s post) what you take the problems of consciousness to be; what work you expect Dennett to achieve if he is to deserve the book’s title.
Or perhaps that could be a top-level “poll” type post, asking LW readers to post their framing of the issues in consciousness; we would then have some immediate data on your hypothesis that “a lot of very bright people cannot see that consciousness is a big, strange problem”.
You appear to have missed the whole point of section 3, which despite its title isn’t about Freud but is a setup for later exposition of Dennett’s “multiple drafts” model. So far you are sounding like a reader who is expecting to be disappointed, and I feel concerned that your expectations will color your reading too much, which would be a shame, because it’s a great book.
There’s one step in the book that I come back to over and over again, that I have so far never got a hard-problemer to directly address: the idea of heterophenomenology. If you follow this advice, then when you come to write your comment on what the problem of consciousness is, consider whether you have to directly and explicitly appeal to a shared experience of consciousness, or whether you can do it by referring to what we say about consciousness, which is observable from the outside.
My understanding of Dennett’s heterophenomenology has benefited from comparing it with Pickering/Latour and the STS folks’ approach, which rests on reconciling two positions that initially seem at odds with each other:
we commit to taking seriously the first-person accounts of, respectively, “what it is like to be a conscious person” and “what it is like to advance scientific knowledge”
we decline in both cases to take these accounts at face value; that is, we assert that our position as outside observers is no less privileged than our “inside” interlocutors’; we seek to explain why people say what they say about how they come to have certain forms of knowledge, without assuming their reports are infallible.
When investigating something like inattentional blindness, this goes roughly as follows: we show a subject a short video of basketball players in the street, after giving them brief instructions. Afterwards we ask them, “What did you consciously see for the past few minutes?” They are likely to say that they were consciously observing a street scene during that time. But it turns out that we, the investigators, know something about the video which leads us to doubt the subject’s report about what they were conscious of. (I don’t want to spoil anything for those who haven’t seen the video yet, but I assume many people know what I’m talking about. If you don’t, go see the video.)
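Very roughly — with invented names, and purely as an illustration — the shape of that comparison might be sketched like this: the report is treated as data about the subject, to be checked against facts the investigators know independently.

```python
# A toy sketch (field names invented) of the heterophenomenological move:
# treat the subject's report as data, then compare it with facts about
# the stimulus that the investigators know independently.

subject_report = {
    "saw_street_scene": True,
    "noticed_unexpected_event": False,  # what the subject says happened
}
stimulus_facts = {
    "unexpected_event_present": True,   # what the investigators know
}

# The report is evidence about the subject, not an infallible record
# of what they were conscious of.
if stimulus_facts["unexpected_event_present"] and not subject_report["noticed_unexpected_event"]:
    print("Report conflicts with stimulus facts: a datum to be explained.")
```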
As far as I can tell, a large number of “problems of consciousness” fall into this category; people’s self-reports of what it is like to be a conscious person conflict with what various clever experiments indicate about what it actually is like to be a conscious person. They also conflict with our intuitions obtained from physical theories.
For instance, we can poll people on whether an atom-for-atom copy of themselves would be “the same person”, and notice that most people say “no way, because there can only be one of me”. To explain consciousness is to explain why people feel that way, without assuming that what is to be explained is the mysterious property of “continuity” that consciousness has, which results in its being impossible to conceive of a copy of me being the “same consciousness” as me.
Our explanations of consciousness should predict what people will say about what it feels like to be a conscious person.
For me the “hard, scary” problems would include things like whether something can be intelligent without being conscious, and vice versa. Before coming across some of the Friendly AI writings on this site, I had assumed that any intelligence also had to be conscious. I also assumed that beings without language must have a much lower degree of consciousness.
Heterophenomenology is neat, tidy, and wonderful for doing science about a whole bunch of questions about inner sensation. It’s great as far as it goes. Some of us just don’t think it goes to the finish line, and are deeply dissatisfied with the attitude that seems to suggest that it is our “scientific duty” to abandon the question of how the brain generates characteristic inner sensations, on the grounds that we can’t directly access such things from the outside. I believe that future discovery and insight will show that view (assuming I am even ascribing it correctly in the first place) to be short-sighted.
Heterophenomenology does tackle that question, just at one remove—it attempts to account for your reports of those inner sensations.
Again, that is useful in its own right, but the indirection changes the question, and so it is not an answer. Accounting for reports of sensations is not conceptually problematic. It’s easy to imagine making the same kinds of explanations for the utterances of a simpler, unconscious machine.
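To make that concrete, here is a minimal sketch — a hypothetical toy, not anyone’s actual model — of a machine whose every utterance is fully accounted for by its mechanism:

```python
# A minimal sketch of a trivially unconscious machine whose every
# utterance has a complete mechanical explanation: the lookup table.

RESPONSES = {
    "what do you see?": "I see a street scene.",
    "are you in pain?": "No, nothing hurts.",
    # This entry exists only because we typed it in here.
    "do you have inner experience?": "Yes, a tangible, first-person, inner experience.",
}

def utter(prompt: str) -> str:
    """Return the canned response; the explanation of any utterance this
    machine makes is exhausted by the table lookup above."""
    return RESPONSES.get(prompt.strip().lower(), "I don't know.")

if __name__ == "__main__":
    for question in RESPONSES:
        print(question, "->", utter(question))
```

The third entry, of course, only produces that utterance because we typed it in — which is the “trickery” caveat raised below.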
Except that there are certain utterances I would not expect the machine to make. E.g., assuming it was not designed with trickery in mind, I would not expect the machine to insist that it had a tangible, first-person, inner experience. Explaining the utterances does not explain the actual mechanism that I’m talking about when I insist that I have that experience.
I’m not interested in why I say it, I’m interested in what it is that the brain is doing to produce that experience. If it’s a bidirectional feedback loop involving the brain, my body, and the environment (or whatever), then I want to know that. And I want to know whether one can construct such a feedback loop and have it experience the same effect.
Please note that I am not making the standard zombie argument. I’m suggesting that humans and animals must have an extra, physical—not metaphysical, not extra-physical—component that produces our first-person experience. I want to know how that works, and I currently do not accept that that question is either meaningless or unanswerable in principle.
Except that there are certain utterances I would not expect the machine to make. E.g., assuming it was not designed with trickery in mind, I would not expect the machine to insist that it had a tangible, first-person, inner experience.
This is precisely the point!
Explaining the utterances does not explain the actual mechanism that I’m talking about when I insist that I have that experience.
Why not? Why, once we’ve explained why you sincerely insist you have that experience, do you assume there’s more to explain?
For certain senses of the word “why” in that sentence, which do not “explain away” the experience, there might not be more to explain.
From reading Dennett, I have not yet got the sense that he, at least, ever means to answer “why” non-trivially. Trivially, I already know why I insist—it’s because I have subjective experience. I can sit here in silence all day and experience all kinds of non-verbal assurances that this is so—textures, tastes, colors, shapes, spatial relationships, sounds, etc.
Whatever systems in my brain register these, and register the registering, interact with the systems that produce beliefs, speech, and so forth. What I’m looking for, and what I suspect a lot of people who posit a “hard problem” are really looking for, is more detail on how the registration works.
Dennett’s “multiple drafts” model might be a good start, for all I know, but it leaves me wanting more. Not wanting a so-called Cartesian Theater—just wanting more explanation of the sort that might be very vaguely analogous to how an electromagnetic speaker produces sound waves. Frankly, I find it very difficult even to think of a proper analogy. At any rate, I’m happy to wait until someone figures it out, but in the meantime I object to philosophies that imply there is nothing left to figure out.
Which, to his credit, Dennett does not imply (at least, not in Consciousness Explained).
Heterophenomenology does tackle that question, just at one remove—it attempts to account for your reports of those inner sensations.
It does so in terms making no reference to those inner sensations. Heterophenomenology is a lot more than the idea that first-person reports of inner experience are something to be explained, rather than taken as direct reports of the truth. It—Dennett—requires that such reports be explained without reference to inner experience. Heterophenomenology is the view that we are all p-zombies.
It avoids the argument that a distinction between conscious beings and p-zombies makes no sense, by denying that there are conscious beings. There is no inner experience to be explained. Zombie World is this world. Consciousness is not extra-physical, but non-existent. It is consciousness that is absurd, not p-zombies.
You do not exist. I do not exist. There are no persons, no selves, no experiences. There are reports of these things, but nothing that they are reports about. In such reports nothing is true, all is a lie.
Physics revealed the universe to be meaningless. Biology and palaeontology revealed our creation to be meaningless. Now, neuroscience reveals that we are meaningless.
Such, at any rate, is my understanding of Dennett’s book.
Heterophenomenology is a lot more than the idea that first-person reports of inner experience are something to be explained, rather than taken as direct reports of the truth. It—Dennett—requires that such reports be explained without reference to inner experience.
This is the exact opposite of my understanding, which is that heterophenomenology itself sets out only what it is that is to be accounted for and is entirely neutral on what the account might be.
It—Dennett—requires that such reports be explained without reference to inner experience.
Sure.
Heterophenomenology is the view that we are all p-zombies.
Doesn’t follow. Heterophenomenology can be seen as simply a first, more tractable step on the way to solving the hard problem. Perhaps others would agree with your statement, but I don’t believe Dennett would.
A flawed understanding, then. Dennett certainly does not deny the existence of selves, or of persons. What he does assert is that “self” is something of a different category from the primary elements of our current physics’ ontology (particles, etc.). His analogy is to a “center of gravity”—a notional object, but “real” in the sense that what you take it to be makes a difference in what you predict will happen.
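For what it’s worth, here is a quick numeric sketch, with invented values, of why such a notional object still earns its keep in prediction:

```python
# A numeric sketch (values invented) of the center-of-gravity analogy:
# a notional point, not a particle, yet what you take it to be changes
# what you predict.

masses    = [2.0, 1.0]   # kg, two point masses on a rigid beam
positions = [-1.0, 2.5]  # metres, measured from the pivot at x = 0

com = sum(m * x for m, x in zip(masses, positions)) / sum(masses)
print(f"center of mass at x = {com:+.2f} m")  # -> +0.17 m

# The prediction the notional object buys you:
if com > 0:
    print("the beam tips to the right of the pivot")
elif com < 0:
    print("the beam tips to the left of the pivot")
else:
    print("the beam balances")
```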
The trouble comes in when we start putting a utility on pleasure and pain. For example, let’s say you were given a programmatic description of a less-than-100%-faithful simulation of a human, and were asked to assess, without running it, whether it would have (or would report having) pain.
Your answer would determine whether it was used in a world simulation.
Proposing a change in physics to make your utility function more intuitive seems like a serious mis-step to me.
I’m just identifying the problem. I have no preferred solution at this point.
ETA: Altering physics is one possible solution. I’d wait on proposing a change to physics until we have a more concrete theory of intelligence and of how human-type systems are built. I think we can still push computers to be more like human-style systems. So I’m reserving judgement until we have those types of systems.
Whatever we say is explicable in terms of brain physics. It is enough to postulate a p-zombie-like world to explain what we say. If we didn’t experience consciousness directly, the very idea (edit: that is, of p-zombies) would never have occurred to us.
Therefore I don’t see why anyone would want or need to discuss consciousness in terms of outside observations.
If we didn’t experience consciousness directly, the very idea would never have occurred to us.
The fact that the idea occurred to us is observable from the outside—that’s pretty much the central insight behind heterophenomenology. An external observer could see for example this entire thread of discussion, and conclude that we’ve come up with an idea we call “consciousness” and some of us discuss it lots. And that’s definitely an observation that any worthwhile theory has to account for, it’s completely copper-bottomed objective truth.
If you haven’t already, have a look at the sequence on zombies, especially the first couple of articles.
You may have misinterpreted my comment. I meant that if we didn’t experience consciousness directly, the idea of p-zombies would not have occurred to us.
I did misinterpret it, but it doesn’t matter because the response is almost exactly the same. The fact that the idea of p-zombies occurred to us is also observable from the outside, and is therefore copper-bottomed evidence that heterophenomenology takes into account.
If you can state the problem based on what we observe from the outside, it moves us from a rather slippery world of questions that are hard to pin down to a very straightforward, hard-edged question: “does your theory predict what we observe?”. And I’m not asking whether you think this is a necessary move—I’m asking whether it’s sufficient—whether you’re still able to state the problem relying only on what you can observe from the outside.
The fundamental premise of consciousness (in its usual definitions) is, indeed, something that by definition cannot be observed from the outside. Yes, this involves a lot of problems like p-zombies. But once you prove (or assume) that you can handle the problem purely from the outside, then you’ve effectively solved (or destroyed) the problem, congratulations.
Hard Problem Of Consciousness tl;dr: “It feels like there’s something on the inside that cannot be observed from the outside...”
Not only cannot be observed from the outside, but has no observable consequences whatsoever? This whole thread isn’t a consequence of consciousness? Could you confirm for me that you mean to bite that bullet?
It’s starting to look like I or someone should do a top-level article explicitly on this subject, but in the meantime, you might be interested in Dennett’s Who’s On First?.
EDIT: probably too late, but requesting downvote explanation—thanks!
Just to make clear, this isn’t my view. I’m explaining the views of other people who think “consciousness” is an extra-physical phenomenon. I started by pointing out the necessary consequences of that position, but it’s not my own position. (I said this in other threads on the subject but not here, I see now.)
And yes, people who postulate extra-physical consciousness, AFAICS, have to bite this bullet. If consciousness is at all extra-physical, then it is completely extra-physical, and is not the cause of any physical event.
On the other hand, if consciousness were the name of some ordinary physical pattern, then it wouldn’t help explain the subjective experience that forms the “Hard Problem of Consciousness.”