My understanding of Dennett’s heterophenomenology has benefited from comparing it with the approach of Pickering, Latour, and the STS folks, which rests on reconciling two positions that initially seem at odds with each other:
- we commit to taking seriously the first-person accounts of, respectively, “what it is like to be a conscious person” and “what it is like to advance scientific knowledge”;
- we decline in both cases to take these accounts at face value; that is, we assert that our position as outside observers is no less privileged than our “inside” interlocutor’s, and we seek to explain why people say what they say about how they come to have certain forms of knowledge, without assuming their reports are infallible.
When investigating something like inattentional blindness, this goes roughly as follows: we give a subject brief instructions, then show them a short video of people passing basketballs. Afterwards we ask, “What did you consciously see for the past few minutes?” They are likely to say that they were consciously observing the whole scene during that time. But it turns out that we, the investigators, know something about the video which leads us to doubt the subject’s report about what they were conscious of. (I don’t want to spoil anything for those who haven’t seen the video yet, but I assume many people know what I’m talking about. If you don’t, go watch it.)
As far as I can tell, a large number of “problems of consciousness” fall into this category: people’s self-reports of what it is like to be a conscious person conflict with what various clever experiments indicate about what it is actually like to be a conscious person. They also conflict with intuitions derived from our physical theories.
For instance, we can poll people on whether an atom-for-atom copy of themselves would be “the same person”, and notice that most people say “no way, because there can only be one of me”. To explain consciousness is to explain why people feel that way, without assuming that what is to be explained is some mysterious property of “continuity” that consciousness has, which results in its being impossible to conceive of a copy of me being the “same consciousness” as me.
Our explanations of consciousness should predict what people will say about what it feels like to be a conscious person.
For me, the “hard, scary” problems would include things like whether something can be intelligent without being conscious, and vice versa. Before coming across some of the Friendly AI writings on this site, I assumed that any intelligence also had to be conscious. I also assumed that beings without language must have a much lower degree of consciousness.