I’m splitting up my response to this into several pieces because it got long. Some other stuff:
what, in light of a Dennett-type approach, we can identify as conscious or not.
The process isn’t anything special, but OK, since you ask.
Let’s assert for simplicity that “I” has a relatively straightforward and consistent referent, just to get us off the ground. Given that, I conclude that I am at least sometimes capable of subjective experience, because I’ve observed myself subjectively experiencing.
I further observe that my subjective experiences reliably and differentially predict certain behaviors. I do certain things when I experience pain, for example, and different things when I experience pleasure. When I observe other entities (E2) performing those behaviors, that’s evidence that they, too, experience pain and pleasure. Similar reasoning applies to other kinds of subjective experience.
I look for commonalities among E2 and I generalize across those commonalities. I notice certain biological structures are common to E2 and that when I manipulate those structures, I reliably and differentially get changes in the above-referenced behavior. Later, I observe additional entities (E3) that have similar structures; that’s evidence that E3 also has subjective experience, even though E3 doesn’t behave the way I do.
Later, I build an artificial structure (E4) and I observe that there are certain properties (P1) of E2 which, when I reproduce them in E4 without reproducing other properties (P2), reproduce the behavior of E2. I conclude that P1 is an important part of that behavior, and P2 is not.
I continue this process of observation and inference and continue to draw conclusions based on it. And at some point someone asks “is X conscious?” for various Xes:
E.g. plants, animals, simple computers, top-of-the-line computers, theoretical supercomputers of various kinds, theoretical complex networks distributed across large areas so that each signal from one part to another takes weeks...
If I interpret “conscious” as meaning having subjective experience, then for each X I observe it carefully, look for the kinds of attributes I’ve associated with subjective experience (behaviors, anatomical structures, formal structures, etc.), and compare it to my accumulated knowledge to make a decision.
Isn’t that how you answer such questions as well?
If not, then I’ll ask you the same question: what, in light of whatever non-Dennett-type approach you prefer, can we identify as conscious or not?
OK, well, given that responses to pain/pleasure can equally be explained by more direct evolutionary reasons, I’m not sure that the inference from action to experience is very useful. Why would you ever connect these things with experience rather than with other, more directly measurable things?
But the point is definitely not that I have a magic bullet or easy solution: it’s that I think there’s a real and urgent question—are they conscious—which I don’t see how information about responses etc. can answer. Compare to the cases of containment, or heat, or life—all the urgent questions are already resolved before those issues are even raised.
As I say, the best way I know of to answer “is it conscious?” about X is to compare X to other systems whose consciousness I have some level of confidence about, and look for commonalities and distinctions.
If there are alternative approaches that you think give us more reliable answers, I’d love to hear about them.
I have no reliable answers! And my meta-confidence is low (it seems clear to me that people and most other creatures are conscious, but I have no confidence in why I think this).
If the Dennett position still sees this as a complete bafflement but expects it to be resolved along with the so-called ‘soft’ problem, I have less of an issue than I thought I did. Though I’d still regard the view that the issue will become clear as one of hope rather than evidence.