The Chinese Room argument is really just another form of the Hard Problem of consciousness.
This is correct and deserves elaboration.
Searle makes clear his agreement with Brentano that intentionality is the hallmark of consciousness. “Intentionality” here means about-ness, i.e. a semantic relation whereby a word (for example) is about an object. For Searle, all consciousness involves intentionality, and all intentionality either directly involves consciousness or derives ultimately from consciousness. But suppose we also smuggle in the assumption—and for English speakers, this will come naturally—that subjective experience is necessarily entwined with “consciousness”. In that case we commit to a view we could summarize as “intentionality if and only if subjective experience.”
Now let me admit, Searle never explicitly endorses such a statement, as far as I know. I think it has nothing to recommend it, either. But I do think he believes it, because that would explain so much of what he does explicitly say.
Why do I reject “intentionality if and only if subjective experience”? For one thing, there are simple states of consciousness—moods, for example—that have no intentionality, so subjectivity fails to imply intentionality. Nor can I see any reason that the implication holds in the direction from intentionality to subjectivity.
Searle’s arguments fail to show that AIs in the “computationalist” conception can’t think about, and talk about, stuff. But then, that just shows that he picked the wrong target. Intentionality is easy. The real question is qualia.
Why do I reject “intentionality if and only if subjective experience”? For one thing, there are simple states of consciousness—moods, for example—that have no intentionality, so subjectivity fails to imply intentionality. Nor can I see any reason that the implication holds in the direction from intentionality to subjectivity.
I think this is a bit confused. It isn’t that simple states of consciousness, qualia, etc. imply intentionality; rather, they are prerequisites for intentionality. “X if and only if Y” means, in particular, that there can be no X without Y. I’m not familiar enough with Searle to comment on his endorsement of the idea, but it makes sense to me at least that in order to have intention (in the sense of will) an agent would first have to be able to perceive (subjectively, of course) the surroundings/other agents on which it intends to act. You say intentionality is “easy”. Okay. But what does it mean to talk of intentionality, without a subject to have the intention?
“Intentionality” is an unfortunate word choice here, because it’s not primarily about intention in the sense of will. Blame Brentano, and Searle for following him, for that word choice. Intentionality means aboutness, i.e. a semantic relation between word and object, belief and fact, or desire and outcome. The last example shows that intention in the sense of will is included within “intentionality” as Searle uses it, but it’s not the only example. Your argument is still plausible and relevant, and I’ll try to reply in a moment.
As you suggest, I didn’t even bother trying to argue against the contention that qualia are prerequisite for intentionality. Not because I don’t think an argument can be made, but mainly because the Less Wrong community doesn’t seem to need any convincing, or didn’t until you came along. My argument basically amounts to pointing to plausible theories of what the semantic relationship is, such as teleosemantics or asymmetric dependence, and noting that qualia are not mentioned or implied in those theories.
Now to answer your argument. I do think it’s conceivable for an agent to have intentions to act, and perceptions of facts, without having qualia as we know them. Call this agent Robbie Robot. Robbie is still a subject, in the sense that, e.g., “Robbie knows that the blue box fits inside the red one” is true, expresses a semantic relation, and has Robbie as its subject. But Robbie doesn’t have a subjective experience of red or blue; it only has an objective perception of red or blue. Unlike humans, Robbie has no cognitive access to any intermediate state between the actual external world of boxes and the ultimate cognitive achievement of knowing that this box is red. Robbie is not subject to tricks of lighting. Robbie cannot be drugged in a way that makes it see colors differently. When it comes to box colors, Robbie is infallible, and therefore there is no such thing as “appears to be red” or “seems blue” to Robbie. There is no veil of perception. There is only reality. Perfect engineering has eliminated subjectivity.
This little story seems wildly improbable, but it’s not self-contradictory. I think it shows that knowledge and (repeat the story with suitable substitutions) intentional action need not imply subjectivity.
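If it helps, here is a toy sketch of the structure of that story, purely illustrative and entirely my own invention (the World class and the perceive/knows names are made up, and nothing here is a claim about how real robots or real perception work). The point it encodes is that Robbie’s knowledge is produced directly from the world, with no intermediate “appearance” stage at which “seems red” could come apart from “is red”.

```python
from dataclasses import dataclass, field

@dataclass
class World:
    # The actual external world: each box mapped to its real color.
    box_colors: dict[str, str]

@dataclass
class Robbie:
    # Robbie's knowledge is just a set of (box, color) facts, nothing more.
    knowledge: set[tuple[str, str]] = field(default_factory=set)

    def perceive(self, world: World) -> None:
        # By stipulation, perception copies reality straight into knowledge.
        # There is no intermediate "sensation" variable for lighting or drugs
        # to act on, so "appears red" and "is red" can never come apart.
        for box, color in world.box_colors.items():
            self.knowledge.add((box, color))

    def knows(self, box: str, color: str) -> bool:
        return (box, color) in self.knowledge

world = World(box_colors={"red box": "red", "blue box": "blue"})
robbie = Robbie()
robbie.perceive(world)
assert robbie.knows("red box", "red")  # a semantic relation with Robbie as its subject
```

A human-like agent, on this picture, would have an extra “sensation” stage between world and knowledge, and it is only there that “seems blue” can diverge from “is blue”; perfect engineering, in the story, simply deletes that stage.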