I didn’t take Searle’s arguments seriously until I actually understood what they were about.
Before anything, I should say that I disagree with Searle’s arguments. However, it is important to understand them if we are to have a rational discussion.
Most importantly, Searle does not claim that machines can never understand, or that there is something inherently special about the human brain that cannot be replicated in a computer. He acknowledges that the human brain is governed by physics and is probably subject to the Church-Turing thesis.
Searle’s main argument is this: sophistication of computation does not by itself lead to understanding. That is, just because a computer is doing something that a human could not do without understanding does not mean the computer must be understanding it as well. It is very hard to argue against this, which is why the Chinese Room argument has stuck around for so long.
Searle is of the opinion that if we can find the ‘mechanism’ of understanding in the brain and replicate it in a computer, then the computer can understand as well.
To get down to the nuts and bolts of the argument, he maintains that a precise molecular-level simulation of a human brain would be able to understand, but a computer that just happened to act intelligent might not be able to.
In my opinion, this argument just hides yet another form of vitalism: the idea that there is something above and beyond the mechanical. However, everywhere in the brain we’ve looked, we’ve found just neurons doing simple computations on their inputs. I believe that that is all there is to it—that something with the capabilities of the human brain also has the ability to understand.
However, this is just a belief at this point. There is no way to prove it. There probably will be no way until we can figure out what consciousness is.
So there you have it. The Chinese Room argument is really just another form of the Hard Problem of consciousness. Nothing new to see here.
It is good to taboo words, but it is also good to criticize the attempts of others to taboo words, if you can make the case that those attempts fail to capture something important.
For example, it seems possible that a computer could predict your actions to high precision, but by running computations so different from the ones that you would have run yourself that the simulated-you doesn’t have subjective experiences. (If I understand it correctly, this is the idea behind Eliezer’s search for a non-person predicate. It would be good if this were possible, because then a superintelligence could run alternate histories without torturing millions of sentient simulated beings.) If such a thing is possible, then any superficial behavioristic attempt to taboo “subjective experience” will be missing something important.
Furthermore, I can mount this critique of such an attempt without being obliged to taboo “subjective experience” myself. That is, making the critique is valuable even if it doesn’t offer an alternative way to taboo “subjective experience”.
It’s not clear to me that “understanding” means “subjective experience,” which is one of several reasons why I think it’s reasonable for me to ask that we taboo “understanding.”
The only good taboo of understanding I’ve ever read came from an LW quotes thread, quoting Feynman, quoting Dirac:
I understand what an equation means if I have a way of figuring out the characteristics of its solution without actually solving it.
By this criterion, the Chinese Room might not actually understand Chinese, whereas a human Chinese speaker does—i.e., if you hear all but the last word of a sentence, can you give a much tighter probability distribution over the end of the sentence than maxent over common vocabulary that fits grammatically?
I would say I understand a system to the extent that I’m capable of predicting its behavior given novel inputs, which seems to be getting at something similar to Dirac’s version.
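To make that concrete, here is a minimal sketch of the tighter-than-maxent test, with made-up candidate words and probabilities purely for illustration: call it “understanding” in this narrow sense if the predictor’s distribution over the final word has lower entropy than a uniform (maximum-entropy) distribution over the grammatically plausible candidates.

    import math

    def entropy(dist):
        # Shannon entropy (in bits) of a {word: probability} distribution.
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    def beats_maxent(dist, candidates):
        # Is the distribution sharper than the uniform (maximum-entropy)
        # baseline over the grammatically plausible candidates?
        uniform = {w: 1.0 / len(candidates) for w in candidates}
        return entropy(dist) < entropy(uniform)

    # Hypothetical sentence: "I drank a cup of ___"
    candidates = ["tea", "coffee", "water", "soup"]
    speaker_guess = {"tea": 0.45, "coffee": 0.40, "water": 0.13, "soup": 0.02}
    flat_guess = {w: 1.0 / len(candidates) for w in candidates}

    print(beats_maxent(speaker_guess, candidates))  # True: sharper than maxent
    print(beats_maxent(flat_guess, candidates))     # False: no better than maxent

The first distribution passes the test and the flat one does not, which is the distinction the criterion is trying to capture.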
if you hear all but the last word of a sentence, can you give a much tighter probability distribution over the end of the sentence than maxent over common vocabulary that fits grammatically?
IIRC, the CR as Searle describes it would include rules for responding to the question “What are likely last words that end this sentence?” in the same way a Chinese speaker would. So presumably it is capable of doing that, if asked.
And, definitionally, of doing so without understanding.
To my way of thinking, that makes the CR a logical impossibility, and reasoning forward from an assumption of its existence can lead to nonsensical conclusions.
Good point—I was thinking of “figuring out the characteristics” fuzzily; but if defined as giving correctly predictive output in response to a given interrogative, the room either does it correctly, or isn’t a fully-functioning Chinese Room.
The Chinese Room argument is really just another form of the Hard Problem of consciousness.
This is correct and deserves elaboration.
Searle makes clear his agreement with Brentano that intentionality is the hallmark of consciousness. “Intentionality” here means about-ness, i.e. a semantic relation whereby a word (for example) is about an object. For Searle, all consciousness involves intentionality, and all intentionality either directly involves consciousness or derives ultimately from consciousness. But suppose we also smuggle in the assumption—and for English speakers, this will come naturally—that subjective experience is necessarily entwined with “consciousness”. In that case we commit to a view we could summarize as “intentionality if and only if subjective experience.”
Now let me admit, Searle never explicitly endorses such a statement, as far as I know. I think it has nothing to recommend it, either. But I do think he believes it, because that would explain so much of what he does explicitly say.
Why do I reject “intentionality if and only if subjective experience”? For one thing, there are simple states of consciousness—moods, for example—that have no intentionality, so subjectivity fails to imply intentionality. Nor can I see any reason that the implication holds in the direction from intentionality to subjectivity.
Searle’s arguments fail to show that AIs in the “computationalist” conception can’t think about, and talk about, stuff. But then, that just shows that he picked the wrong target. Intentionality is easy. The real question is qualia.
Why do I reject “intentionality if and only if subjective experience”? For one thing, there are simple states of consciousness—moods, for example—that have no intentionality, so subjectivity fails to imply intentionality. Nor can I see any reason that the implication holds in the direction from intentionality to subjectivity.
I think this is a bit confused. It isn’t that simple states of consciousness, qualia, etc. imply intentionality, rather that they are prerequisites for intentionality. X if and only if Y just means there can be no X without Y. I’m not familiar enough with Searle to comment on his endorsement of the idea, but it makes sense to me at least that in order to have intention (in the sense of will) an agent would have first to be able to perceive (subjectively, of course) the surroundings/other agents on which it intends to act. You say intentionality is “easy”. Okay. But what does it mean to talk of intentionality, without a subject to have the intention?
“Intentionality” is an unfortunate word choice here, because it’s not primarily about intention in the sense of will. Blame Brentano, and Searle for following him, for that word choice. Intentionality means aboutness, i.e. a semantic relation between word and object, belief and fact, or desire and outcome. The last example shows that intention in the sense of will is included within “intentionality” as Searle uses it, but it’s not the only example. Your argument is still plausible and relevant, and I’ll try to reply in a moment.
As you suggest, I didn’t even bother trying to argue against the contention that qualia are prerequisite for intentionality. Not because I don’t think an argument can be made, but mainly because the Less Wrong community doesn’t seem to need any convincing, or didn’t until you came along. My argument basically amounts to pointing to plausible theories of what the semantic relationship is, such as teleosemantics or asymmetric dependence, and noting that qualia are not mentioned or implied in those theories.
Now to answer your argument. I do think it’s conceivable for an agent to have intentions to act, and have perceptions of facts, without having qualia as we know them. Call this agent Robbie Robot. Robbie is still a subject, in the sense that, e.g. “Robbie knows that the blue box fits inside the red one” is true, and expresses a semantic relation, and Robbie is the subject of that sentence. But Robbie doesn’t have a subjective experience of red or blue; it only has an objective perception of red or blue. Unlike humans, Robbie has no cognitive access to an intermediate state between the actual external world of boxes, and the ultimate cognitive achievement of knowing that this box is red. Robbie is not subject to tricks of lighting. Robbie cannot be drugged in a way that makes it see colors differently. When it comes to box colors, Robbie is infallible, and therefore there is no such thing as “appears to be red” or “seems blue” to Robbie. There is no veil of perception. There is only reality. Perfect engineering has eliminated subjectivity.
This little story seems wildly improbable, but it’s not self-contradictory. I think it shows that knowledge and (repeat the story with suitable substitutions) intentional action need not imply subjectivity.
Searle’s main argument is this: sophistication of computation does not by itself lead to understanding. That is, just because a computer is doing something that a human could not do without understanding does not mean the computer must be understanding it as well. It is very hard to argue against this, which is why the Chinese Room argument has stuck around for so long.
Searle is of the opinion that if we can find the ‘mechanism’ of understanding in the brain and replicate it in a computer, then the computer can understand as well.
To get down to the nuts and bolts of the argument, he maintains that a precise molecular-level simulation of a human brain would be able to understand, but a computer that just happened to act intelligent might not be able to.
If that’s what the Chinese Room argument says, then:
1) Either my reading comprehension is awful or Searle is awful at making himself understood.
2) Searle is so obviously right that I wonder why he bothered to create his argument.
Perhaps a little bit of that, and a little bit of the hordes of misguided people misunderstanding his arguments and then spreading their own misinformation around. Not to mention the opportunists who seize on the argument as a way to defend their own pseudoscientific beliefs. That was, in part, why I didn’t take his argument seriously at first: I had received it through second-hand sources.
(In my experience, what happens in practice is that his perspective is unconsciously conflated with mysterianism (maybe through slippery-slope reasoning), which prompts rationalized flag-waving dressed up as arguments that dog-whistle either ‘we must heap lots of positive affect on Science, it works really well’ or ‘science doesn’t have all the answers, we have to make room for [vague intuition about institutions that respect human dignity, or something]’, depending.)
One thing to keep in mind is that there is no obvious evolutionary advantage to having some form of “understanding” over and above functional capabilities. Why would we have been selected for “understanding” or “aboutness” if these were mechanisms separate from just performing the task at hand?
Without such an evolutionary selection pressure, how did our capable brains also come to be able to “understand” and “be about something” (if these were not necessary by-products)? Why didn’t we just become Chinese Rooms? To me the most parsimonious explanation is that these capabilities go hand in hand with our functional capacity.
I hope my above point was cogently formulated; I’m being forced to watch Chip and Dale right next to this window…
Taboo “understanding.”
It’s not clear to me that “understanding” means “subjective experience,” which is one of several reasons why I think it’s reasonable for me to ask that we taboo “understanding.”
I didn’t mean to suggest that “understanding” means “subjective experience”, or to suggest that anyone else was suggesting that.
The thing is that the Chinese Room does not represent a system that could never understand. It fails at its task in the thought experiment.
Thanks for this!
I think linking this concept in my mind to the concept of the Chinese Room might be helpful. Thanks!
Depends on your definition of consciousness.