You are getting the statement of the Chinese room wrong. The claim isn’t that the human inside the room will learn Chinese. Indeed, it’s a key feature of the argument that the person *doesn’t* ever count as knowing Chinese. It is only the system consisting of the person plus all the rules written down in the room, etc., which knows Chinese. This is what’s supposed to be (though, IMO, not convincingly) an unpalatable conclusion.
Secondly, no one is suggesting that there isn’t an algorithm that can be followed which makes it appear as if the room understands Chinese. The question is whether or not there is some conscious entity, corresponding to the system of the guy plus all the rules, which has the qualitative experience of understanding the Chinese words submitted, etc. As such, the points you raise don’t really address the main issue.
TruePath, you are mistaken: my argument addresses the main issue of explaining computer understanding (moreover, it seems that you are conflating the Chinese room argument with the “system reply” to it).

Let me clarify. I could write the Chinese room argument as the following deductive argument:

1) P is a computer program that does [x]
2) There is no computer program sufficient for explaining human understanding of [x]
⇒ 3) Computer program P does not understand [x]

In my view, assumption (2) is not demonstrated, and the argument should be reformulated as:

1) P is a computer program that does [x]
2’) Computer program P is not sufficient for explaining human understanding of [x]
⇒ 3) Computer program P does not understand [x]

The argument still holds against any computer program satisfying assumption (2’). Does a program exist, however, that can explain human understanding of [x] (a program such that a human executing it understands [x])?

My reply focuses on this question. I suggest considering artificial semiosis. For example, a program P learns, solely from the symbolic experience of observing symbols in a sequence, that it should output “I say” (I have described what such a program would look like in my post; a rough sketch of this kind of learner is given below). Another program Q could learn, solely from symbolic experience, how to speak Chinese. Humans do not normally learn a rule for using “I say”, or how to speak Chinese, in these ways, because their experience is much richer. However, we can reason about the understanding a human would have if he had only symbolic experience and the right program instructions to follow. The semiosis performed by the human would not differ from the semiosis performed by the computer program. It can be said that program P understands a rule for using “I say”. It can be said that program Q understands Chinese.

You can consider [x] to be a capability enabled by sensorimotor experience. You can consider [x] to be consciousness. My “semiosis reply” can of course be adapted to these situations too.
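To make this a bit more concrete, here is a rough, hypothetical sketch of the kind of symbols-only learner I mean. The class name, the “trigger symbol” rule, and the toy data are invented purely for illustration; this is not the program described in my post, only an indication of the general shape such a program could take:

```python
# Hypothetical sketch only: a toy learner that acquires a usage rule purely from
# a stream of symbols, with no sensory grounding. The rule it ends up with
# ("after the trigger symbol, emit 'I say'") is an invented example.
from collections import Counter

class ToySymbolicLearner:
    def __init__(self, trigger='"'):
        self.trigger = trigger
        self.following = Counter()  # counts of what has been seen to follow the trigger

    def observe(self, sequence):
        # Record which symbol follows the trigger symbol in the observed stream.
        for prev, nxt in zip(sequence, sequence[1:]):
            if prev == self.trigger:
                self.following[nxt] += 1

    def respond(self, symbol):
        # Apply the learned rule: after the trigger, emit the most frequent follower.
        if symbol == self.trigger and self.following:
            return self.following.most_common(1)[0][0]
        return None

learner = ToySymbolicLearner()
learner.observe(['"', 'I say', ',', '"', 'I say', '.'])
print(learner.respond('"'))  # prints: I say
```

The point is not the triviality of the toy rule but the setup: everything the learner ever “experiences” is a symbol sequence, which is exactly the situation of the human following instructions in the room.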
Let me clarify. I could write the Chinese room argument as the following deductive argument:
1) P is a computer program that does [x]
2) There is no computer program sufficient for explaining human understanding of [x]
⇒ 3) Computer program P does not understand [x]
This is not at all correct as a summary of Searle’s argument.
A more proper summary would read as follows:
1. P is an instantiated algorithm that behaves as if it [x]. (Where [x] = “understands and speaks Chinese”.)
2. If we examine P, we can easily see that its inner workings cannot possibly explain how it could [x].
3. Therefore, the fact that humans can [x] cannot be explainable by any algorithm.
That the Room does not understand Chinese is not a conclusion of the argument. It’s taken as a premise; and the reader is induced to accede to taking it as a premise, on the basis of the “intuition pump” of the Room’s description (with the papers and so on).
Now, you seem to disagree with this premise (#2). Fair enough; so do I. But then there’s nothing more to discuss. Searle’s argument collapses, and we’re done here.
The rest of your argument seems aimed at shoring up the opposing intuition (unnecessary, but let’s go with it). However, it would not impress John Searle. He might say: very well, you propose to construct a computer program in a certain way, you propose to expose it to certain stimuli, yes, very good. Having done this, the resulting program would appear to understand Chinese. Would it still be some deterministic algorithm? Yes, of course; all computer programs are. Could you instantiate it in a Room-like structure, just like in the original thought experiment? Naturally. And so it would succumb to the same argument as the original Room.
1. P is an instantiated algorithm that behaves as if it [x]. (Where [x] = “understands and speaks Chinese”.)
2. If we examine P, we can easily see that its inner workings cannot possibly explain how it could [x].
3. Therefore, the fact that humans can [x] cannot be explainable by any algorithm.
I have some problems with your formulation. The fact that P does not understand [x] is nowhere in your formulation, not even in premise #1. Conclusion #3 is wrong and should be written as “the fact that humans can [x] cannot be explainable by P”. This conclusion does not need the premise that “P does not understand [x]”, but only premise #2. In fact, at least two conclusions can be derived from premise #2, including the conclusion that “P does not understand [x]”.
I state that, using a premise #2 which does not talk about any specific program, both of Searle’s conclusions hold true, but they do not apply to an algorithm which performs (simulates) semiosis.
The fact that P does not understand [x] is nowhere in your formulation, not in premise #1.
Yes it is. Reread more closely, please.
Conclusion #3 is wrong and should be written as “the fact that humans can [x] cannot be explainable by P”.
That is not Searle’s argument.
I don’t think anything more may productively be said in this conversation as long as (as seems to be the case) you don’t understand what Searle was arguing.
If you want to argue against that piece of reasoning, give it a different name, because it’s not the Chinese room argument. I took multiple graduate classes with Professor Searle and, while there are a number of details one could quibble over, Said definitely gets the overall outline correct, and the argument you advanced is not his Chinese room argument.
That doesn’t mean we can’t talk about your argument; just don’t insist that it is Searle’s Chinese room argument.
In his paper, Searle puts forward a number of arguments.
Early in his argumentation, referring to the Chinese room, Searle makes this argument (which I ask you not to carelessly mix with his later arguments):
it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank’s computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I […]
Later, he writes:
the whole point of the original example was to argue that such symbol manipulation by itself couldn’t be sufficient for understanding Chinese.
Let me frame this argument in a way that allows it to be analyzed:
1) P (the Chinese room) is X (a program capable of passing the Turing test in Chinese);
2) Searle can be any X and not understand Chinese (as exemplified by Searle being the Chinese room and not understanding Chinese, which can be demonstrated for certain programs);
thus 3) no X understands Chinese.
Searle is arguing that “no program understands Chinese” (I stress this in order to reply to Said). The argument “P is X, P is not B, thus no X is B” is an invalid syllogism. Nevertheless, Searle believes that, in this case, “P not being B” implies (or at least strongly points towards) “no X being B”.
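To spell out why that form is invalid, here is a standard counterexample (the bird example is mine, added purely for illustration):

```latex
% Invalid inference form: from "P is X" and "P is not B", conclude "no X is B".
% Counterexample: let X(y) = "y is a bird", B(y) = "y can fly", P = a penguin.
\[
\underbrace{X(P)}_{\text{a penguin is a bird}} ,\quad
\underbrace{\neg B(P)}_{\text{a penguin cannot fly}}
\;\not\Rightarrow\;
\underbrace{\forall y\,\bigl(X(y)\to\neg B(y)\bigr)}_{\text{no bird can fly}}
\]
```

Both premises are true while the conclusion is false, which is why Searle needs the extra intuition that “P not being B” points towards “no X being B”.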
Yes, Searle’s intuition is known to be problematic and can be argued against accordingly.
My point, however, is that out there in the space of X there is a program P that is quite counterintuitive. I am suggesting a positive example of “P possibly understanding Chinese” which could cut the debate short. Don’t you see that giving a positive answer to the question “can a program understand?” may bring some insight into Searle’s argument too (such as developing it into a “Chinese room test” to assess whether a given program can indeed understand)? Don’t you want to look into my suggested program P (semiotic AI)?
At the beginning of my post I made it very clear:
Humans learn Chinese all the time; yet it is uncommon for them to learn Chinese by running a program.
Searle can be any X?? WTF? That’s a bit confusingly written.
The intuition Searle is pumping is that since he, as a component of the total system, doesn’t understand Chinese, it seems counterintuitive to conclude that the whole system understands Chinese. When Searle says he is the system, he is pointing to the fact that he is doing all the actual interpretation of instructions, and it seems weird to think that the whole system has some extra experiences that let it understand Chinese even though he does not. When Searle uses the word “understand” he does not mean demonstrating the appropriate input/output behavior; he is presuming the system has that behavior and asking about its experiences.
Searle’s view, from his philosophy of language, is that our understanding and meaning is grounded in our experiences, and what makes a person count as understanding Chinese (as opposed to merely dumbly parroting it) is that they have certain kinds of experiences while manipulating the words. When Searle asserts the room doesn’t understand Chinese, he is asserting that it doesn’t have the requisite experiences (because it’s not having any experiences at all) that someone would need to have to count as understanding Chinese.
Look, I’ve listened to Searle explain this himself multiple times during the 2 years of graduate seminars on philosophy of mind I took with him and have discussed this very argument with him at some length. I’m sorry but you are interpreting him incorrectly.
I know I’m not making the confusion you suggest because I’ve personally talked with him at some length about his argument.