Let me clarify. I could write the Chinese Room argument as the following deductive argument:
1) P is a computer program that does [x]
2) There is no computer program sufficient for explaining human understanding of [x]
⇒ 3) Computer program P does not understand [x]
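For concreteness, that deduction could be sketched in first-order notation roughly as follows (the predicate names Program, Does, Explains, Understands are purely illustrative, mine rather than Searle’s, and [x] is left schematic as in the prose version):

1) Program(P) ∧ Does(P, [x])
2) ¬∃Q [Program(Q) ∧ Explains(Q, humans understand [x])]
⇒ 3) ¬Understands(P, [x])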
This is not at all correct as a summary of Searle’s argument.
A more proper summary would read as follows:
1. P is an instantiated algorithm that behaves as if it [x]. (Where [x] = “understands and speaks Chinese”.)
2. If we examine P, we can easily see that its inner workings cannot possibly explain how it could [x].
3. Therefore, the fact that humans can [x] cannot be explainable by any algorithm.
That the Room does not understand Chinese is not a conclusion of the argument. It’s taken as a premise; and the reader is induced to accept it as a premise, on the basis of the “intuition pump” of the Room’s description (with the papers and so on).
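To make the structure explicit, the same summary can be sketched in similarly schematic notation (again, the predicate names are mine, chosen only for illustration); note that the Room’s failure to understand is built into premise 2 by inspection, not derived at the end:

1. Algorithm(P) ∧ BehavesAsIf(P, [x])
2. ¬Explains(P, [x])   (read off from inspecting P’s inner workings; and likewise, ¬Understands(P, [x]))
⇒ 3. ∀Q [Algorithm(Q) → ¬Explains(Q, humans can [x])]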
Now, you seem to disagree with this premise (#2). Fair enough; so do I. But then there’s nothing more to discuss. Searle’s argument collapses, and we’re done here.
The rest of your argument seems aimed at shoring up the opposing intuition (unnecessary, but let’s go with it). However, it would not impress John Searle. He might say: very well, you propose to construct a computer program in a certain way, you propose to expose it to certain stimuli, yes, very good. Having done this, the resulting program would appear to understand Chinese. Would it still be some deterministic algorithm? Yes, of course; all computer programs are. Could you instantiate it in a Room-like structure, just like in the original thought experiment? Naturally. And so it would succumb to the same argument as the original Room.
1. P is an instantiated algorithm that behaves as if it [x]. (Where [x] = “understands and speaks Chinese”.)
2. If we examine P, we can easily see that its inner workings cannot possibly explain how it could [x].
3. Therefore, the fact that humans can [x] cannot be explainable by any algorithm.
I have a problem with your formulation. The fact that P does not understand [x] appears nowhere in it, not even in premise #1. Conclusion #3 is wrong and should be written as “the fact that humans can [x] cannot be explainable by P”. This conclusion does not need the premise that “P does not understand [x]”; it needs only premise #2. In fact, at least two conclusions can be derived from premise #2, including the conclusion that “P does not understand [x]”.
I state that, using a premise #2 that does not talk about any program, both of Searle’s conclusions hold true, but they do not apply to an algorithm which performs (simulates) semiosis.
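In the same schematic notation as above, the distinction I am drawing is roughly between these statements, of which (on my reading) only the first two follow from premise #2 alone:

a) ¬Explains(P, humans can [x])
b) ¬Understands(P, [x])
c) ∀Q [Algorithm(Q) → ¬Explains(Q, humans can [x])]   (conclusion #3 as you wrote it)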
The fact that P does not understand [x] is nowhere in your formulation, not in premise #1.
Yes it is. Reread more closely, please.
Conclusion #3 is wrong and should be written as “the fact that humans can [x] cannot be explainable by P”.
That is not Searle’s argument.
I don’t think anything more may productively be said in this conversation as long as (as seems to be the case) you don’t understand what Searle was arguing.