I have a better argument now, and my answer is that Searle's argument fails at its conclusion.
The issue is this: if we assume that a computer program (speaking very generally here) can give a correct response to every possible input of Chinese characters, and that it knows the rules of Chinese completely, then it must know/understand Chinese in order to do the things Searle claims it does. In that case we would say it does understand Chinese, and decides Chinese, for all purposes.
Basically, I'm claiming that the premises lead to the opposite conclusion.
These premises:
“Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output).
together with the assumption that every possible input has in fact been presented, contradict this conclusion:
The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.”
The correct conclusion, given all the assumptions, is that the person-plus-program system does understand/decide Chinese completely.
The one-sentence slogan is "Look-up table programs are a valid form of intelligence/understanding, albeit the most inefficient one."
What this does say is that, with no restrictions on how the program computes Chinese (or any other problem) beyond the requirement that it give a correct answer to every input, the answer to "Is it intelligent on this specific problem? Does it understand this specific problem?" is always yes. To leave open the possibility of a no, you need to add further restrictions beyond correctness alone.
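To make the slogan concrete, here is a minimal sketch (in Python) of what a look-up-table program looks like. The table entries below are invented placeholders, not real data; in the Chinese Room premise, the table would cover every possible Chinese input, which is what makes the approach maximally inefficient:

```python
# A look-up-table "program": it computes nothing. It simply stores a
# correct answer for every input it will ever receive. Under the premise
# that every input is covered, its outward behavior is indistinguishable
# from a system that genuinely understands the problem.

# Hypothetical toy table standing in for "a correct answer to every
# Chinese input" from the thought experiment.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天星期几？": "今天是星期一。",
}

def respond(question: str) -> str:
    """Return the stored correct answer; by assumption, every input is covered."""
    return RULE_BOOK[question]

print(respond("你好吗？"))
```

The point of the sketch is that nothing in the premises constrains *how* the correct answers are produced, so even this degenerate program satisfies them in full.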