Oh, huh. Searle’s original Chinese room paper (first eight pages) doesn’t say machines can’t think. From the paper:

If by “digital computer” we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.
“But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?”
This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.
“Why not?”
Because the formal symbol manipulations by themselves don’t have any intentionality; they are quite meaningless; they aren’t even symbol manipulations, since the symbols don’t symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.
I can’t say I really understand what he’s trying to say, but it’s different from what I thought it was.
I think he’s actually quite confused here—I imagine saying:
“Hang on—you say that (a) we can think, and (b) we are the instantiations of any number of computer programs. Wouldn’t instantiating one of those programs be a sufficient condition of understanding? Surely if two things are isomorphic even in their implementation, either both can think, or neither.”
(the Turing test suggests ‘indistinguishable in input/output behaviour’, which I think is much too weak)
IMO he’s trying to say that if you observe a machine from the outside (ie only what (sequences of) inputs lead to what (sequences of) outputs), then the mere observation that it behaves as if it understands the problem is not sufficient to conclude that it understands the problem. This is because understanding is some property of the internals. The presence of understanding is not deducible from the outside, even given infinitely many maximally diverse observations.
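To make that concrete, here’s a toy sketch (the task and the names are mine, purely illustrative): two Python functions that are observationally identical on every input in their domain, but whose internals have nothing in common.

```python
# Two procedures with identical input/output behaviour on every input,
# but very different internals: one computes, the other replays answers.
# Toy illustration; the task and the names are hypothetical.

def add_by_algorithm(a: int, b: int) -> int:
    """Actually performs addition."""
    return a + b

# Exhaustive answer table for a small domain, built once up front;
# at query time no arithmetic happens at all.
ADD_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def add_by_table(a: int, b: int) -> int:
    """Looks the answer up; 'understands' nothing about addition."""
    return ADD_TABLE[(a, b)]

# Observationally indistinguishable on the whole (finite) domain:
assert all(add_by_algorithm(a, b) == add_by_table(a, b)
           for a in range(10) for b in range(10))
```

No amount of input/output observation on that domain separates the two; whatever ‘understanding addition’ amounts to, it lives in the internals.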
Only if you can’t examine all of the inputs.

The no free lunch theorems basically say that if you are unlucky enough with your prior, and the problem to be solved is maximally general, then you can’t improve on random sampling/brute-force search, which requires you to examine every input; no algorithm that skips inputs can do better on average.
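As a finite sanity check of that reading (a toy setup I made up, not Wolpert and Macready’s actual theorem): enumerate every objective function on a four-point domain and compare a fixed query order against an adaptive strategy. Under a uniform prior over functions, their average performance comes out exactly equal.

```python
# Tiny empirical illustration of the no-free-lunch idea: averaged over
# ALL objective functions on a small domain, an adaptive search strategy
# does no better than fixed-order enumeration. Toy setup, assumed names.
from itertools import product

DOMAIN = range(4)   # search space X
VALUES = range(3)   # codomain Y, so there are 3**4 = 81 functions

def fixed_order(f):
    """Query points 0,1,2,3 in order; return the observed values."""
    return [f[x] for x in DOMAIN]

def adaptive(f):
    """A deliberately 'clever' non-repeating strategy: always jump to the
    unseen point farthest from the best point found so far."""
    unseen, seq, best_x, x = set(DOMAIN), [], 0, 0
    while unseen:
        unseen.discard(x)
        seq.append(f[x])
        if f[x] >= max(seq):
            best_x = x
        if unseen:
            x = max(sorted(unseen), key=lambda u: abs(u - best_x))
    return seq

def time_to_max(seq):
    """Performance: queries needed until the best value of the run
    first shows up (a statistic of the observed sequence only)."""
    return seq.index(max(seq)) + 1

for strategy in (fixed_order, adaptive):
    total = sum(time_to_max(strategy(dict(enumerate(vals))))
                for vals in product(VALUES, repeat=len(DOMAIN)))
    print(strategy.__name__, total / len(VALUES) ** len(DOMAIN))
# Both lines print the same average: with a uniform prior over
# functions, no query order beats any other.
```

Bias the prior towards structured functions, though, and an adaptive strategy matched to that structure can start winning; that’s the ‘unlucky with your prior’ clause doing the work.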
It’s closer to a maximal-inefficiency or inapproximability result for intelligence than an impossibility result, which is still very important.
I think Searle would disagree. But I also think this entire thought experiment is dumb.