IMO, the entire Chinese room thought experiment dissolves into clarity once we remember that our intuitive notion of understanding is formed around algorithms that are not lookup tables; building anything close to an exhaustive look-up table is infeasible in reality, so our intuitions go wrong in extreme cases like this one.
I agree with the Discord comments here on this point:
The part of the argument I contest is step 2 of this summarized version:
1. If Strong AI is true, then there is a program for Chinese, C, such that if any computing system runs C, that system thereby comes to understand Chinese.
2. I could run C without thereby coming to understand Chinese.
3. Therefore Strong AI is false.
Or this argument here:
Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.
As stated, if the computer program is accessible to him, then for all interactive purposes he does understand Chinese until the program is taken away (assuming the program completely characterizes Chinese and works correctly for all inputs).
I think the key issue is that people don't want to accept that, if we were completely unconstrained physically, even a very large look-up table would be a valid way to build a useful AI.
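To make the point concrete, here is a toy sketch (my own illustration, not from the original argument): over any finite domain of inputs, an algorithmic responder and a pre-computed look-up table are behaviorally indistinguishable from the outside. The rule used (reversing the string) and the example inputs are arbitrary stand-ins for "a program for Chinese".

```python
# A hypothetical rule-based responder standing in for "the program for
# Chinese": any deterministic function of the input would do.
def algorithmic_reply(question: str) -> str:
    return question[::-1]

# With no physical constraints, we could tabulate the algorithm over the
# entire (finite) domain ahead of time.
DOMAIN = ["ni hao", "xie xie", "zai jian"]
LOOKUP_TABLE = {q: algorithmic_reply(q) for q in DOMAIN}

def table_reply(question: str) -> str:
    # Pure look-up: no computation resembling "understanding" happens here.
    return LOOKUP_TABLE[question]

# Any interrogation confined to the covered domain cannot tell them apart.
assert all(algorithmic_reply(q) == table_reply(q) for q in DOMAIN)
```

Our intuitions about understanding track the algorithmic version; the table version only looks absurd because, for real languages, the table would be astronomically large.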