I agree that the Chinese Room thought experiment dissolves into clarity once you realize that it is the room as a whole, not just the person, that implements the understanding.
But then wouldn’t the mapping, like the inert book in the room, need to be included in the system?
IMO, the entire Chinese Room thought experiment dissolves into clarity once we remember that our intuitive notion of understanding is formed around algorithms that are not lookup tables; building anything close to an exhaustive look-up table is infeasible in reality, so our intuitions go wrong when pushed to that extreme case.
I agree with the Discord comments here on this point:
The portion of the argument I contest is step 2 below (this is a summarized version; a toy formalization of its shape follows the quoted steps):
1. If Strong AI is true, then there is a program for Chinese, C, such that if any computing system runs C, that system thereby comes to understand Chinese.
2. I could run C without thereby coming to understand Chinese.
3. Therefore Strong AI is false.
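Since step 2 carries all the weight, here is a toy formalization of the summarized argument's shape, just as a sketch in Lean. `System`, `Program`, `Runs`, and `Understands` are invented placeholders, and premise 2 is rendered as "whichever program you pick, some system can run it without understanding," which is exactly the contested step.

```lean
theorem summarized_argument
    (System Program : Type)
    (StrongAI : Prop)
    (Runs : System → Program → Prop)
    (Understands : System → Prop)
    -- Premise 1: if Strong AI is true, some program confers understanding
    -- on any system that runs it.
    (premise1 : StrongAI → ∃ C : Program, ∀ s : System, Runs s C → Understands s)
    -- Premise 2 (the contested step): any program can be run by some system
    -- that does not thereby understand Chinese.
    (premise2 : ∀ C : Program, ∃ s : System, Runs s C ∧ ¬ Understands s) :
    ¬ StrongAI :=
  fun hStrong =>
    match premise1 hStrong with
    | ⟨C, hC⟩ =>
      match premise2 C with
      | ⟨s, ⟨hRuns, hNot⟩⟩ => hNot (hC s hRuns)
```

On this rendering the conclusion follows mechanically, so everything rests on whether premise 2 is actually true of the room-plus-book system rather than just of the man.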
Or this argument here:
Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.
As stated, if the computer program is accessible to him, then for all practical purposes he does understand Chinese, at least for the purposes of interacting, until the program is taken away (assuming the program completely characterizes Chinese and works correctly for all inputs).
I think the key issue is that people don't want to accept that, if we were completely physically unconstrained, even an enormous look-up table would be a valid way to build a useful AI.
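To make the look-up-table point concrete, here is a minimal Python sketch (the table entries are invented toy examples) of what an AI built as a giant look-up table amounts to: all the cognitive work sits with whoever authored the table, and the runtime just retrieves entries.

```python
# Toy sketch of an "AI" that is nothing but a lookup table.
# In the thought experiment the table would have an entry for every
# possible input; here it has a few invented placeholder entries.

RESPONSES = {
    "你好": "你好！有什么可以帮你的吗？",
    "今天天气怎么样？": "我看不到外面，所以说不准。",
    "你懂中文吗？": "当然懂。",
}

def room(incoming_slip: str) -> str:
    """The man in the room: find the slip in the book, copy out the reply.
    No step here involves understanding what the characters mean."""
    return RESPONSES.get(incoming_slip, "对不起，我不明白。")

if __name__ == "__main__":
    print(room("你好"))           # 你好！有什么可以帮你的吗？
    print(room("你懂中文吗？"))    # 当然懂。
```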
The book in the room isn’t inert, though. It instructs the little guy on what to do as he manipulates symbols and stuff. As such, it is an important part of the computation that takes place.
The mapping of popcorn-to-computation, though, doesn’t do anything equivalent to this. It’s just an off-to-the-side interpretation of what is happening in the popcorn: it does nothing to move the popcorn or cause it to be configured in any particular way. It doesn’t even have to exist: if you just know that in theory there is a way to map the popcorn to the computation, then if (by the terms of the argument) the computation itself is sufficient to generate consciousness, the popcorn should be able to do it as well, with the mapping left as an exercise for the reader. Otherwise you are implying some special property of a headful of meat such that it does not need to be interpreted in this way for its computation to be equivalent to consciousness.
That doesn’t quite follow for me. The book seems just as inert as the popcorn-to-consciousness map. The book doesn’t change the incoming slip of paper (the popcorn) in any way; it just supplies an inert, static mapping that yields an outgoing slip of paper (consciousness), via the map-and-popcorn-analyzing agent who lacks understanding (the man in the room).
The book in the Chinese Room directs the actions of the little man in the room. Without the book, the man doesn’t act, and the text doesn’t get translated.
The popcorn map, on the other hand, doesn’t direct the popcorn to do what it does. The popcorn does what it does, and then the map is generated post hoc to explain how what the popcorn did corresponds to some particular calculation.
You can say that “oh well, then, the popcorn wasn’t really conscious until the map was generated; it was the additional calculations that went into generating the map that really caused the consciousness to emerge from the calculating” and then you’re back in Chinese Room territory. But if you do this, you’re left with the task of explaining how a brain can be conscious solely by means of executing a calculation before anyone has gotten around to creating a map between brain-states and whatever the relevant calculation-states might be. You have to posit some way in which calculations capable of embodying consciousness are inherent to brains but must be interpreted into being elsewhere.
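To put the book-versus-map contrast above in code (a sketch with invented names, entries, and toy states, not anyone's actual proposal): the rule book is consulted while the output is being produced, whereas the popcorn map is only assembled afterwards from a record of what the popcorn already did.

```python
# Contrast sketch: the rule book participates in producing the output,
# while the popcorn map is assembled afterwards from a record of what
# already happened. All names, states, and entries are invented toys.

RULE_BOOK = {"你好吗？": "我很好。"}  # consulted *during* the computation

def chinese_room(incoming_slip: str) -> str:
    """The book directs the man's next action; without it, nothing happens."""
    return RULE_BOOK[incoming_slip]

def posthoc_popcorn_map(observed_pops: list[str],
                        computation_trace: list[str]) -> dict[str, str]:
    """Pair each already-observed popcorn event with a computation step.
    Nothing here moves the popcorn; it is after-the-fact labeling only."""
    return dict(zip(observed_pops, computation_trace))

print(chinese_room("你好吗？"))                   # the book is in the loop
observed = ["kernel_3", "kernel_7", "kernel_1"]   # the popcorn just popped
trace = ["step_0", "step_1", "step_2"]            # some target computation
print(posthoc_popcorn_map(observed, trace))       # the map comes afterwards
```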
You believe that something inert cannot be doing computation. I agree. But you seem to think it’s coherent that a system with no action—a post-hoc mapping of states—can be.
The place where comprehension of Chinese exists in the Chinese Room is the creation of the mapping: the mapping itself is a static object, and the person in the room, by assumption, is doing no cognitive work, just looking up entries. “But wait!” we can object, “this means that the Chinese Room doesn’t understand Chinese!” And I think that’s the point of confusion: repeating answers that someone else tells you isn’t the same as understanding, and the fact that the “someone else” wrote the answers down changes nothing. The question is where and when the computation occurred.
In our scenarios there are a couple of different computations, but the creation of the mapping unfairly sneaks in the conclusion that the execution of the computation, which is required to build the mapping, isn’t what creates consciousness!
Good point. The problem I have with that is that in every listed example, the mapping either requires executing the conscious mind and reading out its outputs and process in order to build it, or it stipulates that the mind is well enough understood that it can be mapped onto an arbitrary process, which implicitly also requires that it was run elsewhere.
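A sketch of the circularity being described (every function and state name here is an invented stand-in): before the map can even be written down, the mind-computation has to be executed somewhere to produce the trace that the popcorn states get paired with.

```python
# Sketch of the objection: constructing the popcorn-to-mind map requires a
# readout of the mind-computation's states, so that computation must have
# been executed somewhere first. Everything here is an invented stand-in.

def run_mind_computation() -> list[str]:
    """Stand-in for actually executing the conscious computation and
    recording its successive states."""
    return ["mind_state_0", "mind_state_1", "mind_state_2"]

def build_popcorn_map(popcorn_trace: list[str]) -> dict[str, str]:
    mind_trace = run_mind_computation()          # the real work happens here,
    return dict(zip(popcorn_trace, mind_trace))  # not in the popcorn

mapping = build_popcorn_map(["pop_a", "pop_b", "pop_c"])
# The map only exists because run_mind_computation() already ran elsewhere.
print(mapping)
```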