That doesn’t quite follow to me. The book seems just as inert as the popcorn-to-consciousness map. The book doesn’t change the incoming slip of paper (popcorn) in any way; it just responds with an inert, static map to produce an outgoing slip of paper (consciousness), by way of a map-and-popcorn-analyzing agent who lacks understanding (the man in the room).
The book in the Chinese Room directs the actions of the little man in the room. Without the book, the man doesn’t act, and the text doesn’t get translated.
The popcorn map, on the other hand, doesn’t direct the popcorn to do what it does. The popcorn does what it does, and then the map is generated post hoc to explain how what the popcorn did corresponds to some particular calculation.
You can say that “oh well, then, the popcorn wasn’t really conscious until the map was generated; it was the additional calculations that went into generating the map that really caused the consciousness to emerge from the calculating” and then you’re back in Chinese Room territory. But if you do this, you’re left with the task of explaining how a brain can be conscious solely by means of executing a calculation before anyone has gotten around to creating a map between brain-states and whatever the relevant calculation-states might be. You have to posit some way in which calculations capable of embodying consciousness are inherent to brains but must be interpreted into being elsewhere.
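That contrast between a rule book that drives behavior and a map assembled after the fact can be made concrete in a toy sketch. This is purely illustrative; every name and data structure here is made up for the example, not drawn from either thought experiment’s original formulation:

```python
# Toy sketch of the two scenarios. All names here are hypothetical.

# The "popcorn": a sequence of physical states that already happened,
# fixed before any map exists.
popcorn_states = ["kernel", "half-popped", "popped", "burnt"]

# A computation we decide, afterward, to "find" in the popcorn:
# the trace of a counter stepping 0, 1, 2, 3.
computation_trace = [0, 1, 2, 3]

# The post-hoc map: built AFTER the popcorn did its thing.
# Nothing in this dict ever directed the popcorn's behavior.
post_hoc_map = dict(zip(popcorn_states, computation_trace))

# By contrast, the Chinese Room's book determines the output BEFORE
# the man acts; without it, nothing happens.
rule_book = {"kernel": "half-popped", "half-popped": "popped"}

def man_in_room(symbol):
    """Follows the book with no understanding; the book drives the action."""
    return rule_book[symbol]

print(post_hoc_map["popped"])  # 2: a label applied after the fact
print(man_in_room("kernel"))   # half-popped: the book directed this step
```

The asymmetry the discussion turns on is visible in the code: `rule_book` is causally upstream of `man_in_room`’s behavior, while `post_hoc_map` is causally downstream of everything the popcorn did.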
You believe that something inert cannot be doing computation. I agree. But you seem to think it’s coherent that a system with no action, a post-hoc mapping of states, can be.
The place where comprehension of Chinese exists in the Chinese Room is the creation of the mapping. The mapping itself is a static object, and the person in the room, by assumption, is doing no cognitive work, just looking up entries. “But wait!” we can object, “this means that the Chinese Room doesn’t understand Chinese!” And I think that’s the point of confusion: repeating answers that someone else told you isn’t the same as understanding. The fact that the “someone else” wrote down the answers changes nothing. The question is where and when the computation occurred.
In our scenarios, there are a couple of different computations, but the creation of the mapping unfairly sneaks in the conclusion that the execution of the computation required to build the mapping isn’t what creates consciousness!