That doesn’t quite follow to me. The book seems just as inert as the popcorn-to-consciousness map. The book doesn’t change the incoming slip of paper (popcorn) in any way; it just responds with an inert, static mapping to produce an outgoing slip of paper (consciousness), by way of a map-and-popcorn-analyzing agent who lacks understanding (the man in the room).
The book in the Chinese Room directs the actions of the little man in the room. Without the book, the man doesn’t act, and the text doesn’t get translated.
The popcorn map, on the other hand, doesn’t direct the popcorn to do what it does. The popcorn does what it does, and then the map is generated post hoc to explain how what the popcorn did corresponds to some particular calculation.
You can say that “oh well, then, the popcorn wasn’t really conscious until the map was generated; it was the additional calculations that went into generating the map that really caused the consciousness to emerge from the calculating” and then you’re back in Chinese Room territory. But if you do this, you’re left with the task of explaining how a brain can be conscious solely by means of executing a calculation before anyone has gotten around to creating a map between brain-states and whatever the relevant calculation-states might be. You have to posit some way in which calculations capable of embodying consciousness are inherent to brains but must be interpreted into being elsewhere.