The book in the room isn’t inert, though. It instructs the little guy on what to do as he manipulates symbols and stuff. As such, it is an important part of the computation that takes place.
The mapping of popcorn-to-computation, though, doesn’t do anything equivalent to this. It’s just an off-to-the-side interpretation of what is happening in the popcorn: it does nothing to move the popcorn or cause it to be configured in any particular way. It doesn’t even have to exist: if you just know that in theory there is a way to map the popcorn to the computation, then, if (by the terms of the argument) the computation itself is sufficient to generate consciousness, the popcorn should be able to generate it as well, with the mapping left as an exercise for the reader. Otherwise you are implying some special property of a headful of meat such that it does not need to be interpreted in this way for its computation to be equivalent to consciousness.
That doesn’t quite follow to me. The book seems just as inert as the popcorn-to-consciousness map. The book doesn’t change the incoming slip of paper (popcorn) in any way; it just responds with an inert, static map to produce an outgoing slip of paper (consciousness), by way of the map-and-popcorn-analyzing agent who lacks understanding (the man in the room).
The book in the Chinese Room directs the actions of the little man in the room. Without the book, the man doesn’t act, and the text doesn’t get translated.
The popcorn map, on the other hand, doesn’t direct the popcorn to do what it does. The popcorn does what it does, and the map is then generated post hoc to explain how what the popcorn did corresponds to some particular calculation.
You can say, “Oh well, then, the popcorn wasn’t really conscious until the map was generated; it was the additional calculation that went into generating the map that really caused the consciousness to emerge from the calculating,” and then you’re back in Chinese Room territory. But if you do this, you’re left with the task of explaining how a brain can be conscious solely by means of executing a calculation before anyone has gotten around to creating a map between brain states and whatever the relevant calculation states might be. You have to posit some way in which calculations capable of embodying consciousness are inherent to brains but must be interpreted into being elsewhere.