This reminds me of my intuitive rejection of the Chinese Room thought experiment, in which the intuition pump seems to rely on the little guy in the room not knowing Chinese, when it’s obviously the whole mechanism (the room, the books in the room, etc.) that is doing the “knowing” while the little guy is just a cog.
Part of what makes the rock/popcorn/wall thought experiment more appealing, even given your objections here, is that even if you imagine that you have offloaded the complex mapping somewhere else, the actual thinking-action that the mapping interprets is happening in the rock/popcorn/wall. The mapping itself is inert and passive at that point. So if you imagine consciousness as an activity that is equivalent to a physical process of computation, you still have to imagine it taking place in the popcorn, not in the mapping.
You seem maybe to be implying that we have underinvestigated the claim that one really can arbitrarily map any complex computation onto any finite collection of stuff (e.g., that this would imply we have solved the halting problem). But I think these thought experiments don’t require us to wrestle with that, because they assume ad arguendo that you can instantiate the computations we’re interested in (consciousness) in a headful of meat, and then try to show that if this is the case, many other finite collections of matter ought to be able to do the job just as well.
“the actual thinking-action that the mapping interprets”
I don’t think this is conceptually correct. Looking at the chess-playing waterfall that Aaronson discusses, the mapping itself is doing all of the computation. That the mapping ran in the past doesn’t change the fact that it’s the location of the computation, any more than the fact that it takes milliseconds for my nerve impulses to reach my fingers means that my fingers are doing the thinking in writing this essay. (Though given the typos you found, it would be convenient to blame them.)
they assume ad arguendo that you can instantiate the computations we’re interested in (consciousness) in a headful of meat, and then try to show that if this is the case, many other finite collections of matter ought to be able to do the job just as well.
Yes, they assume that whatever runs the algorithm is experiencing running the algorithm from the inside. And yes, many specific finite systems can do so—namely, GPUs and CPUs, as well as the wetware in our heads. But without the claim that arbitrary items can do these computations, it seems the assumption made arguendo says nothing different from the conclusion—right?
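To make the chess-playing-waterfall point above concrete, here is a minimal toy sketch (not Aaronson’s actual construction; a trivial stand-in game replaces chess): reading moves out of arbitrary “waterfall states” requires a mapping whose construction already did all the game-playing work, which is where the computation lives.

```python
# Toy illustration of the waterfall/mapping point: the waterfall states are
# arbitrary noise, and every bit of "skill" in the interpretation was put
# there by the mapping-builder, which had to solve the game itself.
import random

def best_move(position):
    # Stand-in for a chess engine; here the "game" is just picking the
    # largest number in the position.
    return max(position)

def build_mapping(positions, waterfall_states):
    # The mapping pairs each waterfall state with a precomputed answer.
    return {state: (pos, best_move(pos))
            for state, pos in zip(waterfall_states, positions)}

positions = [(3, 7, 2), (9, 1, 4), (5, 5, 6)]
waterfall_states = [random.random() for _ in positions]  # "physics" doing nothing relevant

mapping = build_mapping(positions, waterfall_states)
for state in waterfall_states:
    pos, move = mapping[state]
    print(f"waterfall state {state:.3f} -> position {pos} -> move {move}")
```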
I agree that the Chinese Room thought experiment dissolves into clarity once you realize that it is the room as a whole, not just the person, that implements the understanding.
But then wouldn’t the mapping, like the inert book in the room, need to be included in the system?
IMO, the entire Chinese Room thought experiment dissolves into clarity once we remember that the intuitive meaning of understanding is formed around algorithms that are not lookup tables, because trying to create an infinite lookup table would be infeasible in reality, so our intuitions go wrong in extreme cases.
I agree with the Discord comments here on this point:
The portion of the argument I contest is step 2 here (this is a summarized version):
1. If Strong AI is true, then there is a program for Chinese, C, such that if any computing system runs C, that system thereby comes to understand Chinese.
2. I could run C without thereby coming to understand Chinese.
3. Therefore Strong AI is false.
Or this argument here:
Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.
As stated, if the computer program is accessible to him, then for all practical purposes he does understand Chinese while interacting, at least until the program is removed (assuming that it completely characterizes Chinese and works correctly for all inputs).
I think the key issue is that people don’t want to accept that if we were completely unconstrained physically, even very large lookup tables would be a valid way to make a useful AI.
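As a minimal sketch of that point (a hypothetical toy, nowhere near the astronomically large table a real conversation would need): at query time a lookup-table “speaker” does nothing algorithmically interesting, which is exactly why it strains the intuitive notion of understanding.

```python
# Hypothetical toy lookup-table "Chinese speaker". Behaviourally it answers,
# but all the apparent competence was baked in when the table was written.
lookup_table = {
    "你好": "你好！有什么可以帮你的吗？",
    "今天天气怎么样？": "我看不到窗外，不过希望是晴天。",
}

def room_reply(chinese_input: str) -> str:
    # The man in the room just matches symbols against the table.
    return lookup_table.get(chinese_input, "对不起，我不明白。")

print(room_reply("你好"))
print(room_reply("今天天气怎么样？"))
```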
The book in the room isn’t inert, though. It instructs the little guy on what to do as he manipulates symbols and stuff. As such, it is an important part of the computation that takes place.
The mapping of popcorn-to-computation, though, doesn’t do anything equivalent to this. It’s just an off-to-the-side interpretation of what is happening in the popcorn: it does nothing to move the popcorn or cause it to be configured in such a way. It doesn’t even have to exist: if you just know that in theory there is a way to map the popcorn to the computation, then if (by the terms of the argument) the computation itself is sufficient to generate consciousness, the popcorn should be able to do it as well, with the mapping left as an exercise for the reader. Otherwise you are implying some special property of a headful of meat such that it does not need to be interpreted in this way for its computation to be equivalent to consciousness.
That doesn’t quite follow to me. The book seems just as inert as the popcorn-to-consciousness map. The book doesn’t change the incoming slip of paper (popcorn) in any way; it just responds with an inert static map to produce an outgoing slip of paper (consciousness), utilizing the map-and-popcorn-analyzing agent who lacks understanding (the man in the room).
The book in the Chinese Room directs the actions of the little man in the room. Without the book, the man doesn’t act, and the text doesn’t get translated.
The popcorn map, on the other hand, doesn’t direct the popcorn to do what it does. The popcorn does what it does, and then the map is generated post hoc to explain how what the popcorn did maps to some particular calculation.
You can say that “oh well, then, the popcorn wasn’t really conscious until the map was generated; it was the additional calculations that went into generating the map that really caused the consciousness to emerge from the calculating” and then you’re back in Chinese Room territory. But if you do this, you’re left with the task of explaining how a brain can be conscious solely by means of executing a calculation before anyone has gotten around to creating a map between brain-states and whatever the relevant calculation-states might be. You have to posit some way in which calculations capable of embodying consciousness are inherent to brains but must be interpreted into being elsewhere.
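A rough sketch of the post-hoc mapping being described (a toy, with a trivial computation standing in for consciousness): the correspondence between popcorn events and computation states is written down only after both already exist, and it is perfect by construction, which is exactly why it says nothing about the popcorn itself.

```python
# Toy post-hoc mapping: pair the state trace of a genuine computation with
# whatever the "popcorn" happened to do, after the fact.
import random

def computation_trace(n):
    # A real computation with real intermediate states: running totals of 1..n.
    total, trace = 0, []
    for i in range(1, n + 1):
        total += i
        trace.append(total)
    return trace

popcorn_events = [random.random() for _ in range(5)]  # arbitrary popping events
trace = computation_trace(5)                          # states of an actual computation

# The "map" only comes into existence once both sequences are already on the table.
post_hoc_map = dict(zip(popcorn_events, trace))

for pop, state in post_hoc_map.items():
    print(f"popcorn event {pop:.3f} 'was' computation state {state}")
```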