Searle’s Chinese Room is a great (awful) case to test out how well people think. The argument can be attacked (successfully) in so many different ways, it is a good marker of both ability to analyze an argument and ability to think creatively. Even better if after your interlocutor kills the argument one way, you ask him or her to kill it another, different way. (Then repeat as desired.)
What do you mean by “great (awful)”? Do you mean that the thought experiment itself is an awful argument against AI, but describing the argument is a good way to test how people think?
Yes, that’s exactly what I mean. The argument itself is terrible. But it invites so many reasonable challenges that it is still very useful as a test of clear thinking. So, awful argument; great test case.
On a related note, I remember the day my PhD advisor (a computability theorist!) revealed that he believed the argument against AI from Gödel’s incompleteness theorem. It was not reassuring.
Smarter than human AI, or artificial human level general intelligence?
The latter.
Ya.
Picture a room larger than the Library of Congress that takes a million years to answer the simplest question, and the argument entirely dissolves. Imagine the nonsense Searle wants you to imagine (a small room that talks fast enough), take the possibility of such a room as a postulate, and you have built yourself a logically inconsistent system* in which you can prove anything, including the impossibility of AI.
*Postulating that, say, the good ol’ ZX Spectrum can run a human-mind-equivalent intelligence in real time on 128 kilobytes of RAM is ultimately postulating a mathematical impossibility, and you should in principle be able to get all the way to 1=2 from there.
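To put rough numbers on that footnote, here is a back-of-envelope sketch; the synapse count and the one-byte-per-synapse encoding are deliberately crude assumptions of mine, chosen to be charitable to the Spectrum:

```python
# Back-of-envelope check (illustrative assumptions, not measurements):
# could 128 KB plausibly hold the state of a human-equivalent mind?
SYNAPSES = 1e14               # rough human synapse count (assumption)
BYTES_PER_SYNAPSE = 1         # absurdly generous: one byte of state each
ZX_SPECTRUM_RAM = 128 * 1024  # 128 KB, in bytes

brain_state_bytes = SYNAPSES * BYTES_PER_SYNAPSE
shortfall = brain_state_bytes / ZX_SPECTRUM_RAM
print(f"Brain state: ~{brain_state_bytes:.0e} bytes")  # ~1e+14
print(f"ZX Spectrum RAM: {ZX_SPECTRUM_RAM} bytes")     # 131072
print(f"Shortfall: ~{shortfall:.0e}x")                 # ~8e+08
```

Even with that absurdly charitable encoding, the postulate is off by roughly nine orders of magnitude, which is the sense in which it smuggles in a mathematical impossibility.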
I’m not sure I understand the Library of Congress bit, but the footnote is exactly right. Even so, that is only one way of resisting Searle’s argument. The point for me is that we can measure cleverness to some tolerance by how many ways one finds to fault the argument. For example:
a. The architecture is completely wrong. People don’t work by simple look-up tables (see the sketch after this list).
b. Failure of imagination. We are asked to imagine something that passes the Turing test. Anyone convinced by the argument is probably not imagining that premiss vividly enough.
c. The argument depends on a fallacy of division/composition. Searle argues that the system does not understand Chinese since none of its parts understand Chinese. But some humans understand Chinese, and it is implausible that any individual human cell understands Chinese. So, the argument is logically flawed.
d. In order to have an interactive conversation, the room needs to have something like a memory or history. Understanding isn’t just about translation but about connecting language to other parts of life.
e. Similarly to (d), the room is not embodied in any interesting way. The room has no perceptual apparatus and no motor functions. Understanding is partly about connecting language to the world. Intelligence is partly about successful navigation in the world. Connect the room to a robot body and then present the case again.
...
Further challenges could be given, I think. But you get the idea.
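To make (a) and (d) concrete, here is a minimal sketch, using a toy two-entry table I invented for illustration, of why a stateless look-up table can’t carry an interactive conversation: any question whose answer depends on what was said earlier has no correct fixed entry.

```python
# Toy illustration (invented examples): a stateless look-up table
# versus a responder that keeps conversational history.
LOOKUP = {
    "What is 2+2?": "4",
    "What is the capital of France?": "Paris",
}

def table_reply(question: str) -> str:
    # No memory: the same question always yields the same answer,
    # and context-dependent questions have no correct fixed entry.
    return LOOKUP.get(question, "???")

class StatefulReplier:
    def __init__(self) -> None:
        self.history: list[str] = []  # minimal conversational memory

    def reply(self, question: str) -> str:
        if question == "What did I just ask you?":
            answer = self.history[-1] if self.history else "Nothing yet."
        else:
            answer = LOOKUP.get(question, "???")
        self.history.append(question)
        return answer

print(table_reply("What did I just ask you?"))  # ??? -- no fixed entry fits
r = StatefulReplier()
r.reply("What is 2+2?")
print(r.reply("What did I just ask you?"))      # What is 2+2?
```

One could patch the table by keying it on entire conversation histories, but that is exactly the “archive of answers to all possible questions” move discussed below, and it explodes combinatorially.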
I meant that the room has to store many terabytes of information, very well organized too (for the state dump of a Chinese-speaking person). It’s a very big room, library-sized, and an enormous amount of paper gets processed, over an enormous span of time, before it says anything.
The argument relies on imagining a room that couldn’t possibly have understood anything; imagine the room ‘to scale’, and the timing to scale as well, and the assertion that the room couldn’t possibly have understood anything loses ground.
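For concreteness, a hedged estimate of that scale; the synapse count, bytes per synapse, page capacity, and sheet thickness below are all rough assumptions of mine:

```python
# Rough scale of the room (all figures are illustrative assumptions):
# how much paper would the state dump of a Chinese speaker occupy?
SYNAPSES = 1e14           # rough human synapse count (assumption)
BYTES_PER_SYNAPSE = 4     # a few bytes of state each (assumption)
BYTES_PER_PAGE = 3_000    # ~3 KB of dense text per page (assumption)
SHEET_THICKNESS_M = 1e-4  # ~0.1 mm per sheet (assumption)

state_bytes = SYNAPSES * BYTES_PER_SYNAPSE    # ~4e14 B, i.e. ~400 TB
pages = state_bytes / BYTES_PER_PAGE          # ~1.3e11 pages
stack_km = pages * SHEET_THICKNESS_M / 1_000  # ~13,000 km of paper
print(f"State dump: ~{state_bytes / 1e12:.0f} TB, ~{pages:.1e} pages")
print(f"Stacked: ~{stack_km:,.0f} km of paper")
```

Under those assumptions the rule books alone would stack to roughly 13,000 km, at which point the intuition “obviously this couldn’t understand anything” has nothing left to push against.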
There’s another argument like the Chinese Room, about a giant archive of answers to all possible questions. It works by severely under-imagining the size of the archive, too.
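To see how badly, a quick count, considering only single-turn questions over a small alphabet at a modest length; both parameters are arbitrary assumptions, and the conclusion is insensitive to them:

```python
import math

# How big is an archive holding an answer to every possible question?
# Alphabet size and question length are illustrative assumptions.
ALPHABET = 27             # a-z plus space (assumption)
MAX_LEN = 200             # modest question length in chars (assumption)
ATOMS_IN_UNIVERSE = 1e80  # common rough estimate

log10_questions = MAX_LEN * math.log10(ALPHABET)
print(f"Distinct {MAX_LEN}-char questions: ~10^{log10_questions:.0f}")  # ~10^286
print(f"Atoms in observable universe: ~10^{math.log10(ATOMS_IN_UNIVERSE):.0f}")
# The archive overshoots any physically possible storage by ~200 orders
# of magnitude, before multi-turn conversations are even considered.
```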
Agreed.