Hm. The Chinese Room seems to be different in my head than it is on Wikipedia. I guess I assumed that writing a book that covers all possible inputs convincingly would necessarily involve a lot of brute force.
Well, the man in the Chinese Room is supposed to be manually ‘stepping through’ an algorithm that can respond intelligently to questions in Chinese. He’s not necessarily just “matching up” inputs with outputs, although Searle wants you to think that he may as well just be doing that.
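To make that distinction concrete, here is a toy sketch of my own (not anything from Searle's paper, and hopelessly simplified): the first function is pure input/output matching via a lookup table, while the second steps through rules that apply to the structure of the input, so it can respond to questions nobody enumerated in advance.

```python
# Toy contrast between the two readings of the room (illustrative only;
# neither is remotely adequate for real Chinese conversation).

# Reading 1: pure input/output matching -- a giant lookup table.
LOOKUP = {
    "你好吗？": "我很好，谢谢。",
    # ...one entry per anticipated question; astronomically many would be needed.
}

def lookup_room(question: str) -> str:
    # Only questions already in the book get a sensible answer.
    return LOOKUP.get(question, "请再说一遍。")  # "please say that again"

# Reading 2: stepping through an algorithm -- rules applied to the
# structure of the input, so novel questions can still get answers.
def algorithmic_room(question: str) -> str:
    if question.endswith("吗？"):   # yes/no question marker
        return "是的。"             # "yes"
    if "为什么" in question:        # "why"
        return "因为情况就是这样。"   # "because that's how it is"
    return "请再说一遍。"
```

The point is only that the second version compresses its competence into rules rather than enumerating cases, which is presumably what any remotely plausible room would have to do.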
Searle seems to have very little appreciation of how complicated his program would have to be, though to be fair, his intuitions were shaped by chatbots like Eliza.
Anyway, the “Systems Reply” is correct (hurrah—we have a philosophical “result”). Even those philosophers who think this is in some way controversial ought to agree that it’s irrelevant whether the man in the room understands Chinese, because he is analogous to the CPU, not the program.
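A minimal sketch of that CPU/program analogy, again my own illustration rather than anything in the original paper: the interpreter below plays the role of the man, the rulebook plays the role of the program. The interpreter's code contains no knowledge of Chinese; whatever competence exists lives in the rulebook it blindly executes.

```python
# The Systems Reply intuition in miniature (illustrative sketch only).

RULEBOOK = [
    # (condition on the input, response) -- stands in for the room's rules.
    (lambda q: q.endswith("吗？"), "是的。"),
    (lambda q: "为什么" in q, "因为情况就是这样。"),
]

def interpreter(rulebook, question: str) -> str:
    # The "man in the room": mechanically tries each rule in order,
    # with no idea what any of the symbols mean.
    for condition, response in rulebook:
        if condition(question):
            return response
    return "请再说一遍。"
```

Asking whether `interpreter` understands Chinese is like asking whether the man does; the interesting question is about the interpreter-plus-rulebook system as a whole.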
Therefore, his thought experiment has zero value—if you can imagine a conscious machine then you can imagine the “Systems Reply” being correct, and if you can’t, you can’t.
Searle is an idiot. The nebulous "understanding" he talks about in the original paper is obviously informationally contained in the algorithm. The degree to which someone believes that "understanding" can't be contained in an algorithm is the degree to which they believe in dualism. Just because executing an algorithm from the inside feels like something we label "understanding" doesn't make it magic.