My blog has returned to the world of the living, accompanied by a post pointing out why solipsism is really silly, aimed at readers without much of a background in philosophy. I would be grateful for any criticisms, suggestions, or other comments LessWrongians have to offer.
I will make a second post by Monday, December 27th, probably about Searle’s Chinese Room and why it’s dumb.
Does anyone other than Searle actually consider Searle’s Chinese Room a compelling argument? (And if so, what do they consider it a compelling argument for?)
I consider it an interesting (though flawed) argument that humans aren’t actually conscious or intelligent.
Unfortunately, yes. (Or if not compelling, at least respectable.)
Well, when you put it that way, I guess I consider it a respectable argument, myself.
That is, it’s a useful exercise for starting to think rigorously about what it means to be a mind. That’s what thought experiments are for, after all: to make you think about things you might not have thought about otherwise. That function deserves respect.
If you decide the Chinese Box really does understand Chinese, that implies certain things about the nature of understanding. If you decide the Chinese Box simply can’t exist at all, that implies other things. If you decide it could understand Chinese if only X or Y, ditto. If you decide that neither the Chinese Box nor any other system is actually capable of understanding Chinese, ibid.
But Searle really does seem to believe that it provides a reason to conclude one way over another, and that seems downright bizarre to me.