Hm, I have a lot of problems with Searle’s argument. But even if you skip over all of the little issues, such as “The Turing Test is not a reasonable test of conscious experience”, I think its biggest flaw is this assumption:
The intuition that the Chinese room follows a purely syntactic (symbol-manipulating) process rather than a semantic (understanding) one is a correct philosophical judgement.
If you begin with the theory that consciousness arises from information-theoretic properties of a computation (such as Koch and Tononi’s Integrated Information Theory), then while you may reach some unintuitive conclusions, you certainly don’t reach any contradiction, which means Searle’s argument is not a sufficient disproof of AI’s conscious experience.
Instead, you simply hit the conclusion that, for some implementations of rulesets, the human-ruleset system IS conscious and DOES understand Chinese, in the same sense that a native speaker does.
I think we can undo the intuition scrambling by stating that the ruleset is analogous to a human brain, and the human carrying out the mindless computation is analogous to the laws of physics themselves. Do we demand that “the laws of physics” understand Chinese in order to say that a human does? Of course not. So why does it make sense to demand that the human (who, in the Chinese room, is really playing the same role as physics) understand Chinese in order to believe that the room-human system does?
It doesn’t.
I think the argument obscures what might be a genuine point, which I look at here: http://lesswrong.com/lw/lxi/hedoniums_semantic_problem/