I was wondering if someone could point me to a good LW article or refutation of Searle’s Chinese Room argument, and of his views on consciousness in general. A search turns up a lot of articles mentioning it, but I assume it is addressed in some form in the Sequences?
The Zombie sequence may be related. (We’ll see if I can actually link it here.) As far as the Chinese Room goes:
I think a necessary condition for consciousness is approximating a Bayesian update. So in the (ridiculous) version where the rules for speaking Chinese have no ability to learn, they also can’t be conscious.
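For concreteness, a single Bayesian update can be sketched like this (my own illustration; the thread itself gives no formalism, and the numbers are arbitrary):

```python
# One application of Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).
def bayes_update(prior: float, likelihood: float, marginal: float) -> float:
    """Posterior probability of hypothesis H after observing evidence E."""
    return likelihood * prior / marginal

# Example: prior P(H) = 0.5, P(E|H) = 0.8, P(E|~H) = 0.2.
prior = 0.5
p_e = 0.8 * prior + 0.2 * (1 - prior)   # total probability of E
posterior = bayes_update(prior, 0.8, p_e)  # -> 0.8
```

A system whose rules never feed evidence back into its priors like this never changes its dispositions, which is what the comment above means by a rulebook with “no ability to learn.”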
Searle talks about “understanding” Chinese. Now, the way I would interpret this word depends on context—that’s how language works—but normally I’d incline towards a Bayesian interpretation of “understanding” as well. So this again might depend on something Searle left out of his scenario, though the question might not have a fixed meaning.
Some versions of the “Chinese Gym” have many people working together to implement the algorithm. Now, your neurons are all technically alive in one sense. I genuinely feel unsure how much consciousness a single neuron can have. If I decide to claim it’s comparable to a man blindly following rules in a room, I don’t think Searle could refute this. (I also don’t think it makes sense to say one neuron alone can understand Chinese; neurologists, feel free to correct me.) So what is his argument supposed to be?
Thanks for the pointer to the Zombie sequence. I’ve read part of it in the past and didn’t think it addressed the issue, but I will revisit it.
What about it seems worth refuting?
Well, the way it shows that you cannot get consciousness from syntactic symbol manipulation. And a Bayesian update is also a type of syntactic symbol manipulation, so I am not clear why you are treating it differently. Are you sure you are not assuming that consciousness arises algorithmically in order to justify your conclusion, thereby introducing circularity into your logic?
I don’t know. Many people reject the ‘Chinese Room’ argument as naive, but I haven’t yet understood why, so I am honestly open to the possibility that I am missing something.
I repeat: show that none of your neurons have consciousness separate from your own.
Why on Earth would you think Searle’s argument shows anything, when you can’t establish that you aren’t a Chinese Gym? In order to even cast doubt on the idea that neurons are people, don’t you need to rely on functionalism or a similar premise?
(I am not sure at all about all this so please correct me if you recognise any inconsistencies)
First of all, I honestly don’t understand your claim that neurons have consciousness separate from our own. I don’t know, but I certainly don’t have any indication of that...
Why on Earth would you think Searle’s argument shows anything, when you can’t establish that you aren’t a Chinese Gym?
The point is that the brain is not a Turing machine, since it does not seem to be digital. A Chinese Gym would still be a syntactic system that passes ‘instructions’ between people. This is related to the way Giulio Tononi is attempting to solve the problem of consciousness with his Phi theory.
I don’t remember if the Sequences cover it. But if you haven’t already, you might check out SEP’s section on Replies to the Chinese Room Argument.
That is great! Thanks :)