A few days ago I asked for LW articles regarding the Chinese Room argument and got into a conversation with the user hairyfigment. As I am certainly not convinced of the validity of the Chinese Room argument myself, I tried to understand the Chinese gym extension of the argument and whether/why it matters to the original point. In particular I pointed to the relevance of the brain not evidently being a digital computer. I went back to the 2014 book The Future of the Brain: Essays by the World’s Leading Neuroscientists, a recent exposition of our current (quite poor) understanding of the brain, and in particular to the chapter The Computational Brain by Gary Marcus. Here are some quotes that I believe are relevant. Unfortunately I cannot provide the full chapter for copyright reasons, but I do recommend the book.
[...] we still haven’t even resolved the basic question of whether brains are analog, digital, or (as I suspect but certainly can’t prove) a hybrid of the two.
and
Going hand in hand with the neural network community’s odd presumption of initial randomness was a needless commitment to extreme simplicity, exemplified by models that almost invariably included a single neuronal type, abstracted from the details of biology. We now know that there are hundreds of different kinds of neurons, and the exact details—of where synapses are placed, of what kinds of neurons are interconnected where—make an enormous difference. Just in the retina (itself a part of the brain), there are roughly twenty different types of ganglion cells; there, the idea that you could adequately capture what’s going on with a single kind of neuron is absurd. Across the brain as a whole, there are hundreds of different types of neurons, perhaps more than a thousand, and it is doubtful that evolution would sustain such diversity if each type of neuron were essentially doing the same type of thing.
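To make the “single neuronal type” criticism concrete, here is a minimal sketch of my own (not from the book, and only an illustration) of the uniform unit that classic connectionist models repeat everywhere; contrast it with the hundreds of biological cell types Marcus describes.

```python
# A sketch of my own (not from the book): the single, uniform "neuron type"
# that Marcus criticizes. Virtually every unit in a classic artificial neural
# network is this same abstraction, wherever it sits in the model.
import numpy as np

def generic_unit(inputs, weights, bias):
    """One weighted sum passed through one fixed nonlinearity:
    the lone 'neuron type' of most connectionist models."""
    return np.tanh(np.dot(weights, inputs) + bias)

# A whole "brain region" is then built by repeating the identical unit,
# the units differing only in their (initially random) weights.
rng = np.random.default_rng(0)
layer_weights = rng.normal(size=(100, 20))   # 100 identical units, 20 inputs each
layer_biases = rng.normal(size=100)
x = rng.normal(size=20)                      # one input vector shared by all units
activity = np.array([generic_unit(x, w, b)
                     for w, b in zip(layer_weights, layer_biases)])
print(activity.shape)                        # (100,)
```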
Is the non- or partially digital nature of the brain relevant to certain arguments based on neural networks presented in the Sequences?
Does it open the possibility that Searle’s argument on syntactic symbol manipulation might be relevant?
Apart from the digital/analog point, what about the neural complexity and variety? What, if anything, does it show about the current state of AI research?
Ah, you mean to ask if the brain is special in a way that evades our ability to construct an analogy of the Chinese Room argument for it? E.g. “our neurons don’t individually understand English, and my behavior is just the product of a bunch of neurons following the simple laws of chemistry, therefore there is nothing in my body that understands English.”
I think such an argument is a totally valid imitation. It doesn’t necessarily bear on the Chinese Room itself, which is a more artificial case, but it certainly applies to AI in general.
“our neurons don’t individually understand English, and my behavior is just the product of a bunch of neurons following the simple laws of chemistry”
The question is what the word “just” means in that sentence. Ordinarily it means to limit yourself to what is said there. The implication is that your behavior is explained by those simple laws, and not by anything else. But as I pointed out recently, having one explanation does not exclude others. So your behavior can be explained by those simple laws, and at the same time by the fact that you were seeking certain goals, or in other ways. In other words, the argument is false because the word “just” here implies something false.
The implication is that your behavior is explained by those simple laws
I don’t think the laws of physics (chemistry) are actually simple in the case of large systems. Note that this understanding applies to the Chinese Room idea too—the contents of the rules/slips of paper are not “simple” by any means.
But I’m more concerned about a confusion in interpreting
and not by anything else
Are you merely claiming that there are other models which can alternatively be used to explain some or all of the behaviors (instead of trying to understand the lower-level physics/chemistry)? Or are you saying that the physics is insufficient and you must supplement it with something else in order to identify all causes of behavior?
I agree with the first, and disagree with the second.
Are you merely claiming that there are other models which can alternatively be used to explain some or all of the behaviors
There’s that word, “merely,” there, like your other word “just,” which makes me say no to this. You could describe the situation as “there are many models,” but you are likely to be misled by this. In particular, you will likely be misled to think there is a highly accurate model, which is that someone did what he did because of chemicals, and a vague and inaccurate model, which says for example that someone went to the store to buy milk. So rather than talking about models, it is better simply to say that we are talking about two facts about the world:
Fact 1: the person went to the store because of the behavior of chemicals etc.
Fact 2: the person went to the store to buy milk.
These are not “merely” two different models: they are two different facts about the world.
Or are you saying that the physics is insufficient
I said in my comment, “So your behavior can be explained by those simple laws, and at the same time by the fact that you were seeking certain goals.” If the first were insufficient, it would not be an explanation. Both are sufficient, and both are correct.
you must supplement it with something else in order to identify all causes of behavior?
Yes, if by “cause” we mean “explanation,” as is normally meant, then you have to mention both in order to mention all causes, i.e. all explanations, since both are explanations and both are causes.
Fact 1: the person went to the store because of the behavior of chemicals etc. Fact 2: the person went to the store to buy milk.
These are not “merely” two different models: they are two different facts about the world.
Not independent facts, surely. The person went to the store to buy milk because of the behavior of chemicals, right? Even longer chains … because they were thirsty, and they like milk because it reminds them of childhood, because their parents thought it was important for bone growth, because … because … ending eventually with the quantum configuration of the universe at some point. And you can correctly shortcut to there at any point in between.
I said they were two different facts, not two independent facts. So dependent or not (and this question itself is also more confused and complicated than you realize), if you do not mention them both, you are not mentioning everything that is there.
if you do not mention them both, you are not mentioning everything that is there.
Hmm. I don’t think “mention everything that is there” is on my list of goals for such discussions. I was thinking more along the lines of “mention the minimum necessary”. I’m still unclear whether you agree that physics is sufficient to describe all events in the universe including human behavior, even while acknowledging that there are higher-level models which are way easier to understand.
I’m still unclear whether you agree that physics is sufficient to describe all events in the universe including human behavior
It is sufficient to describe them in the way that it does describe them, which certainly includes (among other things) all physical motions. But it is obvious that physics does not make statements like “the person went to the store to buy milk,” even though that is a true fact about the world, and in that way it does not describe everything.
Ok, one more attempt. Which part of “the person went to the store to buy milk” is not described by the quantum configuration of the local space? The person certainly is. Movement toward and in the store certainly is. The neural impulses that correspond to desire for milk very probably are.
Which part of “the person went to the store to buy milk” is not described by the quantum configuration of the local space?
All of it.
The person certainly is.
The person certainly is not; this is why you have arguments about whether a fetus is a person. There would be no such arguments if the question were settled by physics.
Movement toward and in the store certainly is.
Movement is, but stores are not; physics has nothing to say about stores.
The neural impulses that correspond to desire for milk very probably are.
Indeed, physics contains neural impulses that correspond to the desire for milk, but it does not contain desire, nor does it contain milk.
Hmm.. I do not think that is what I mean, no. I lean towards agreeing with Searle’s conclusion but I am examining my thought process for errors.
Searle’s argument is not that consciousness is not created in the brain. It is that consciousness is not based on syntactic symbol manipulation in the way a computer is, and for that reason it is not going to be simulated by a computer with our current architecture (binary, logic gates, etc.), as the AI community thought (and still thinks). He does not deny that we might discover the architecture of the brain in the future. All he does is demonstrate through analogy how syntactic operations work.
In the Chinese gym rebuttal the issue is not really addressed. Searle does not deny that the brain is a system with subcomponents, through whose structure consciousness emerges. That is a different discussion. He is arguing that the system must be doing something different from, or in addition to, syntactic symbol manipulation.
Since neuroscience does not support the digital information-processing view, where is the certainty of the community coming from? Am I missing something fundamental here?
I think people get too hung up on computers as being mechanistic. People usually think of symbol manipulation in terms of easy-to-imagine language-like models, but then try to generalize their intuitions to computation in general, which can be unimaginably complicated. It’s perfectly possible to simulate a human on an ordinary classical computer (to arbitrary precision). Would that simulation of a human be conscious, if it matched the behavior of a flesh-and-blood human almost perfectly and could communicate with you via a text channel, outputting things like “well, I sure feel conscious”?
The reason LWers are so confident that this simulation is conscious is because we think of concepts like “consciousness,” to the extent that they exist, as having something to do with the cause of us talking and thinking about consciousness. It’s just like how the concept of “apples” exists because apples exist, and when I correctly think I see an apple, it’s because there’s an apple. Talking about “consciousness” is presumed to be a consequence of our experience with consciousness. And the things we have experience with that we can label “consciousness” are introspective phenomena, physically realized as patterns of neurons firing, that have exact analogies in the simulation. Demanding that one has to be made of flesh to be conscious is not merely chauvinism, it’s a misunderstanding of what we have access to when we encounter consciousness.
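To illustrate the “to arbitrary precision” point, here is a toy sketch of my own; the leaky-membrane equation is only a stand-in for whatever continuous, analog dynamics one worries about, and the claim is only that a digital machine can approximate it as finely as desired.

```python
# A toy sketch of my own (the leaky-membrane model here is just an example,
# not a claim about real neurons): a continuous, analog quantity governed by
# dV/dt = -V/tau, approximated on a digital machine. Shrinking the step size
# shrinks the error, which is all "to arbitrary precision" means.
import math

def simulate(v0, tau, t_end, dt):
    """Forward Euler integration of dV/dt = -V / tau."""
    v = v0
    for _ in range(round(t_end / dt)):
        v += dt * (-v / tau)
    return v

exact = math.exp(-1.0)                 # closed-form V(1.0) for V(0)=1, tau=1
for dt in (0.1, 0.01, 0.001):
    approx = simulate(1.0, 1.0, 1.0, dt)
    print(dt, abs(approx - exact))     # error shrinks as dt shrinks
```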
I think people get too hung up on computers as being mechanistic. People usually think of symbol manipulation in terms of easy-to-imagine language-like models, but then try to generalize their intuitions to computation in general, which can be unimaginably complicated.
The working of a computer is not unimaginably complicated. Its basis is quite straightforward, really. As I said in my answer to MrMind below: “As Searle points out the meaning of zeros, ones, logic gates etc. is observer relative in the same way money (not the paper, the meaning) is observer relative and thus ontologically subjective. The electrons are indeed ontologically objective but that is not true regarding the syntactic structures of which they are elements in a computer. Watch this video of Searle explaining this (from 9:12).”
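To illustrate what “observer relative” means here, consider a small sketch of my own: one and the same bit pattern is read as text, as two different integers, or as a floating-point number, depending entirely on the interpretation scheme the observer supplies.

```python
# A small sketch of my own to illustrate "observer relative": the very same
# physical bit pattern means different things only relative to an
# interpretation scheme that an observer (or the designing engineer) supplies.
import struct

raw = bytes([0x4D, 0x49, 0x4C, 0x4B])       # one fixed pattern of 32 bits

as_text    = raw.decode("ascii")            # read as characters: 'MILK'
as_uint_be = int.from_bytes(raw, "big")     # read as a big-endian integer
as_uint_le = int.from_bytes(raw, "little")  # read as a different, little-endian integer
as_float   = struct.unpack(">f", raw)[0]    # read as an IEEE-754 float

print(as_text, as_uint_be, as_uint_le, as_float)
# The electrons and voltage levels are identical in every case; which symbol
# they "are" is fixed by a convention someone chose, not by the physics.
```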
Talking about “consciousness” is presumed to be a consequence of our experience with consciousness. And the things we have experience with that we can label “consciousness” are introspective phenomena, physically realized as patterns of neurons firing, that have exact analogies in the simulation.
In our debate I am holding the position that there cannot be a simulation of consciousness using the current architectural basis of a computer. Searle has provided a logical argument. In my quotes above I show that the state of neuroscience does not point towards a purely digital brain. What is your evidence?
It is that consciousness is not based on syntactic symbol manipulation in the way a computer is, and for that reason it is not going to be simulated by a computer with our current architecture (binary, logic gates, etc.), as the AI community thought (and still thinks).
Well, that would run counter to the Church-Turing thesis. Either the brain is capable of doing things that would require infinite resources for a computer to perform, or the power of the brain and the computer is the same. Indeed, not even computers are based on symbolic manipulation: at the deepest level, it’s all electrons flowing back and forth.
Well, that would run counter to the Church-Turing thesis. Either the brain is capable of doing things that would require infinite resources for a computer to perform, or the power of the brain and the computer is the same.
Am I right to think that this statement is based on the assumption that the brain (and all computation machines) have been proven to have Turing machine equivalents based on the Church-Turing thesis? If that is the case I would refer you to this article’s section Misunderstandings of the Thesis. If I have understood wrong I would be grateful if you could offer some more details on your point.
Indeed, not even computers are based on symbolic manipulation: at the deepest level, it’s all electrons flowing back and forth.
We can demonstrate the erroneous logic of this statement by saying something like: “Indeed, not even language is based on symbolic manipulation: at the deepest level, it’s all sound waves pushing air particles back and forth”.
As Searle points out the meaning of zeros, ones, logic gates etc. is observer relative in the same way money (not the paper, the meaning) is observer relative and thus ontologically subjective. The electrons are indeed ontologically objective but that is not true regarding the syntactic structures of which they are elements in a computer. Watch this video of Searle explaining this (from 9:12).
Am I right to think that this statement is based on the assumption that the brain (and all computation machines) have been proven to have Turing machine equivalents based on the Church-Turing thesis?
No, otherwise we would have the certainty that the brain is Turing-equivalent and I wouldn’t have prefaced with “Either the brain is capable of doing things that would require infinite resources for a computer to perform”.
We do not have proof that everything not calculable by a Turing machine requires infinite resources, otherwise Church-Turing would be a theorem and not a thesis, but we have strong hints: every hypercomputation model is based on accessing some infinite resource (whether it’s infinite time or infinite energy or infinite precision). Plus recently we had this theorem: any function on the naturals is computable by some machine in some non-standard time. So either the brain can compute things that a computer would take infinite resources to do, or the brain is at most as powerful as a Turing machine.
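To keep the terminology concrete, here is a minimal sketch of my own of the kind of device the Church-Turing thesis is about: a finite rule table and a finite set of states operating on a tape, of which any halting computation uses only a finite part.

```python
# A minimal Turing machine simulator (my own sketch), just to make concrete
# what "at most as powerful as a Turing machine" refers to: a finite rule
# table, a finite set of states, and a tape of which any halting run uses
# only a finite part.
def run_tm(rules, tape, state="start", head=0, max_steps=10_000):
    """rules maps (state, symbol) to (write_symbol, move, new_state); move is -1, 0 or +1."""
    cells = dict(enumerate(tape))               # sparse tape, blank cells read as '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Example rule table: flip every bit of the input, halting at the first blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}
print(run_tm(flip, "1011"))                     # prints 0100_ (the flipped input plus the blank)
```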
As per the electron thing, there’s a level where there is symbolic manipulation and a level where there isn’t. I don’t understand why it’s symbolic manipulation for electronics but not for neurons. At the right abstraction level, neurons too manipulate symbols.
As per the electron thing, there’s a level where there is symbolic manipulation and a level where there isn’t. I don’t understand why it’s symbolic manipulation for electronics but not for neurons. At the right abstraction level, neurons too manipulate symbols.
It is not the symbols that are the problem. It is that the semantic content of the symbols used in a digital computer is observer relative. The circuits depend on someone understanding their meaning. That meaning is provided by the human engineer who, since he possesses the semantic content, understands the method of implementation and the results of the calculation at each level of abstraction. This is clearly not the case in the human brain, in which the symbols arise in a manner that allows for intrinsic semantic content.
Yeah, whenever you see a modifier like “just” or “merely” in a philosophical argument, that word is probably doing a lot of undeserved work.