Hmm.. I do not think that is what I mean, no. I lean towards agreeing with Searle’s conclusion but I am examining my thought process for errors.
Searle’s argument is not that consciousness is not created in the brain. It is that consciousness is not based on syntactic symbol manipulation in the way a computer is, and for that reason it will not be simulated by a computer with our current architecture (binary, logic gates, etc.), as the AI community thought (and still thinks). He does not deny that we might discover the brain’s actual architecture in the future. All he does is demonstrate, through analogy, how syntactic operations work.
In the Chinese gym rebuttal the issue is not really addressed. Searle does not deny that the brain is a system with subcomponents through whose structure consciousness emerges; that is a different discussion. He is arguing that the system must be doing something different from, or in addition to, syntactic symbol manipulation.
Since neuroscience does not support the digital information-processing view, where is the community’s certainty coming from? Am I missing something fundamental here?
I think people get too hung up on computers as being mechanistic. People usually think of symbol manipulation in terms of easy-to-imagine language-like models, but then try to generalize their intuitions to computation in general, which can be unimaginably complicated. It’s perfectly possible to simulate a human on an ordinary classical computer (to arbitrary precision). Would that simulation of a human be conscious, if it matched the behavior of a flesh-and-blood human almost perfectly, and could communicate with you via a text channel, saying things like “well, I sure feel conscious”?
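As a toy illustration of “simulate to arbitrary precision” (my sketch, not the commenter’s): a leaky integrate-and-fire neuron, a standard simplified neuron model, can be integrated numerically on any classical computer, and shrinking the time step `dt` makes the simulation arbitrarily finer. The parameter values below are illustrative, not biologically calibrated.

```python
# Toy leaky integrate-and-fire neuron, integrated with Euler steps.
# All parameter values are illustrative placeholders.

def simulate_lif(i_input=2.0, v_rest=0.0, v_thresh=1.0, tau=10.0,
                 dt=0.1, t_max=100.0):
    """Return the spike times of a leaky integrate-and-fire neuron."""
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        # Membrane dynamics: dv/dt = (-(v - v_rest) + i_input) / tau
        v += dt * (-(v - v_rest) + i_input) / tau
        if v >= v_thresh:           # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_rest              # reset after spiking
    return spikes

spikes = simulate_lif()
print(f"{len(spikes)} spikes; first at t = {spikes[0]:.1f}")
```

Nothing here bears on whether the simulation would be conscious; it only shows that the dynamics themselves pose no obstacle to digital simulation.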
The reason LWers are so confident that this simulation is conscious is because we think of concepts like “consciousness,” to the extent that they exist, as having something to do with the cause of us talking and thinking about consciousness. It’s just like how the concept of “apples” exists because apples exist, and when I correctly think I see an apple, it’s because there’s an apple. Talking about “consciousness” is presumed to be a consequence of our experience with consciousness. And the things we have experience with that we can label “consciousness” are introspective phenomena, physically realized as patterns of neurons firing, that have exact analogies in the simulation. Demanding that one has to be made of flesh to be conscious is not merely chauvinism, it’s a misunderstanding of what we have access to when we encounter consciousness.
I think people get too hung up on computers as being mechanistic. People usually think of symbol manipulation in terms of easy-to-imagine language-like models, but then try to generalize their intuitions to computation in general, which can be unimaginably complicated.
The working of a computer is not unimaginably complicated. Its basis is quite straightforward, really. As I said in my answer to MrMind below: “As Searle points out the meaning of zeros, ones, logic gates etc. is observer relative in the same way money (not the paper, the meaning) is observer relative and thus ontologically subjective. The electrons are indeed ontologically objective but that is not true regarding the syntactic structures of which they are elements in a computer. Watch this video of Searle explaining this (from 9:12).”
Talking about “consciousness” is presumed to be a consequence of our experience with consciousness. And the things we have experience with that we can label “consciousness” are introspective phenomena, physically realized as patterns of neurons firing, that have exact analogies in the simulation.
In our debate I am holding the position that there cannot be a simulation of consciousness using the current architectural basis of a computer. Searle has provided a logical argument. In my quotes above I show that the state of neuroscience does not point towards a purely digital brain. What is your evidence?
It is that it is not based on syntactic symbol manipulation in the way a computer is and for that reason it is not going to be simulated by a computer with our current architecture (binary, logic gates etc.) as the AI community thought (and thinks).
Well, that would run counter to the Church-Turing thesis. Either the brain is capable of doing things that would require infinite resources for a computer to perform, or the power of the brain and the computer is the same. Indeed, not even computers are based on symbolic manipulation: at the deepest level, it’s all electrons flowing back and forth.
Well, that would run counter to the Church-Turing thesis. Either the brain is capable of doing things that would require infinite resources for a computer to perform, or the power of the brain and the computer is the same.
Am I right to think that this statement is based on the assumption that the brain (and all computation machines) have been proven to have Turing machine equivalents based on the Church-Turing thesis? If that is the case I would refer you to this article’s section Misunderstandings of the Thesis. If I have understood wrong I would be grateful if you could offer some more details on your point.
Indeed, not even computers are based on symbolic manipulation: at the deepest level, it’s all electrons flowing back and forth.
We can demonstrate the erroneous logic of this statement by saying something like: “Indeed, not even language is based on symbolic manipulation: at the deepest level, it’s all sound waves pushing air particles back and forth”.
As Searle points out the meaning of zeros, ones, logic gates etc. is observer relative in the same way money (not the paper, the meaning) is observer relative and thus ontologically subjective. The electrons are indeed ontologically objective but that is not true regarding the syntactic structures of which they are elements in a computer. Watch this video of Searle explaining this (from 9:12).
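A small sketch of the observer-relative point (mine, not Searle’s): the very same bit pattern in memory has no intrinsic meaning; what it “is” depends entirely on the interpretive convention an observer, or a program written by one, brings to it.

```python
import struct

# One fixed 4-byte pattern sitting in memory.
raw = b'\x42\x28\x00\x00'

# The same bits, read under three different observer-imposed conventions:
as_int = struct.unpack('>i', raw)[0]    # big-endian signed integer
as_float = struct.unpack('>f', raw)[0]  # IEEE 754 single-precision float
as_bits = ''.join(f'{byte:08b}' for byte in raw)

print(as_int)    # 1109917696
print(as_float)  # 42.0
print(as_bits)
```

The physics (charges in memory cells) is the same in every case; “integer”, “float”, and “bit string” are interpretations supplied from outside.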
Am I right to think that this statement is based on the assumption that the brain (and all computation machines) have been proven to have Turing machine equivalents based on the Church-Turing thesis?
No, otherwise we would have the certainty that the brain is Turing-equivalent and I wouldn’t have prefaced with “Either the brain is capable of doing things that would require infinite resources for a computer to perform”.
We do not have proof that everything not calculable by a Turing machine requires infinite resources, otherwise Church-Turing would be a theorem and not a thesis, but we have strong hints: every hypercomputation model is based on accessing some infinite resource (whether it’s infinite time or infinite energy or infinite precision). Plus, we recently got this theorem: any function on the naturals is computable by some machine in some non-standard model of time. So either the brain can compute things that a computer would take infinite resources to do, or the brain is at most as powerful as a Turing machine.
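To make the Turing-machine side of this dichotomy concrete, here is a minimal simulator (my sketch, not from the thread): any process describable as a finite transition table over a tape can be run step by step on an ordinary classical computer.

```python
def run_tm(transitions, tape, state='start', blank='_', max_steps=10_000):
    """Simulate a single-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right). Halts on a missing transition.
    """
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no applicable rule: halt
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    return state, ''.join(cells[i] for i in sorted(cells))

# Example machine: flip every bit, halting at the first blank cell.
flip = {
    ('start', '0'): ('start', '1', +1),
    ('start', '1'): ('start', '0', +1),
}
state, result = run_tm(flip, '10110')
print(result)  # 01001
```

The point of the sketch is only that the Turing-machine abstraction is trivially hosted on digital hardware; whether the brain exceeds that abstraction is exactly what is in dispute.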
As per the electron thing, there’s a level where there is symbolic manipulation and a level where there isn’t. I don’t understand why it’s symbolic manipulation for electronics but not for neurons. At the right abstraction level, neurons too manipulate symbols.
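One concrete reading of “at the right abstraction level, neurons too manipulate symbols”: classic McCulloch-Pitts threshold units, an idealization of neurons, implement Boolean gates, so a network of them performs symbol manipulation when described at the gate level. A minimal sketch (mine, for illustration):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fires (1) iff the weighted sum reaches threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# The same idealized neuron, read as different logic gates:
def AND(a, b): return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):    return mp_neuron([a], [-1], threshold=0)

# NAND from two units; NAND alone is universal for Boolean logic.
def NAND(a, b): return NOT(AND(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', NAND(a, b))
```

Whether this abstraction captures what real neurons are doing is, of course, the contested question; the sketch only shows the abstraction is available.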
As per the electron thing, there’s a level where there is symbolic manipulation and a level where there isn’t. I don’t understand why it’s symbolic manipulation for electronics but not for neurons. At the right abstraction level, neurons too manipulate symbols.
It is not the symbols that are the problem. It is that the semantic content of the symbols used in a digital computer is observer relative. The circuits depend on someone understanding their meaning: the meaning is provided by the human engineer who, since he possesses the semantic content, understands the method of implementation and the calculation results at each level of abstraction. This is clearly not the case in the human brain, in which the symbols arise in a manner that allows for intrinsic semantic content.