Well, that would run counter to the Church-Turing thesis. Either the brain is capable of doing things that would require infinite resources for a computer to perform, or the brain and the computer have the same computational power.
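To make the dichotomy explicit (the notation below is mine, added for clarity; $\mathcal{B}$ and $\mathcal{T}$ are not the poster's symbols):

$$\mathcal{B} \subseteq \mathcal{T} \quad\text{or}\quad \exists\, f \in \mathcal{B} \setminus \mathcal{T},$$

where $\mathcal{B}$ is the set of functions the brain can evaluate and $\mathcal{T}$ is the set of Turing-computable functions. Under the claim above, any $f$ falling in the second case would require infinite resources for a computer to reproduce.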
Am I right to think that this statement is based on the assumption that the brain (and all computing machines) have been proven to have Turing-machine equivalents on the basis of the Church-Turing thesis? If that is the case, I would refer you to the section Misunderstandings of the Thesis in this article. If I have misunderstood, I would be grateful if you could offer some more details on your point.
Indeed, not even computers are based on symbolic manipulation: at the deepest level, it’s all electrons flowing back and forth.
We can demonstrate the erroneous logic of this statement by saying something like: "Indeed, not even language is based on symbolic manipulation: at the deepest level, it's all sound waves pushing air particles back and forth".
As Searle points out, the meaning of zeros, ones, logic gates, etc. is observer-relative in the same way that money (not the paper, but its meaning) is observer-relative and thus ontologically subjective. The electrons are indeed ontologically objective, but that is not true of the syntactic structures of which they are elements in a computer. Watch this video of Searle explaining this (from 9:12).
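To make the observer-relativity point concrete, here is a toy sketch (my addition, not from the discussion): the very same bytes in memory "mean" different things depending on which interpretation an observer imposes.

```python
import struct

# The same four bytes sitting in memory. Physically, they are just
# charge states; nothing inside them fixes what they mean.
raw = b'\x00\x00\x80\x3f'

# Three observers impose three different readings on one bit pattern:
as_float = struct.unpack('<f', raw)[0]  # 1.0 (IEEE-754 single precision)
as_int = struct.unpack('<i', raw)[0]    # 1065353216 (little-endian int32)
as_bytes = list(raw)                    # [0, 0, 128, 63] (raw byte values)

print(as_float, as_int, as_bytes)
# No measurement of the electrons decides which reading is "the" meaning;
# the semantics is assigned by whoever reads the bytes, which is Searle's point.
```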
Am I right to think that this statement is based on the assumption that the brain (and all computing machines) have been proven to have Turing-machine equivalents on the basis of the Church-Turing thesis?
No, otherwise we would be certain that the brain is Turing-equivalent, and I wouldn't have prefaced my claim with "Either the brain is capable of doing things that would require infinite resources for a computer to perform".
We do not have proof that everything not calculable by a Turing machine requires infinite resources; otherwise Church-Turing would be a theorem and not a thesis. But we have strong hints: every hypercomputation model is based on accessing some infinite resource (whether it's infinite time, infinite energy, or infinite precision). Plus, we recently got this theorem: any function on the naturals is computable by some machine in some non-standard time. So either the brain can compute things that a computer would take infinite resources to do, or the brain is at most as powerful as a Turing machine.
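For readers who want the hypercomputation point anchored, here is the textbook diagonalization sketch (my addition, and `halts` is a hypothetical oracle, not a real function): it shows why a total halting oracle cannot be an ordinary program, which is exactly the kind of thing every hypercomputation model buys with some infinite resource.

```python
def halts(program, arg):
    """Hypothetical total halting oracle; assumed, not implementable.

    Hypercomputation models obtain something like this only by spending
    an infinite resource (infinite time, energy, or precision).
    """
    raise NotImplementedError

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about
    # `program` run on itself.
    if halts(program, program):
        while True:      # oracle says "halts", so loop forever
            pass
    return "halted"      # oracle says "loops", so halt immediately

# diagonal(diagonal) contradicts the oracle's answer either way, so no
# Turing machine (hence no ordinary computer) computes `halts` totally.
```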
As for the electron thing: there's a level where there is symbolic manipulation and a level where there isn't. I don't understand why it's symbolic manipulation for electronics but not for neurons. At the right abstraction level, neurons too manipulate symbols.
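A minimal sketch of that abstraction-level claim (my example; the voltage threshold is an illustrative assumption, not a real spec): the "symbols" of digital logic are a coarse-graining of continuous physical quantities, and nothing in the sketch cares whether the substrate is silicon or neurons.

```python
LOGIC_THRESHOLD = 1.5  # volts; an illustrative cutoff, not a real spec

def to_symbol(voltage):
    """Coarse-grain a continuous physical quantity into a discrete symbol."""
    return 1 if voltage >= LOGIC_THRESHOLD else 0

def nand(a, b):
    """Symbol manipulation, visible only at the abstract level."""
    return 1 - (a & b)

# Physical level: two analog signals (electrons moving around).
v1, v2 = 3.1, 0.2

# Symbolic level: the same events, redescribed as bits being NAND-ed.
print(nand(to_symbol(v1), to_symbol(v2)))  # 1
```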
It is not the symbols that are the problem. It is that the semantic content of the symbols used in a digital computer is observer-relative. The circuits depend on someone understanding their meaning: the meaning provided by the human engineer who, since he possesses the semantic content, understands the method of implementation and the calculation results at each level of abstraction. This is clearly not the case in the human brain, in which the symbols arise in a manner that allows for intrinsic semantic content.