AI cannot be just “programmed” like, for example, a chess game. When we talk about computers, programming, languages, hardware, compilers, source code, etc., we’re essentially implying a von Neumann architecture. This architecture represents a certain principle of information processing, which has its fundamental limitations. The ghost that makes an intelligence cannot be programmed inside a von Neumann machine. It requires a different type of information processing, similar to that implemented in humans. Real progress in building AI will be achieved only after we understand the fundamental principle that lies behind information processing in our brains. And it’s not only us: even the primitive nervous systems of simple creatures use this principle and benefit from it. A simple kitchen cockroach is infinitely smarter than the most sophisticated robot we have built so far.
Yes it can. It’s just harder. An AI can be “just programmed” in Conway’s Life if you really want to.
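To make the Life point concrete: Conway’s Life is Turing-complete despite its entire rule set fitting in a few lines, so anything programmable at all is programmable in it (however impractically). A minimal sketch of the full rules in Python, using the standard glider pattern to check them:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Life on a set of live (x, y) cells."""
    # Count how many live neighbours each cell (live or dead) has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider travels one cell diagonally every four generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

That a system this simple can host arbitrary computation is exactly why “the hardware is the wrong kind” is a weak objection by itself.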
Just to be clear, there isn’t strong direct evidence of that, is there? My understanding is that there just isn’t evidence of it being impossible, and a whole lot of evidence that most things can be simulated computably.
What does ‘direct’ mean? Does it mean “has already been done”? If so then no. The evidence is more of the kind “either it is possible or everything we know about reductionism, physics and human biology is bullshit”.
That seems too strong. If intelligence really did turn out to rely on quantum computing or some other non-Turing computation, that would mean you couldn’t program intelligence on a computer in a remotely efficient way. Though presumably you could program it on a quantum computer (or whatever the special feature of physics is that lets you build this fancy computer). Of course this doesn’t seem too likely given what we know about neurons.
Yes, for a suitable instantiation of “not too likely” this is a rough translation of what I meant by “either it is possible or everything we know about reductionism, physics and human biology is bullshit”.
We agree then.
That’s true if and only if some aspect of biological neural architecture (as opposed to the many artificial neural network architectures out there) turns out to be Turing irreducible; all computing systems meeting some basic requirements are able to simulate each other in a pretty strong and general way. As far as I’m aware, we don’t know about any physical processes which can’t be simulated on a von Neumann (or any other Turing-complete) architecture, so claiming natural neurology as part of that category seems to be jumping the gun just a little bit.
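The mutual-simulation point can be illustrated directly: a handful of lines of ordinary von Neumann-style code suffices to interpret a completely different model of computation, a one-tape Turing machine. This is an editor’s sketch, not anyone’s specific construction; the `flip` machine below is a made-up example that inverts every bit on its tape.

```python
def run_tm(rules, tape, state="start", accept="halt", max_steps=1000):
    """Simulate a Turing machine given as {(state, symbol): (write, move, next)}."""
    cells = dict(enumerate(tape))  # sparse tape; "_" is the blank symbol
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# Hypothetical example machine: walk right flipping bits, halt at the blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "L", "halt"),
}
print(run_tm(flip, "1011"))  # -> 0100
```

Anything that can state its transition rules this way can be hosted on a von Neumann machine; the open empirical question is only whether biological neurons hide some physical process that can’t be.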