Dissolving the Chinese Room Experiment teaches you a heck of a lot about what you’re intending to do.
You’ve just demonstrated that the experiment is flawed—but you haven’t actually demonstrated -why- it is flawed. Don’t just prove the idea wrong, dissolve it, figure out exactly where the mistake is made.
You’ll see that it, in fact, does have considerable value to those studying AI.
Schroedinger’s Cat has a lot of parallels to the Chinese Room Experiment; they both represent major hurdles to understanding, to truly dissolving the problem you intend to understand. Unfortunately a lot of people stop there, and think that the problem, as posed, represents some kind of understanding in itself.
The Chinese room argument is wrong because it fails to account for emergence. A system can possess properties that its components don’t; for example, my brain is made of neurons that don’t understand English, but that doesn’t mean my brain as a whole doesn’t. The same argument can be applied to the Chinese room.
The broader failure is assuming that things that apply to one level of abstraction apply to another.
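To make the emergence point concrete, here is a minimal sketch (my own illustration, not from the thread, using hand-picked weights) of a property that belongs to a system but to none of its components: no single threshold unit below computes XOR, yet the three wired together do.

```python
# Each unit is a bare threshold gate: it fires iff the weighted sum of its
# inputs exceeds its threshold. A single unit of this kind cannot compute XOR.
def unit(weights, threshold, inputs):
    return int(sum(w * x for w, x in zip(weights, inputs)) > threshold)

def xor(a, b):
    h_or = unit([1, 1], 0, [a, b])    # fires if at least one input is 1
    h_and = unit([1, 1], 1, [a, b])   # fires only if both inputs are 1
    # Output unit: fires when OR fired but AND did not, i.e. exactly one input was 1.
    return unit([1, -1], 0, [h_or, h_and])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # prints the XOR truth table: 0, 1, 1, 0
```

“Computes XOR” is a property of the arrangement, not of any single unit, which is all the emergence claim needs; the question the reply below presses on is whether “understands Chinese” is a property of that kind.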
“A system can possess properties that its components don’t”

But a computational system can’t be mysteriously emergent. Your response is equivalent to saying that semantics is constructed, reductionistically, out of syntax. How?
...except, unlike the Chinese Room one, it is not a dissolved problem; it’s a real open problem in physics.
I think you do not fully understand the idea if you regard it as an open problem. It hints and nudges and points at an open problem (under a single interpretation of quantum physics, one of declining popularity), which is where dissolution comes in, but in itself it is not an open problem, nor is resolution of that open problem necessary to its dissolution. At best it suggests that that interpretation of quantum physics is absurd, in the “This conflicts with every intuition I have about the universe” sense.
Outside the domain of that interpretation, it can still be dissolved for understanding, although it no longer says much of substance about the intuitiveness of physics.
Or, in other words: If you think that Schroedinger’s Cat is an open problem in physics, you’ve made the basic mistake I alluded to before, in thinking that the problem as posed represents an understanding. The understanding comes from dissolving it; without that step, it’s just a badly misrepresented meme.
The Cat has as many solutions as there are interpretations of QM, and most are counterintuitive. The Cat is an open problem, inasmuch as we do not know which solution is correct.
Feel free to dissolve it then without referring to interpretations. As far as I can tell, you will hit the Born rule at some point, which is the open problem I was alluding to.
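For reference, since the Born rule keeps coming up: the idealized pre-measurement state of the cat and the rule itself, in standard notation (textbook content, not anything specific to this thread).

```latex
% Idealized pre-measurement state of the cat: an equal-amplitude superposition.
\[
  |\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|\text{alive}\rangle + |\text{dead}\rangle\bigr)
\]
% The Born rule assigns outcome probabilities from the squared amplitudes:
\[
  P(\text{alive}) \;=\; |\langle \text{alive}\,|\,\psi\rangle|^{2} \;=\; \tfrac{1}{2},
  \qquad
  P(\text{dead}) \;=\; |\langle \text{dead}\,|\,\psi\rangle|^{2} \;=\; \tfrac{1}{2}
\]
% The open problem alluded to above is why measurement yields a single definite
% outcome with exactly these probabilities; every interpretation owes an answer.
```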
Born’s Rule is a -bit- beyond the scope of Schroedinger’s Cat. That’s a bit like saying the Chinese Room Experiment isn’t dissolved because we haven’t solved the Hard Problem of Consciousness yet. [ETA: Only more so, because the Hard Problem of Consciousness is what the Chinese Room Experiment is pointing its fingers and waving at.]
But it’s actually true that solving the Hard Problem of Consciousness is necessary to fully explode the Chinese Room! Without having solved it, it’s still possible that the Room isn’t understanding anything, even if you don’t regard this as a knock against the possibility of GAI. I think the Room does say something useful about Turing tests: that behavior suggests implementation, but doesn’t necessarily constrain it. The Giant Lookup Table is another, similarly impractical, argument that makes the same point.
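Here is a toy sketch of the “behavior suggests implementation, but doesn’t necessarily constrain it” point (my own example, not from the thread): two implementations with identical observable behavior on their domain, one that computes and one that only looks answers up. Probing outputs alone cannot tell them apart.

```python
# Implementation 1: actually performs the computation.
def add_by_computing(a, b):
    return a + b

# Implementation 2: a (deliberately tiny) "giant lookup table" over the same domain;
# every question is paired with a precomputed answer, nothing is computed at runtime.
LOOKUP = {(a, b): a + b for a in range(10) for b in range(10)}

def add_by_lookup(a, b):
    return LOOKUP[(a, b)]

# Behavioral probing cannot distinguish the two on this domain.
assert all(add_by_computing(a, b) == add_by_lookup(a, b)
           for a in range(10) for b in range(10))
```

Scaled up (impractically) to whole conversations, this is the Giant Lookup Table argument: identical behavior, very different internals, so passing a behavioral test underdetermines what, if anything, is going on inside that deserves the name “understanding”.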
Understanding is either only inferred from behavior, or actually a process that needs to be duplicated for a system to understand. If the latter, then the Room may speak Chinese without understanding it. If the former, then it makes no sense to say that a system can speak Chinese without understanding it.
Exploding the Chinese Room leads to understanding that the Hard Problem of Consciousness is in fact a problem; the Room’s purpose was to demonstrate that computers can’t implement consciousness, which it doesn’t actually do.
Hence my view that it’s a useful idea for somebody considering AI to dissolve, but not necessarily a problem in and of itself.