I think you do not fully understand the idea if you regard it as an open problem. It hints and nudges and points at an open problem (under a single interpretation of quantum physics, one of declining popularity), which is where dissolution comes in, but in itself it is not an open problem, nor is resolution of that open problem necessary to its dissolution. At best it suggests that that interpretation of quantum physics is absurd, in the “This conflicts with every intuition I have about the universe” sense.
Outside the domain of that interpretation, it can still be dissolved for understanding, although it no longer says much of meaning about the intuitiveness of physics.
Or, in other words: If you think that Schroedinger’s Cat is an open problem in physics, you’ve made the basic mistake I alluded to before, in thinking that the problem as posed represents an understanding. The understanding comes from dissolving it; without that step, it’s just a badly misrepresented meme.
The Cat has as many solutions as there are interpretations of QM, and most are counterintuitive. The Cat is an open problem, inasmuch as we do not know which solution is correct.
Feel free to dissolve it then without referring to interpretations. As far as I can tell, you will hit the Born rule at some point, which is the open problem I was alluding to.
Born’s Rule is a -bit- beyond the scope of Schroedinger’s Cat. That’s a bit like saying the Chinese Room Experiment isn’t dissolved because we haven’t solved the Hard Problem of Consciousness yet. [ETA: Only more so, because the Hard Problem of Consciousness is what the Chinese Room Experiment is pointing its fingers and waving at.]
But it’s actually true that solving the Hard Problem of Consciousness is necessary to fully explode the Chinese Room! Without having solved it, it’s still possible that the Room isn’t understanding anything, even if you don’t regard this as a knock against the possibility of GAI. I think the Room does say something useful about Turing tests: that behavior suggests implementation, but doesn’t necessarily constrain it. The Giant Lookup Table is another, similarly impractical, argument that makes the same point.
Understanding is either only inferred from behavior, or actually a process that needs to be duplicated for a system to understand. If the latter, then the Room may speak Chinese without understanding it. If the former, then it makes no sense to say that a system can speak Chinese without understanding it.
Exploding the Chinese Room leads to understanding that the Hard Problem of Consciousness is in fact a problem; its purpose was to demonstrate that computers can’t implement consciousness, which it doesn’t actually do.
Hence my view that it’s a useful idea for somebody considering AI to dissolve, but not necessarily a problem in and of itself.