"Born’s Rule is a *bit* beyond the scope of Schroedinger’s Cat." That’s a bit like saying the Chinese Room Experiment isn’t dissolved because we haven’t solved the Hard Problem of Consciousness yet. [ETA: Only more so, because the Hard Problem of Consciousness is exactly what the Chinese Room Experiment is pointing its fingers and waving at.]
But it’s actually true that solving the Hard Problem of Consciousness is necessary to fully explode the Chinese Room! Until it’s solved, it remains possible that the Room doesn’t understand anything, even if you don’t regard that as a knock against the possibility of GAI. I think the Room does say something useful about Turing tests: behavior suggests implementation, but doesn’t necessarily constrain it. The Giant Lookup Table is another, similarly impractical, argument that makes the same point.
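To make the "behavior doesn’t constrain implementation" point concrete, here is a minimal sketch of my own (the function names and the toy domain are illustrative assumptions, not anything from the Room or Lookup Table arguments): two functions that are behaviorally identical over a small domain, one of which computes its answers and one of which merely retrieves them from a precomputed table. A purely behavioral test cannot tell them apart.

```python
# Illustrative sketch only: two "systems" with identical input/output behavior
# but very different internals. A behavioral test alone cannot distinguish them,
# which is the point the Room and the Giant Lookup Table both gesture at.

def adder_computed(a: int, b: int) -> int:
    """Produces each answer by actually doing the computation."""
    return a + b

# Precompute every answer over a small domain: a giant lookup table in miniature.
DOMAIN = range(10)
LOOKUP = {(a, b): a + b for a in DOMAIN for b in DOMAIN}

def adder_table(a: int, b: int) -> int:
    """Produces the same answers by pure retrieval, no arithmetic at query time."""
    return LOOKUP[(a, b)]

# Behaviorally indistinguishable over the domain, despite different internals.
assert all(adder_computed(a, b) == adder_table(a, b)
           for a in DOMAIN for b in DOMAIN)
```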
Either understanding is something we can only infer from behavior, or it is an actual process that must be duplicated for a system to understand. If the latter, the Room may speak Chinese without understanding it. If the former, it makes no sense to say that a system can speak Chinese without understanding it.
Exploding the Chinese Room leads to the recognition that the Hard Problem of Consciousness is in fact a problem; the Room’s purpose was to demonstrate that computers can’t implement consciousness, which it doesn’t actually do.
Hence my view that the Chinese Room is a useful idea for somebody considering AI to dissolve, but not necessarily a problem in and of itself.