The only good taboo of "understanding" I've ever read came from an LW quotes thread, quoting Feynman quoting Dirac:
I understand what an equation means if I have a way of figuring out the characteristics of its solution without actually solving it.
By this criterion, the Chinese Room might not actually understand Chinese, whereas a human Chinese speaker does. That is: if you hear all but the last word of a sentence, can you give a much tighter probability distribution over the final word than maxent over common vocabulary that fits grammatically?
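To make that test concrete: a minimal sketch, assuming a toy English corpus and a hypothetical candidate vocabulary, comparing a bigram model's entropy over the final word against the maxent (uniform) baseline:

```python
import math
from collections import Counter, defaultdict

# Toy corpus standing in for real language data (assumption: in practice
# this would be a large corpus in the language being tested).
corpus = [
    "the cat sat on the mat",
    "the cat sat on the sofa",
    "the dog sat on the mat",
]

# Count bigrams.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, word in zip(words, words[1:]):
        bigrams[prev][word] += 1

def entropy(dist):
    """Shannon entropy in bits of a {word: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# You hear "the cat sat on the ..."; the context's last word is "the".
context_word = "the"
candidates = ["mat", "sofa", "dog", "cat"]  # grammatical candidates (assumed)

# Maxent baseline: uniform over everything that fits grammatically.
maxent = {w: 1 / len(candidates) for w in candidates}

# Model distribution: bigram counts conditioned on the context,
# restricted to the candidate set and renormalized.
counts = bigrams[context_word]
total = sum(counts[w] for w in candidates) or 1
model = {w: counts[w] / total for w in candidates}

print(f"maxent entropy: {entropy(maxent):.2f} bits")
print(f"model entropy:  {entropy(model):.2f} bits")
```

On this criterion, understanding shows up as model entropy falling below the maxent baseline: a speaker who genuinely understands concentrates probability on the few endings that make sense in context.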
I would say I understand a system to the extent that I’m capable of predicting its behavior given novel inputs. Which seems to be getting at something similar to Dirac’s version.
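One way to cash that out, as a minimal sketch where the system, the model of it, and the novel inputs are all hypothetical placeholders:

```python
def understands(model, system, novel_inputs, tolerance=0.0):
    """True iff the model predicts the system on every novel input."""
    return all(abs(model(x) - system(x)) <= tolerance for x in novel_inputs)

# Hypothetical example: the system is some hidden dynamics; the model
# is our theory of it; the inputs were never used to build the model.
def system(x):
    return 9.81 * x   # what the world actually does

def model(x):
    return 9.8 * x    # what our theory predicts

print(understands(model, system, [0.5, 1.0, 2.0], tolerance=0.1))  # True
```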
if you hear all but the last word of a sentence, can you give a much tighter probability distribution over the final word than maxent over common vocabulary that fits grammatically?
IIRC, the CR as Searle describes it would include rules for responding to the question “What are likely last words that end this sentence?” in the same way a Chinese speaker would. So presumably it is capable of doing that, if asked.
And, definitionally, of doing so without understanding.
To my way of thinking, that makes the CR a logical impossibility, and reasoning forward from an assumption of its existence can lead to nonsensical conclusions.
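For concreteness, here is a caricature of that rule-following setup, assuming the rulebook reduces to a finite lookup table (the entries are invented for illustration):

```python
# A caricature of Searle's room: a finite table pairing inputs with the
# outputs a native speaker might give. Nothing in here models Chinese;
# the operator just matches symbols.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",  # "How are you?" -> "Fine, thanks."
    "这句话最可能以哪些词结尾？": "「吃饭」或「睡觉」。",  # the prediction question
}

def chinese_room(message: str) -> str:
    """Return the rulebook's canned reply, or a canned fallback."""
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

print(chinese_room("这句话最可能以哪些词结尾？"))
```

If the table really does answer every predictive question the way a speaker would, then by the Dirac criterion the room is producing the outputs of understanding, which is exactly the tension described above.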
Good point. I was thinking of "figuring out the characteristics" fuzzily; but if we define it as giving correctly predictive output in response to a given question, the room either does it correctly or isn't a fully-functioning Chinese Room.