Wouldn’t such a GLUT by necessity require someone possessing immensely fine understanding of Chinese and English both, though? You could then say that the person+GLUT system as a whole understands Chinese, as it combines both the person’s symbol-manipulation capabilities and the actual understanding represented by the GLUT.
You might still not possess understanding of Chinese, but that does not mean a meaningful conversation has not taken place.
I have no idea whether a GLUT-based Chinese Room would require someone possessing immensely fine understanding of Chinese and English both. As far as I can tell, a GLUT-based Chinese Room is impossible, and asking what is or isn’t required to bring about an impossible situation seems a silly question. Conversely, if it turns out that a GLUT-based Chinese Room is not impossible, I don’t trust my intuitions about what is or isn’t required to construct one.
I have no problem with saying a Chinese-speaking-person+GLUT system as a whole understands Chinese, in much the same sense that I have no problem saying that a Chinese-speaking-person+tuna-fish-sandwich system as a whole understands Chinese. I’m not sure how interesting that is.
I’m perfectly content to posit an artificial system capable of understanding Chinese and having a meaningful conversation. I’m unable to conceive specifically of a GLUT that can do so.
I don’t think it’s that hard to conceive of. Imagine that the Simulation Argument is true; then, we could easily imagine a GLUT that exists outside of our own simulation, using additional resources; then our Chinese Room could just be an interface for such a GLUT.
As you said though, I don’t find the proposal very interesting, especially since I’m not a big fan of the Simulation Argument anyway.
I find I am unable, on brief consideration, to conceive of a GLUT sitting in some real world within which my observable universe is being computed… I have no sense of what such a thing might be like, or what its existence implies about the real world and how it differs from my observed simulation, or really much of anything interesting.
It’s possible that I might be able to if I thought about it for noticeably longer than I’m inclined to.
If you can do so easily, good for you.
Not necessarily. Theoretically, one could have very specific knowledge of Chinese, possibly acquired from very limited but deep experience. Imagine one person who has spoken Chinese only at the harbor, and has complete and total mastery of the maritime vocabulary of Chinese but would lack all but the simplest verbs relevant to the conversations happening just a mile further inland. Conceivably, a series of experts in a very localized domain could separately contribute their understanding, perhaps governed by a person who understands (in English) every conceivable key to the GLUT, but does not understand the values which must be placed in it.
Then, imagine someone whose entire knowledge of Chinese is the translation of the phrase: “Does my reply make sense in the context of this conversation?” This person takes an arbitrary amount of time, randomly combining phonemes and carrying out every conceivable conversation with an unlimited supply of Chinese speakers. (This is substantially more realistic if many people work in parallel in a domain with fewer potential combinations than the language as a whole.) Through perhaps the least efficient trial and error possible, they learn to carry on a conversation by rote, keeping only those conversational threads which, through pure chance, make sense throughout the entire dialogue.
In neither of these human experts do we find a real understanding of Chinese. It could be said that the understandings of the domain experts combine to form one great understanding, but the inefficient trial-and-error GLUT manufacturers certainly do not have any understanding, merely memory.
I agree on the basic point, but then my deeper point was that somewhere down the line you’ll find the intelligence(s) that created a high-fidelity converter for an arbitrary amount of information from one format to another. Searle is free to claim that the system does not understand Chinese, but its very function could only have been imparted by parties who collectively speak Chinese very well, making the room at the very least a medium of communication utilizing this understanding.
And this is before we mention the entirely plausible claim that the room-person system as a whole understands Chinese, even though neither of its two parts does. Any system taken apart to a sufficient degree will stop displaying the properties of the whole, so peering inside an electronic brain and asking “but where does the intelligence/understanding reside?” misses the point entirely.
This does not pass the simplest plausibility test. Do you imagine that being at a harbor causes people to have only conversations which are uniquely applicable to harbor activities? Does one not need words and phrases for concepts like “person”, “weather”, “hello”, “food”, “where”, “friend”, “tomorrow”, “city”, “want”, etc., not to mention rules of Chinese grammar and syntax? Such a “harbor-only” Chinese speaker may lack certain specific vocabulary, but he certainly will not lack a general understanding of Chinese.
Your other example is even sillier, especially given that the number of possible conversations in a human language is infinite. For one thing, a conversation where one person is constantly asking “Does my reply make sense?” is very, very different from the “same” conversation without such constant verbal monitoring. (Not to mention the specific fact that your imaginary expert would not be able to understand his interlocutor’s response to his question about whether his utterances made sense.)
You make some valid points.
A more realistic version would be for an observer to record all conversations between two Chinese speakers of length up to N, where N is some arbitrarily large but still finite conversation length. (If a GLUT were to capture every possible conversation, you are correct in saying that it would have to be infinite.)
From a sufficiently large sample size (though it is implausible to capture every probable conversation in any realistic amount of time, not to mention in any amount of time during which the language is relatively stable and unchanging), a tree of conversations could be built, with an arbitrarily large probability of including a given conversation within it.
From this, one could build a GLUT (though it would probably be more efficient as a tree) of the possible questions given context and the appropriate responses. Though it would be utterly infeasible to build, that is a limitation of the availability of data, rather than of the GLUT structure itself. It would not be perfect—one cannot build an infinite GLUT, nor can one acquire the infinite amount of data with which to fill it—but it could, perhaps, surpass even a native speaker by some measures.
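To make the proposal concrete, here is a minimal sketch of building such a table from a recorded corpus, keyed by the entire conversation history so far. (The dialogue snippets and the `build_glut` helper are invented for illustration; real keys would be full Chinese conversation histories up to length N.)

```python
from collections import defaultdict

def build_glut(recorded_conversations):
    """Map each conversation prefix (the context so far) to the responses
    observed to follow it. A true GLUT would keep one response per key."""
    table = defaultdict(list)
    for convo in recorded_conversations:
        for i in range(1, len(convo)):
            prefix = tuple(convo[:i])   # everything said so far is the key
            table[prefix].append(convo[i])
    return table

# Two toy recorded dialogues (stand-ins for real recorded conversations).
corpus = [
    ["ni hao", "ni hao", "zaijian"],
    ["ni hao", "ni hao ma?", "hen hao"],
]

glut = build_glut(corpus)

# Looking up a response requires the entire history, not just the last line:
print(glut[("ni hao",)])            # ['ni hao', 'ni hao ma?']
print(glut[("ni hao", "ni hao")])   # ['zaijian']
```

Note that keying on the whole prefix is what makes this a tree of conversations rather than a simple phrase-to-phrase dictionary; it is also what makes the table's size explode with N.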
I remain dubious.
Consider: what would the table contain as appropriate responses for the following questions? (Each question would certainly appear many, many times in our record of all conversations up to length N.)
“Hello, what is your name?”
“Where do you live?”
“What do you look like?”
“Tell me about your favorite television show.”
Remember that a GLUT, by definition, matches each input to one output. If you have to algorithmically consider context, whether environmental (what year is it? where are we?), personal (who am I?), or conversation history (what’s been said up to this point?), then that is not a GLUT, it is a program. You can of course convert any program that deterministically gives output for given input into a GLUT, but to do that successfully, you really do need all possible inputs and their outputs; and “input” here means “question, plus conversation history, plus complete description of world-state” (complete because we don’t know what context we’ll need in order to give an appropriate response).
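The program-versus-GLUT distinction can be sketched as follows: compiling even a trivially context-sensitive program into a lookup table requires enumerating its entire input space in advance. (The `respond` toy program, its questions, and its one-variable "world-state" are all invented for illustration.)

```python
from itertools import product

def respond(question, year):
    # A toy context-sensitive program: the answer depends on world-state.
    return f"It is {year}." if question == "what year is it?" else "I don't know."

questions = ["what year is it?", "where are we?"]
years = [2023, 2024]

# Building the GLUT means running the program on the full input space
# up front; every relevant piece of context becomes part of the key.
glut = {(q, y): respond(q, y) for q, y in product(questions, years)}

# Lookup now involves no algorithm at all, just the complete key:
print(glut[("what year is it?", 2024)])  # It is 2024.
```

With a realistic world-state in the key instead of a single year, the input space is astronomically large, which is the sense in which the table's builder would need to be well-nigh omniscient.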
In other words, to construct such a GLUT, you would have to be well-nigh omniscient. But, admittedly, you would not then have to “know” any Chinese.