Uhm, an Aboriginal tends to see meaning in anything. The more regularities she finds, the more meaning she will form. Semiosis is the dynamic process of interpreting these signs.
If you were put in a Chinese room with no other input than some incomprehensible scribbles, you would probably start to consider that what you are doing does indeed have a meaning.
Of course, a less intelligent human in the room or a human put under pressure would not be able to understand Chinese even with the right algorithm. My point is that the right algorithm enables the right human to understand Chinese. Do you see that?
Then that’s an unnecessary assumption about Aboriginals. Take a native Madagascan instead (an arbitrary choice of ethnicity) and he might not see any meaning at all.
As far as I know it is not true, and certainly not supported by any concrete evidence, that humans must see intentional patterns in everything. Not every culture thought cloud patterns were a language, for example. In such a culture, someone beholding the sky doesn’t necessarily think it displays the actions of an intentional agent recording a message. The same can be true of Chinese scribbles.
If what you’re saying were true, it would be very surprising that so many human cultures throughout history never invented writing.
At any rate, if there exists a non-anomalous example of a human who, given sufficient time, could not learn Chinese in a Chinese Room, then the entire argument fails as a solution to the problem (let’s call this “the normal man argument”).
If it were enough that there exists some human who *could* learn Chinese in the room, then you could have just pointed to an example of a really intuitive learner from history, or some such.
It is enough for the original Chinese Room to exhibit a complete system that emulates understanding Chinese while no part of it (specifically the human part) understands Chinese. That is why you can’t prove a machine is “actually thinking” and all that jazz: it might be constructed like the aforementioned system (this is the basis for the normal man argument).
Of course, there are answers to this conundrum, but the one you posit doesn’t contradict the original point.