I don’t think this captures the fundamental nature of intelligence, and I think others are right to throw an error at the word “stupidly.”
Suppose there is some cognitive faculty, which we’ll call adaptability, which agents have. When presented with a novel environment (i.e. sense data, a set of possible actions, and the consequences of taking those actions), more adaptable agents will more rapidly choose actions with positive consequences.
Suppose there is some other cognitive faculty, which we’ll call knowledge, which agents also have. This is a characterization of the breadth of environments to which they have adapted, and how well they have adapted to them.
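To make these two faculties a little more operational, here is a toy sketch (my own framing; the names `adaptability_score` and `EpsilonGreedyAgent` are invented for illustration): adaptability shows up as how quickly an agent dropped into a novel environment with unknown consequences comes to choose positive-consequence actions, and the value table it accumulates along the way is its knowledge of that environment.

```python
import random

def adaptability_score(agent, rewards, steps=500):
    """Fraction of rounds in which the agent picks a positive-consequence action;
    a more adaptable agent reaches a high fraction in fewer steps."""
    hits = 0
    for _ in range(steps):
        action = agent.act()
        reward = rewards[action] + random.gauss(0, 0.1)  # noisy consequence of the action
        agent.observe(action, reward)
        hits += rewards[action] > 0
    return hits / steps

class EpsilonGreedyAgent:
    """Learns action values from observed consequences; the learned `values`
    table is its knowledge of this particular environment."""
    def __init__(self, n_actions, epsilon=0.1):
        self.epsilon = epsilon
        self.values = [0.0] * n_actions
        self.counts = [0] * n_actions

    def act(self):
        if random.random() < self.epsilon:                # occasionally explore
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def observe(self, action, reward):
        self.counts[action] += 1                          # incremental mean update
        self.values[action] += (reward - self.values[action]) / self.counts[action]

rewards = [-1.0, -0.2, 0.5, 1.0]           # a "novel environment": consequences unknown to the agent
agent = EpsilonGreedyAgent(len(rewards))
print(adaptability_score(agent, rewards))  # closer to 1.0 means it adapted faster
```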
Designing an agent with specific knowledge requires adaptability on the part of the designer; designing an agent with high adaptability requires adaptability on the part of the agent. Your general criticism seems to be “an agent can become knowledgeable about human conversations carried out over text channels with little adaptability of its own, and thus that is not a good test of adaptability.”
I would agree: a GLUT (giant lookup table) written in stone, which is not adaptable at all, could still contain all the knowledge necessary to pass the Turing test. An adaptable algorithm could also pass the Turing test, but only after consuming a corpus of thousands of conversations and millions of words and then participating in conversations itself. After all, that’s how I learned to speak English.
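To make the contrast concrete, here is a minimal sketch (again my own illustration, not anything from the original post): both agents end up holding the same prompt-to-reply table, but the GLUT has it installed by its designer, while the adaptive agent builds it from the conversations it observes.

```python
class GLUTAgent:
    """Knowledge without adaptability: the table is fixed at design time."""
    def __init__(self, table):
        self.table = dict(table)

    def reply(self, prompt):
        return self.table.get(prompt, "...")

class AdaptiveAgent:
    """Adaptability acquiring knowledge: the table is learned from observed conversations."""
    def __init__(self):
        self.table = {}

    def observe(self, prompt, reply):
        self.table[prompt] = reply        # the simplest possible adaptation step

    def reply(self, prompt):
        return self.table.get(prompt, "...")

corpus = [("How are you?", "Fine, thanks."), ("What's 2+2?", "4.")]

designed = GLUTAgent(corpus)              # the designer's adaptability did the work
learner = AdaptiveAgent()
for prompt, reply in corpus:              # the learner's adaptability does the work
    learner.observe(prompt, reply)

assert designed.reply("How are you?") == learner.reply("How are you?")
```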
Perhaps there is an optimal learner we could compare agents against. But communication transfers only finite information, and the bandwidth varies significantly, so the quality of the instruction (or the match between the instruction and the learner) should be part of the test. Even pure exploration is an environment where knowledge can help, especially when the exploration happens in a field tied to reality. (Indeed, it’s not clear that humans are adaptable to arbitrary environments, and so the binary “adaptable or not?” makes about as much sense as “intelligent or not?”.)
These two faculties suggest different thresholds for AI: an AI can eat the jobs of knowledge workers once it has their knowledge, and an AGI can eat the job of creating knowledge workers once it has adaptability.
(Here I used two clusters of cognitive faculties, but I think the DIKW pyramid is also relevant.)