If it doesn’t have the capacity to understand human-level language then it’s not an AGI, as that is the defining characteristic of the concept (by my/Turing’s definition).
Turing never intended his test to be adopted as “the defining characteristic of the concept [of AGI]” in anything like this fashion. Human ‘level’ language is also somewhat misleading, inasmuch as it implies reaching a level of communicative power rather than adapting specifically to the kind of communication humans happen to have evolved, especially its quirks and weaknesses.
I disagree somewhat. It’s difficult to know exactly what “he intended”, but the opening of the paper that introduces the concept starts with “Can machines think?” and describes a reasonable language-based test: an intelligent machine is one that can convince us of its intelligence in plain human language.
I meant natural language, the understanding of which certainly does require a certain minimum level of cognitive capabilities.
We have a much greater understanding of what the “think” in “Can machines think?” means now. We have better tests than seeing if they can fake human language.
The test isn’t about faking human language; it’s about using language to probe another mind. Whales and elephants have brains built out of similar quantities of the same cortical circuits, but without a common language, stepping into their minds is very difficult.
What’s a better test for AI than the Turing test?
Give it a series of fairly difficult and broad-ranging tasks, none of which it was created with existing specialised knowledge to handle.
Yes—the AIQ idea.
But how do you describe the task and how does the AI learn about it? There’s a massive gulf between AIs which can have the task/game described to them in human language and those that cannot. Whale and elephant brains fall in the latter category. An AI which can realistically self-improve to human levels needs to be in the former category, like a human child.
You could define intelligence with an AIQ concept so abstract that it captures only learning from scratch without absorbing human knowledge, but that would be a different concept—it wouldn’t represent practical capacity to intellectually self-improve in our world.
Use something like Prolog to declare the environment and problem. If I knew how the AI would learn about it, I could build an AI already. And indeed, there are fields of machine learning for things such as Bayesian inference.
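Not something from the exchange itself, but a minimal sketch of what “declaring the environment and problem” could look like, written in Python rather than Prolog; the relation names and the toy reachability task are invented purely for illustration:

```python
# A toy illustration (not from the discussion above) of declaring an
# environment and a problem as facts plus rules, Prolog-style, and letting
# a naive forward-chaining loop derive the answer. The relations ("door",
# "reachable") and the question "which rooms can the agent reach?" are made up.

facts = {
    ("door", "a", "b"),
    ("door", "b", "c"),
    ("at", "agent", "a"),
}

def derive(facts):
    """Apply two hand-written rules once; return only the newly derived facts."""
    new = set()
    for f in facts:
        # reachable(X, Y) :- door(X, Y).
        if f[0] == "door":
            new.add(("reachable", f[1], f[2]))
    for f in facts:
        for g in facts:
            # reachable(X, Z) :- reachable(X, Y), door(Y, Z).
            if f[0] == "reachable" and g[0] == "door" and f[2] == g[1]:
                new.add(("reachable", f[1], g[2]))
    return new - facts

# Iterate to a fixed point: the solver is generic, only the declaration changes.
while True:
    fresh = derive(facts)
    if not fresh:
        break
    facts |= fresh

# The declared question: which rooms are reachable from the agent's room?
print(sorted(y for rel, x, y in facts if rel == "reachable" and x == "a"))
# prints ['b', 'c']
```

The point of the sketch is just the division of labour being proposed: the human supplies a formal description of the environment and the question, and the machine supplies the search or inference.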
If you have to describe every potential problem to the AI in Prolog, how will it learn to become a computer scientist or quantum physicist?
Describe the problem of learning how to become a computer scientist or quantum physicist, then let it solve that problem. Now it can learn to become a computer scientist or quantum physicist.
(That said, a better method would be to describe computer science and quantum physics and just let it solve those fields.)
Or a much better method: describe the problem of an AI that can learn natural language; the rest follows.
Except for all problems which are underspecified in natural language.
Which might be some pretty important ones.
Agreement that human children are more intelligent than whales or elephants is likely to be the closest we get to agreement on this subject. You would need to absorb a lot of new knowledge from all the replies, from various sources, that have been provided to you here already before further progress is possible.
Unfortunately it seems we are not even fully in agreement about that. A Turing-style test is a test of knowledge; an AIQ-style test is a test of abstract intelligence.
An AIQ-type test which just measures abstract intelligence fails to differentiate between feral Einstein and educated Einstein.
Effective intelligence, perhaps call it wisdom, is some product of intelligence and knowledge. The difference between human minds and those of elephants or whales is that of knowledge.
My core point, to reiterate again: the defining characteristic of human minds is knowledge, not raw intelligence.
Intelligence can produce knowledge from the environment. Feral Einstein would develop knowledge of the world, to the extent that he wasn’t limited by non-knowledge/intelligence factors (like finding shelter or feeding himself).
Possibly relevant: AIXI-style IQ tests.
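For anyone unfamiliar with the reference: the AIQ / universal-intelligence idea discussed above is usually formalised along the lines of Legg and Hutter’s measure (the notation below is theirs, not anything proposed in this thread), which scores an agent $\pi$ by its expected reward across all computable environments, weighted by their simplicity:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

where $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ and $V_\mu^\pi$ is the expected total reward agent $\pi$ obtains in $\mu$. No human knowledge enters anywhere in the definition, which is exactly why it measures the “learning from scratch” notion of intelligence rather than the practical, knowledge-laden capability argued for above.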