Yes—the AIQ idea.
But how do you describe the task, and how does the AI learn about it? There's a massive gulf between AIs that can have the task or game described to them in human language and those that cannot. Whales and elephants fall into the latter category. An AI that can realistically self-improve to human levels needs to be in the former category, like a human child.
You could define intelligence with an AIQ concept so abstract that it captures only learning from scratch without absorbing human knowledge, but that would be a different concept—it wouldn’t represent practical capacity to intellectually self-improve in our world.
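For concreteness, the universal intelligence measure that AIQ approximates (Legg and Hutter's Υ) scores an agent π by its expected reward across computable environments, weighted by each environment's Kolmogorov complexity:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

Here E is the class of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V_μ^π is the agent's expected total reward in μ. Nothing in the sum rewards absorbing existing human knowledge; it measures learning from scratch, which is exactly the abstraction in question.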
Use something like Prolog to declare the environment and the problem. If I knew how the AI would learn about it, I could build an AI already. And indeed, there are whole subfields of machine learning devoted to exactly that, such as Bayesian inference.
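A minimal sketch of what "declare the environment and problem" might look like, assuming a toy gridworld task; the predicates here (cell/2, start/1, goal/1, move/2, solve/3) are illustrative names, not part of any actual AIQ test suite:

```prolog
% A toy gridworld declared as Prolog facts and rules.

% the 3x3 grid
cell(X, Y) :- member(X, [1, 2, 3]), member(Y, [1, 2, 3]).

% where the agent starts and what counts as success
start(pos(1, 1)).
goal(pos(3, 3)).

% legal moves: one step along either axis, staying on the grid
move(pos(X, Y), pos(X2, Y)) :- (X2 is X + 1 ; X2 is X - 1), cell(X2, Y).
move(pos(X, Y), pos(X, Y2)) :- (Y2 is Y + 1 ; Y2 is Y - 1), cell(X, Y2).

% solve(+State, +Visited, -Path): depth-first search to the goal,
% keeping a visited list so the search cannot loop
solve(S, _, [S]) :- goal(S).
solve(S, Visited, [S | Path]) :-
    move(S, Next),
    \+ member(Next, Visited),
    solve(Next, [Next | Visited], Path).

% ?- start(S), solve(S, [S], Path).
```

The environment and the goal are pure declarations; only solve/3 does any searching.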
If you have to describe every potential problem to the AI in Prolog, how will it learn to become a computer scientist or quantum physicist?
Describe the problem of learning how to become a computer scientist or quantum physicist, then let it solve that problem. Now it can learn to become a computer scientist or quantum physicist.
(That said, a better method would be to describe computer science and quantum physics and just let it solve those fields.)
Or a much better method: describe the problem of an AI that can learn natural language; the rest follows.
Except for all problems which are underspecified in natural language.
Which might be some pretty important ones.
Agreement that human children are more intelligent than whales or elephants is likely to be the closest we get to agreement on this subject. You would need to absorb a lot of new knowledge from the replies that various sources have already provided to you here before progress is possible.
Unfortunately it seems we are not even fully in agreement about that. A Turing-style test is a test of knowledge; an AIQ-style test is a test of abstract intelligence.
An AIQ-type test which measures only abstract intelligence fails to differentiate between feral Einstein and educated Einstein.
Effective intelligence, perhaps call it wisdom, is some product of intelligence and knowledge. The difference between human minds and those of elephants or whales is one of knowledge.
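One crude way to formalize that (purely illustrative, not a measured relationship):

$$\text{wisdom} \;\approx\; \text{intelligence} \times \text{knowledge}$$

A multiplicative form captures why the product collapses when either factor is near zero, however large the other: feral Einstein keeps the intelligence factor but loses the knowledge factor.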
My core point, to reiterate: the defining characteristic of human minds is knowledge, not raw intelligence.
Intelligence can produce knowledge from the environment. Feral Einstein would develop knowledge of the world, to the extent that he wasn't limited by factors other than knowledge and intelligence (like finding shelter or feeding himself).