There’s reason to suspect that any human-level AI must be programmed in human languages.
In fact, that’s almost tautological by virtue of the Turing Test.
What?
Do you mean humanlike AIs? An AI capable of passing the Turing Test would of course need to understand human language well enough to act convincingly human (or at least do a really good imitation), but that's not necessarily a human-level AI: convincing people that you're human is a separate task from actually being human, and probably a much easier one. And human-level AIs in general needn't understand human language any better than any other sort of language by default.
Anyway, an AI being “programmed in human languages” seems to be going by the “programming = instructions being given to a human servant” metaphor, and if you want that to work, you clearly first need to write the servant in something other than human language. And copying human psychology well enough that the AI actually understands human language as well as a human does, rather than being able to imitate understanding well enough to carry on a text-based conversation, is no easy task, and is probably a lot harder than manually coding a simple goal system like paperclip maximization in a lower-level language. But that could still be an AGI.
Human-level AI: an AGI design capable of matching the full intellectual capabilities of the best human scientists and engineers.
To get to human level in a practical timeframe, an AI will have to learn human knowledge; it will have to experience the equivalent of a standard 20-25 year education.
Learning human knowledge in practice requires learning human language as an early precursor step.
The software of a human mind (the memeset or belief network) is essentially a complex human-language program.
For an AI to achieve human level, it will have to actually understand human language as well as a human does. This requires a good deal of algorithmic complexity from the human brain at the hardware level, and it implies the capability to parse and run human-language programs.
So you only need to program the infant brain in a programming language—the rest can be programmed in human language.
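To make that proposed division of labour concrete, here is a deliberately toy sketch (purely illustrative, not something specified above): the only part written in a programming language is a generic learner, and everything domain-specific is meant to arrive as natural-language text that it absorbs. A real "infant brain" would obviously need far more machinery than a word counter.

```python
from collections import Counter

class SeedLearner:
    """Minimal 'infant brain': a generic learner with no built-in domain knowledge."""

    def __init__(self):
        self.memory = Counter()  # crude stand-in for a belief network / memeset

    def absorb(self, text):
        """'Run' a human-language program by folding its tokens into memory."""
        for token in text.lower().split():
            self.memory[token] += 1

    def recall(self, n=5):
        """Report the concepts most reinforced by the 'education' so far."""
        return self.memory.most_common(n)

if __name__ == "__main__":
    infant = SeedLearner()  # this part is programmed in a programming language
    # everything below is "programmed" in human language
    infant.absorb("energy equals mass times the speed of light squared")
    infant.absorb("mass tells spacetime how to curve and spacetime tells mass how to move")
    print(infant.recall())
```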
is probably a lot harder than manually coding a simple goal system like paperclip maximization in a lower-level language. But that could still be an AGI
If it doesn’t have the capacity to understand human level language then it’s not an AGI—as that is the defining characteristic of the concept (by my/Turing’s definition).
And thus by extension, the defining characteristic of a human mind is human language capability.
EDIT: Why are you downvoting? Don’t agree and don’t want to comment?
If it doesn’t have the capacity to understand human level language then it’s not an AGI—as that is the defining characteristic of the concept (by my/Turing’s definition).
Turing never intended his test to be adopted as “the defining characteristic of the concept [of AGI]” in anything like this fashion. Human ‘level’ language is also somewhat misleading, inasmuch as it implies reaching a level of communication power rather than adapting specifically to the kind of communication humans happen to have evolved, especially its quirks and weaknesses.
Turing never intended his test to be adopted as “the defining characteristic of the concept [of AGI]” in anything like this fashion.
I disagree somewhat. It’s difficult to know exactly what “he intended”, but the opening of the paper that introduces the concept starts with “Can machines think?” and describes a reasonable language-based test: an intelligent machine is one that can convince us of its intelligence in plain human language.
Human ‘level’ language is also somewhat misleading, inasmuch as it implies reaching a level of communication power rather than adapting specifically to the kind of communication humans happen to have evolved, especially its quirks and weaknesses
I meant natural language, the understanding of which certainly does require a certain minimum level of cognitive capabilities.
We have a much greater understanding of what the “think” in “Can machines think?” means now. We have better tests than seeing if they can fake human language.
The test isn’t about faking human language; it’s about using language to probe another mind. Whales and elephants have brains built out of similar quantities of the same cortical circuits, but without a common language, stepping into their minds is very difficult.
What’s a better test for AI than the Turing test?
Give it a series of fairly difficult and broad-ranging tasks, none of which it was created with pre-existing specialised knowledge to handle.
Yes—the AIQ idea.
But how do you describe the task, and how does the AI learn about it? There’s a massive gulf between AIs that can have the task/game described to them in human language and those that cannot. Whales and elephants fall in the latter category. An AI that can realistically self-improve to human levels needs to be in the former category, like a human child.
You could define intelligence with an AIQ concept so abstract that it captures only learning from scratch without absorbing human knowledge, but that would be a different concept—it wouldn’t represent practical capacity to intellectually self-improve in our world.
Use something like Prolog to declare the environment and problem. If I knew how the AI would learn about it, I could build an AI already. And indeed, there are fields of machine learning for things such as Bayesian inference.
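For concreteness, a minimal sketch of the kind of formally declared problem this has in mind, done in Python rather than Prolog and entirely a toy example of my own: the environment is a biased coin, the “declaration” is its likelihood model, and the learner infers the bias by Bayesian updating instead of being told anything in natural language.

```python
import random

def bayesian_coin_inference(flips, grid_size=101):
    """Posterior over the coin's heads-probability after observing `flips` (1 = heads)."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]   # candidate biases 0.0 .. 1.0
    posterior = [1.0 / grid_size] * grid_size                # uniform prior
    for flip in flips:
        likelihood = [p if flip else (1 - p) for p in grid]  # the declared model of the environment
        posterior = [w * l for w, l in zip(posterior, likelihood)]
        total = sum(posterior)
        posterior = [w / total for w in posterior]           # renormalise
    return grid, posterior

if __name__ == "__main__":
    true_bias = 0.7
    data = [1 if random.random() < true_bias else 0 for _ in range(200)]
    grid, post = bayesian_coin_inference(data)
    print("MAP estimate of the bias:", grid[post.index(max(post))])
```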
If you have to describe every potential problem to the AI in Prolog, how will it learn to become a computer scientist or quantum physicist?
Describe the problem of learning how to become a computer scientist or quantum physicist, then let it solve that problem. Now it can learn to become a computer scientist or quantum physicist.
(That said, a better method would be to describe computer science and quantum physics and just let it solve those fields.)
Or a much better method: describe the problem of an AI that can learn natural language; the rest follows.
Except for all problems which are underspecified in natural language.
Which might be some pretty important ones.
Agreement that human children are more intelligent than whales or elephants is likely to be the closest we get to agreement on this subject. You would need to absorb a lot of new knowledge from the replies, from various sources, that have already been provided to you here before further progress is possible.
Unfortunately it seems we are not even fully in agreement about that. A Turing-style test is a test of knowledge; the AIQ-style test is a test of abstract intelligence.
An AIQ-type test that just measures abstract intelligence fails to differentiate between a feral Einstein and an educated Einstein.
Effective intelligence, call it wisdom perhaps, is some product of intelligence and knowledge. The difference between human minds and those of elephants or whales is one of knowledge.
My core point, to reiterate: the defining characteristic of human minds is knowledge, not raw intelligence.
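To put that “some product” loosely in symbols (my gloss, purely illustrative): effective capability $W = f(I, K)$ for some $f$ increasing in both intelligence $I$ and knowledge $K$, the crudest version being $W \approx I \cdot K$. On that reading a feral Einstein and an elephant, with comparable $I$ but little $K$, both score far below an educated human, which is exactly the distinction being drawn.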
Intelligence can produce knowledge from the environment. A feral Einstein would develop knowledge of the world, to the extent that he wasn’t limited by factors other than knowledge and intelligence (like finding shelter or feeding himself).
Possibly relevant: AIXI-style IQ tests.
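For readers who haven’t met them: the universal intelligence measure behind AIXI/AIQ-style tests (due to Legg and Hutter) scores a policy $\pi$ by its expected reward across computable environments, weighted by their simplicity, roughly

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi,$$

where $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ and $V_\mu^\pi$ is the expected discounted reward $\pi$ earns in $\mu$. Practical AIQ tests approximate the sum by sampling short programs on a reference machine, which is exactly the “abstract intelligence, no absorbed human knowledge” property discussed above.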