Intelligence measures an agent’s ability to achieve goals in a wide range of environments.
That will be our ‘working definition’ for intelligence in this FAQ.
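For readers who want something less informal: this phrasing echoes Legg and Hutter’s definition, and their “universal intelligence” measure makes it precise by weighting an agent’s expected performance in each computable environment by that environment’s simplicity. Roughly, in their notation (π is the agent’s policy, E the set of environments, K Kolmogorov complexity, and V_μ^π the expected total reward π earns in environment μ):

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}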
I am not sold on your definition of intelligence:
Does this mean that viruses and cockroaches are more intelligent than humans? They can certainly achieve their goals (feeding and multiplying) in a “wide range of environments”, a much wider range than humans can. Well, maybe not in space.
I suspect that there should be a better definition. Wikipedia mentions abstract thought and other intangibles, but concedes that there is little agreement: “Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions.”
The standard cop-out “I know intelligence when I see it” is not very helpful, either.
I understand the need to have a discussion of AGI in the FAI FAQ, but I am skeptical that a critically minded person would settle for the definition you have given. Something general, measurable and not confused with a bacterial infection would be a good target.
Here’s an easy fix:
Intelligence measures an agent’s ability to achieve a wide range of goals in a wide range of environments.
One flaw in this phrasing is that an agent exists in a single world and pursues a single goal, so what really matters is being able to solve unexpected subproblems.
If you count a subgoal as a type of goal, then my fix still works well.
You could consider other possible worlds and other possible goals and see if the agent could also achieve those.
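As a toy sketch of that idea (everything here is hypothetical and purely illustrative, assuming some callable agent(env, goal) that reports success): score the agent by averaging its success over many sampled (environment, goal) pairs rather than the single pair it actually faces.

import random

def score_agent(agent, environments, goals, episodes=100):
    """Toy sketch: average an agent's success rate over many sampled
    (environment, goal) pairs, not just the one pair it actually faces.
    `agent(env, goal)` is a hypothetical callable returning True on success."""
    total = 0.0
    pairs = [(e, g) for e in environments for g in goals]
    for _ in range(episodes):
        env, goal = random.choice(pairs)
        total += 1.0 if agent(env, goal) else 0.0
    return total / episodes  # higher = "more intelligent" under this toy measure

A less toy-like measure would weight environments by their simplicity rather than sampling them uniformly, but this captures the “wide range” intuition.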
Perhaps: given a poorly defined domain, construct a decision theory that is as close to optimal (with respect to the goal of some future sensory inputs) as your sensory information about the domain allows.
This doesn’t give one a rigorous way to quantify intelligence, but it does let us rank it on an ordinal scale by making statements about how close to or far from optimal various decisions are. Otherwise I can’t see how to fold decisions about how much time to spend trying to define the domain more rigorously into the general definition.
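A minimal sketch of that ordinal scale (the helper estimated_value is a hypothetical stand-in for the agent’s own expected-value estimates): rank candidate decisions by their regret relative to the best option available under the agent’s model; smaller regret means closer to optimal, without claiming any rigorous cardinal measure.

def rank_decisions(decisions, estimated_value):
    """Toy ordinal ranking: order decisions by regret relative to the best
    available option under the agent's (possibly poor) model of the domain.
    `estimated_value(d)` is a hypothetical expected-value estimate for d;
    decisions must be hashable."""
    values = {d: estimated_value(d) for d in decisions}
    best = max(values.values())
    # Regret = shortfall from the best-known decision; smaller is closer to optimal.
    regrets = {d: best - v for d, v in values.items()}
    return sorted(decisions, key=lambda d: regrets[d])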