Intelligence measures an agent’s ability to achieve a wide range of goals in a wide range of environments.
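For reference, the formal version behind this phrasing (Legg and Hutter's universal intelligence measure, sketched here from memory rather than quoted) weights an agent's performance in each environment by that environment's simplicity:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}

where E is a set of computable environments, K(\mu) is the Kolmogorov complexity of the environment \mu, and V_\mu^{\pi} is the expected reward the policy \pi accumulates in \mu.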
One flaw in this phrasing is that an agent exists in a single world and pursues a single goal, so intelligence is more about being able to solve unexpected subproblems.
Perhaps: given a poorly defined domain, construct a decision theory that is as close to optimal (the goal being some future sensory inputs) as your sensory information about the domain allows.
This doesn't give us a rigorous way to quantify intelligence, but it does let us qualify it on an ordinal scale, by making statements about how close to or far from optimal various decisions are. Otherwise I can't see how to fold decisions about how much time to spend defining the domain more rigorously into the general definition. A toy sketch of that ordinal comparison follows below.
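Here is a minimal sketch of that ordinal comparison, assuming a made-up posterior over candidate worlds and made-up payoffs (none of these numbers come from the discussion above): rank each decision by its regret, the gap between its expected value under the agent's limited information and the best expected value available. A "gather more information" option sits in the same ranking, which is one way the time-spent-defining-the-domain question folds in.

```python
# Toy illustration (all hypotheses, probabilities, and payoffs invented):
# rank decisions ordinally by how far their expected value, given only the
# agent's limited information, falls short of the best available.

# The agent's posterior over candidate models of the poorly defined domain.
posterior = {"world_a": 0.6, "world_b": 0.3, "world_c": 0.1}

# Payoff of each candidate decision in each candidate world.
payoffs = {
    "act_now":     {"world_a": 1.0, "world_b": 0.2, "world_c": 0.0},
    "gather_info": {"world_a": 0.7, "world_b": 0.6, "world_c": 0.5},
    "do_nothing":  {"world_a": 0.1, "world_b": 0.1, "world_c": 0.1},
}

def expected_value(decision):
    """Expected payoff of a decision under the agent's posterior."""
    return sum(posterior[w] * payoffs[decision][w] for w in posterior)

# The best the agent could do given only this information.
best = max(expected_value(d) for d in payoffs)

# Ordinal scale: smaller regret means closer to optimal. This ranks decisions
# without assigning the agent a cardinal intelligence score.
for d in sorted(payoffs, key=lambda d: best - expected_value(d)):
    print(f"{d}: expected value {expected_value(d):.2f}, "
          f"regret {best - expected_value(d):.2f}")
```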
If you count a subgoal as a type of goal, then my fix still works well.
You could consider other possible worlds and other possible goals and see if the agent could also achieve those.
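A rough sketch of that counterfactual test, with placeholder worlds, goals, and a stubbed-out agent (the scoring function here is random purely to show the shape of the evaluation, not how any real agent would be scored):

```python
import random

random.seed(0)

# Placeholder sets of counterfactual worlds and goals; a formal version
# would weight environments by simplicity instead of sampling uniformly.
worlds = ["maze", "market", "negotiation"]
goals = ["reach_exit", "maximize_profit", "get_agreement"]

def run_agent(world, goal):
    """Stand-in for running the agent in `world` with `goal`; returns an
    achievement score in [0, 1]. Random here, only to show the interface."""
    return random.random()

def breadth_score(n_samples=100):
    """Average achievement over (world, goal) pairs the agent was never
    specifically built for."""
    return sum(run_agent(random.choice(worlds), random.choice(goals))
               for _ in range(n_samples)) / n_samples

print(f"breadth score: {breadth_score():.2f}")
```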