A hypothesis for the negative correlation:
More intelligent agents have a larger set of possible courses of action that they are capable of evaluating and carrying out. But picking well from a larger set is harder than picking well from a smaller set. So as intelligence increases, max performance grows faster than typical performance, and errors look more like 'disarray' than like 'just not being capable of that'. E.g. compare a human who left the window open while running the heater on a cold day with a thermostat that left the window open while running the heater: the human's mistake reads as disarray because closing the window was well within their abilities, while the thermostat's does not, because closing the window was never among its options.
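To make that slightly more concrete, here is a minimal toy simulation (my own sketch, not part of the original argument; the uniform values and Gaussian evaluation noise are assumptions chosen only for illustration). The idea: an agent that can evaluate more candidate actions has a better best-available option, but with fixed evaluation noise the option it actually picks lags further behind that best option, so the gap between max and typical performance widens as the menu grows.

```python
import random

# Toy model (illustrative assumption, not from the text): "intelligence" is the
# number of candidate actions the agent can generate and evaluate. Each action
# has a true value in [0, 1]; the agent's evaluation of each action is noisy,
# so it sometimes picks a mediocre option from a set that contains a great one.

def one_trial(n_options, noise, rng):
    values = [rng.random() for _ in range(n_options)]
    # The agent ranks options by a noisy estimate of their true value.
    estimates = [v + rng.gauss(0.0, noise) for v in values]
    picked = values[max(range(n_options), key=estimates.__getitem__)]
    return max(values), picked  # (best available, actually chosen)

def summarize(n_options, noise=0.3, trials=20_000, seed=0):
    rng = random.Random(seed)
    best_sum = picked_sum = 0.0
    for _ in range(trials):
        best, picked = one_trial(n_options, noise, rng)
        best_sum += best
        picked_sum += picked
    print(f"{n_options:4d} options | max ~ {best_sum / trials:.3f}"
          f" | typical ~ {picked_sum / trials:.3f}"
          f" | gap ~ {(best_sum - picked_sum) / trials:.3f}")

for n in (2, 8, 32, 128):
    summarize(n)
```

On this toy model the point is only qualitative: the larger the menu of actions, the more room there is for the agent to visibly fall short of what it 'could have' done, which is roughly what 'disarray' looks like from the outside.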
A second hypothesis: higher intelligence often involves increasing generality, i.e. having a larger set of goals and operating across a wider range of environments. But that increased generality makes the agent less predictable to an observer who models it as doing means-ends reasoning, because the agent is not relying on the small number of means-ends paths that a narrower agent would. This makes the agent seem less coherent in a sense, but not because it is less goal-directed (indeed, it might be more goal-directed and less of a stimulus-response machine).
Both hypotheses seem most relevant for comparing very different agents: comparisons across broad classes of systems, across species, or perhaps across different AI models. It is less clear that they apply when comparing different humans, or different organizations.