“Intelligence” vs. other capabilities and resources
Legg and Hutter (2007) collect 71 definitions of intelligence. Many, perhaps especially those from AI researchers, would actually cover a wider set of capabilities or resources than people typically want the term “intelligence” to cover. For example, Legg and Hutter’s own “informal definition” is: “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” But if you gave me a billion dollars, that would vastly increase my ability to achieve goals in a wide range of environments, even if it doesn’t affect anything we’d typically want to refer to as my “intelligence”.
(Having a billion dollars might lead to increases in my intelligence, if I use some of the money for things like paying for educational courses or retiring so I can spend all my time learning. But I can also use money to achieve goals in ways that don’t look like “increasing my intelligence”.)
I would say that there are many capabilities or resources that increase an agent’s ability to achieve goals in a wide range of environments, and intelligence refers to a particular subset of these capabilities or resources. Some of the capabilities or resources which we don’t typically classify as “intelligence” include wealth, physical strength, connections (e.g., having friends in the halls of power), attractiveness, and charisma.
“Intelligence” might help in a wider range of environments than those capabilities or resources help in (e.g., physical strength seems less generically useful). And some of those capabilities or resources might be related to intelligence (e.g., charisma), be “exchangeable” for intelligence (e.g., money), or be attainable via intelligence (e.g., higher intelligence can help one get wealth and connections). But it still seems a useful distinction can be made between “intelligence” and other types of capabilities and resources that also help an agent achieve goals in a wide range of environments.
I’m less sure how to explain why some of those capabilities and resources should fit within “intelligence” while others don’t. At least two approaches to this can be inferred from the definitions Legg and Hutter collect (especially those from psychologists):
Talk about “mental” or “intellectual” abilities.
But then of course we must define those terms.
Gesture at examples of the sorts of capabilities one is referring to, such as learning, thinking, reasoning, or remembering.
This second approach seems useful, though not fully satisfactory.
An approach that I don’t think I’ve seen, but which seems at least somewhat useful, is to suggest that “intelligence” refers to the capabilities or resources that help an agent (a) select or develop plans that are well-aligned with the agent’s values, and (b) implement the plans the agent has selected or developed. In contrast, other capabilities and resources (such as charisma or wealth) primarily help an agent implement its plans, and don’t directly provide much help in selecting or developing plans. (But as noted above, an agent could use those other capabilities or resources to increase its intelligence, which then helps the agent select or develop plans.)
For example, both (a) becoming more knowledgeable and rational and (b) getting a billion dollars would help one more effectively reduce existential risks. But, compared to getting a billion dollars, becoming more knowledgeable and rational is much more likely to lead one to prioritise existential risk reduction.
I find this third approach useful, because it links to the key reason why I think the distinction between intelligence and other capabilities and resources actually matters. This reason is that I think increasing an agent’s “intelligence” is more often good than increasing an agent’s other capabilities or resources. This is because some agents are well-intentioned yet currently have counterproductive plans. Increasing the intelligence of such agents may help them course-correct and drive faster, whereas increasing their other capabilities and resources may just help them drive faster down a harmful path.
(I plan to publish a post expanding on that last idea soon, where I’ll also provide more justification and examples. There I’ll also argue that there are some cases where increasing an agent’s intelligence would be bad yet increasing their “benevolence” would be good, because some agents have bad values, rather than being well-intentioned yet misguided.)
But if you gave me a billion dollars, that would vastly increase my ability to achieve goals in a wide range of environments, even if it doesn’t affect anything we’d typically want to refer to as my “intelligence”.
I don’t think it would. “Has a billion dollars” is a stateful property: it depends on the world state. I think the LH metric is pretty reasonable and correctly ignores how much money you have. The only thing you “bring” to every environment under the universal prior is your reasoning abilities.
My understanding is that this analysis conflates “able to achieve goals in general in a fixed environment” (power/resources) with “able to achieve high reward in a wide range of environments” (LH intelligence), but perhaps I have misunderstood.
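For reference, the formal version of the metric, as I understand it from Legg and Hutter’s separate paper on universal intelligence (so treat the details as my reconstruction rather than a quote), is roughly:

```latex
% Legg and Hutter's universal intelligence of a policy \pi
% (my paraphrase; E, K, and V are as in their universal-intelligence paper):
\[
  \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
\]
% E             : the class of all computable (reward-bounded) environments
% K(\mu)        : the Kolmogorov complexity of environment \mu,
%                 so simpler environments carry more weight
% V^{\pi}_{\mu} : the expected total reward \pi earns when run in \mu
```

The only free variable is the policy itself; a bank balance can only show up inside some particular environment’s state, which is why the measure ignores it.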
Firstly, I’ll say that, given that people already have a pretty well-shared intuitive understanding of what “intelligence” is meant to mean, I don’t think it’s a major problem for people to give explicit definitions like Legg and Hutter’s. I think people won’t then go out and assume that wealth, physical strength, etc. count as part of intelligence—they’re more likely to just not notice that the definitions might imply that.
But I think my points do stand. I think I see two things you might be suggesting:
Intelligence is the only thing that increases an agent’s ability to achieve goals across all environments.
Intelligence is an ability, which is part of the agent, whereas things like wealth are resources, and are part of the environment.
If you meant the first of those things, I’d agree that “‘Intelligence’ might help in a wider range of environments than those [other] capabilities or resources help in”. E.g., a billion US dollars wouldn’t help someone at any time before 1700 CE (or whenever), or probably any time after 3000 CE, achieve their goals, whereas intelligence probably would.
But note that Legg and Hutter say “in a wide range of environments.” A billion US dollars would help anyone, in any job, any country, and at any time from 1900 to 2020, achieve most of their goals. I would consider that a “wide” range of environments, even if it’s not maximally wide.
And there are aspects of intelligence that would only be useful in a relatively narrow set of environments, or for a relatively narrow set of goals. E.g., factual knowledge is typically included as part of intelligence, and knowing the dates of birth and death of US presidents will be helpful in various situations, but probably in fewer situations and for fewer goals than a billion dollars.
If you meant the second thing, I’d point in response to the other capabilities I mentioned, rather than the other resources. For example, it seems to me intuitive to speak of an agent’s charisma or physical strength as a property of the agent, rather than of the state. And I think those capabilities will help the agent achieve goals in a wide (though not maximally wide) range of environments.
We could decide to say an agent’s charisma and physical strength are properties of the state, not the agent, and that this is not the case for intelligence. Perhaps this is useful when modelling an AI and its environment in a standard way, or something like that, and perhaps it’s typically assumed (I don’t know). If so, then combining an explicit statement of that with Legg and Hutter’s definition may address my points, as that might explicitly slice all other types of capabilities and resources out of the definition of “intelligence”.
But I don’t think it’s obvious that things like charisma and physical strength are more a property of the environment than intelligence is—at least for humans, for whom all of these capabilities ultimately just come down to our physical bodies (assuming we reject dualism, which seems safe to me).
Does that make sense? Or did I misunderstand your points?
Thanks for the clarification. Yes, I’m suggesting bullet point 2.
LH intelligence evaluates learning algorithms. It makes sense to say an algorithm can adapt to a wide range of environments (in their precise formal sense: achieves high return under the universal mixture over computable environments), and maybe that it’s more “charismatic” (has hard-coded social skills, or can learn them easily in relevant environments). But it doesn’t make sense to say that an algorithm is physically stronger—that has to be a fact which is encoded by the environment’s state (especially in this dualistic formalism).
The paper’s math automatically captures these facts, in my opinion. I agree the boundary gets fuzzier in an embedded context, but so do a lot of things right now.
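To make that dualism concrete, here is a toy sketch of the kind of agent-environment loop I have in mind (the class names, variables, and reward rule are all made up for illustration; none of it is from Legg and Hutter’s paper):

```python
# Toy sketch of the dualistic agent-environment formalism under discussion.
# Everything here (class names, the "money"/"strength" entries, the reward
# rule) is invented for illustration; it is not from Legg and Hutter.

class Environment:
    """Holds all world state, including facts about the agent's body and
    wealth. In the dualistic picture these are not part of the agent."""
    def __init__(self):
        self.state = {"agent_money": 0, "agent_strength": 10}

    def step(self, action):
        if action == "work":
            self.state["agent_money"] += 1
        reward = 0.01 * self.state["agent_money"]
        observation = dict(self.state)  # the env chooses what the agent sees
        return observation, reward


class Agent:
    """The agent is just a policy: a map from interaction history to
    actions. Its LH-style intelligence is a property of this map alone."""
    def __init__(self):
        self.history = []

    def act(self, observation, reward):
        self.history.append((observation, reward))
        return "work"  # trivially hard-coded; a smarter agent would learn


# Interaction loop: the only thing the agent "brings" to this (or any other)
# environment is its policy; money and strength live in the environment.
env, agent = Environment(), Agent()
obs, rew = {}, 0.0
for _ in range(5):
    action = agent.act(obs, rew)
    obs, rew = env.step(action)
print(obs, rew)
```

Dropping the same Agent into a differently coded Environment leaves its policy untouched but need not preserve the money or strength entries at all, which is roughly the sense in which those things don’t transfer “across environments” while the policy does.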
Ok, so it sounds like Legg and Hutter’s definition works given certain background assumptions / ways of modelling things, which they assume in their full paper on their own definition.
But in the paper I cited, Legg and Hutter give their definition without mentioning those assumptions / ways of modelling things. And they don’t seem to be alone in that, at least given the out-of-context quotes they provide, which include:
“[Performance intelligence is] the successful (i.e., goal-achieving) performance of the system in a complicated environment”
“Achieving complex goals in complex environments”
“the ability to solve hard problems.”
These definitions could all do a good job capturing what “intelligence” typically means if some of the terms in them are defined certain ways, or if certain other things are assumed. But they seem inadequate by themselves, in a way Legg and Hutter don’t note in their paper. (Also, Legg and Hutter don’t seem to indicate that that paper is just or primarily about how intelligence should be defined in relation to AI systems.)
That said, as I mentioned before, I don’t actually think this is a very important oversight on their part.