Firstly, I’ll say that, given that people already have a pretty well-shared intuitive understanding of what “intelligence” is meant to mean, I don’t think it’s a major problem for people to give explicit definitions like Legg and Hutter’s. I think people won’t then go out and assume that wealth, physical strength, etc. count as part of intelligence—they’re more likely to just not notice that the definitions might imply that.
But I think my points do stand. I think I see two things you might be suggesting:
1. Intelligence is the only thing that increases an agent’s ability to achieve goals across all environments.
2. Intelligence is an ability, which is part of the agent, whereas things like wealth are resources, and are part of the environment.
If you meant the first of those things, I’d agree that “‘Intelligence’ might help in a wider range of environments than those [other] capabilities or resources help in”. E.g., a billion US dollars wouldn’t help someone achieve their goals at any time before 1700 CE (or whenever), or probably at any time after 3000 CE, whereas intelligence probably would.
But note that Legg and Hutter say “across a wide range of environments.” A billion US dollars would help anyone, in any job, in any country, at any time from 1900 to 2020, achieve most of their goals. I would consider that a “wide” range of environments, even if it’s not maximally wide.
And there are aspects of intelligence that would only be useful in a relatively narrow set of environments, or for a relatively narrow set of goals. E.g., factual knowledge is typically included as part of intelligence, and knowing the dates of birth and death of US presidents will be helpful in various situations, but probably in fewer situations and for fewer goals than a billion dollars.
If you meant the second thing, I’d point in response to the other capabilities I mentioned, rather than the other resources. For example, it seems intuitive to me to speak of an agent’s charisma or physical strength as a property of the agent, rather than of the environment’s state. And I think those capabilities will help the agent achieve goals in a wide (though not maximally wide) range of environments.
We could decide to say an agent’s charisma and physical strength are properties of the state, not the agent, and that this is not the case for intelligence. Perhaps this is useful when modelling an AI and its environment in a standard way, or something like that, and perhaps it’s typically assumed (I don’t know). If so, then combining an explicit statement of that with Legg and Hutter’s definition may address my points, as that might explicitly slice all other types of capabilities and resources out of the definition of “intelligence”.
But I don’t think it’s obvious that things like charisma and physical strength are more a property of the environment than intelligence is—at least for humans, for whom all of these capabilities ultimately just come down to our physical bodies (assuming we reject dualism, which seems safe to me).
Does that make sense? Or did I misunderstand your points?
Thanks for the clarification. Yes, I’m suggesting bullet point 2.
LH intelligence evaluates learning algorithms. It makes sense to say an algorithm can adapt to a wide range of environments (in their precise formal sense: it achieves high return under the universal mixture over computable environments), and maybe that it’s more “charismatic” (has hard-coded social skills, or can learn them easily in relevant environments). But it doesn’t make sense to say that an algorithm is physically stronger; that has to be a fact encoded in the environment’s state (especially in this dualistic formalism).
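For concreteness, the measure I have in mind is roughly the one from their “Universal Intelligence” paper (I’m sketching from memory, so my notation may not match theirs exactly):

```latex
% Sketch of the Legg-Hutter universal intelligence measure (my paraphrase, not their exact notation):
%   E          = the class of computable (reward-summable) environments \mu
%   K(\mu)     = the Kolmogorov complexity of environment \mu
%   V^\pi_\mu  = the expected total reward the policy \pi obtains when interacting
%                with \mu through the usual action / observation-reward loop
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^\pi_\mu
```

The point is that only the policy gets scored: anything like wealth or physical strength would have to be encoded inside each environment and its state, so it can’t show up as part of the agent’s intelligence under this measure.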
The paper’s math automatically captures these facts, in my opinion. I agree the boundary gets fuzzier in an embedded context, but so do a lot of things right now.
Ok, so it sounds like Legg and Hutter’s definition works given certain background assumptions / ways of modelling things, which they do adopt in their full paper on the definition.
But in the paper I cited, Legg and Hutter give their definition without mentioning those assumptions / ways of modelling things. And they don’t seem to be alone in that, at least going by the (out-of-context) quotes from other authors that they collect, which include:
“[Performance intelligence is] the successful (i.e., goal-achieving) performance of the system in a complicated environment”
“Achieving complex goals in complex environments”
“the ability to solve hard problems.”
These definitions could all do a good job capturing what “intelligence” typically means if some of the terms in them are defined certain ways, or if certain other things are assumed. But they seem inadequate by themselves, in a way Legg and Hutter don’t note in their paper. (Also, Legg and Hutter don’t seem to indicate that that paper is just or primarily about how intelligence should be defined in relation to AI systems.)
That said, as I mentioned before, I don’t actually think this is a very important oversight on their part.