Thanks for the clarification. Yes, I’m suggesting bullet point 2.
LH intelligence evaluates learning algorithms. It makes sense to say an algorithm can adapt to a wide range of environments (in their precise formal sense: it achieves high return under the universal mixture over computable environments), and maybe that it’s more “charismatic” (has hard-coded social skills, or can learn them easily in relevant environments). But it doesn’t make sense to say that an algorithm is physically stronger; that has to be a fact encoded in the environment’s state (especially in this dualistic formalism).
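For concreteness, a rough sketch of the measure I have in mind (my paraphrase of Legg and Hutter’s universal intelligence definition; notation may differ slightly from theirs):

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

where $E$ is the class of computable, reward-summable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ is the expected total reward the agent $\pi$ obtains in $\mu$. The agent enters this formula only as a policy over observation histories, so a property like physical strength has nowhere to live except inside the environments $\mu$.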
The paper’s math automatically captures these facts, in my opinion. I agree the boundary gets fuzzier in an embedded context, but so do a lot of things right now.
Ok, so it sounds like Legg and Hutter’s definition works given certain background assumptions / ways of modelling things, which they adopt in their full paper on that definition.
But in the paper I cited, Legg and Hutter give their definition without mentioning those assumptions / ways of modelling things. And they don’t seem to be alone in that, at least judging by the (out-of-context) quotes from other researchers that they provide, which include:
“[Performance intelligence is] the successful (i.e., goal-achieving) performance of the system in a complicated environment”
“Achieving complex goals in complex environments”
“the ability to solve hard problems.”
These definitions could all do a good job of capturing what “intelligence” typically means if some of the terms in them are defined in certain ways, or if certain other things are assumed. But they seem inadequate by themselves, in a way Legg and Hutter don’t note in their paper. (Also, Legg and Hutter don’t seem to indicate that that paper is just or primarily about how intelligence should be defined in relation to AI systems.)
That said, as I mentioned before, I don’t actually think this is a very important oversight on their part.