Is this definition circular? Let me make explicit why it might be:
You define intelligent agents (or degree of intelligence) as “those physical systems that, using few resources, can optimize future states across many contexts (that is, when put in many different physical contexts, in which they need to face problems in different domains)” (consider also this more formal definition, which ignores the notion of resources).
But how are “resources” defined? There is a very natural notion of resources: energy, or negentropy. But I’m not sure this captures everything we care about here. In many situations we care about the shape in which that energy comes packaged, since that determines how readily it can be put to use (imagine finding a diamond in the desert when you are dying of thirst). Some ways in which energy can be stored (in atomic nuclei, say?) are very hard to make use of without lots of other energy, in other forms, already available to exploit them. So we might naturally say something like “a resource is that which, across many contexts, can be used to optimize for future states (across many preference orderings)”. But what do we mean here by “can be used”? Probably something of the form “certain kinds of physical systems can take advantage of it, and transform it into optimization of future states”. And I don’t see how to define those physical systems other than as “intelligent agents”.
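To make the suspected circularity explicit, here is a rough schematic of the two definitions side by side (my own shorthand, not notation from the original post):

$$\text{Intelligent}(A) \;:\approx\; \text{in many contexts } c,\ A \text{ can optimize future states of } c \text{ using few \textbf{resources}}$$
$$\text{Resource}(r) \;:\approx\; \text{in many contexts } c,\ \text{some \textbf{intelligent} system can use } r \text{ to optimize future states of } c$$

Each definiens contains the other definiendum, so neither predicate gets grounded unless one of them can be cashed out in independent physical terms.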
A possible objection to this is that ultimately, when dealing with arbitrarily high “intelligence” and/or “variety of energy sources available”, no energy source will remain locked away from us: we will already have so many methods and so much machinery available that, for any given source, we can construct the mechanisms needed to extract it. And so we don’t need to resort to the definition of “resource” above, and can just use the natural and objective “energy” or “negentropy”. But even if that’s the case, I think that for any finitely intelligent agent with access to finitely many energy sources, there will be some energy source it is locked out of. And even if that weren’t the case for all such finite quantities, I do think that in the intelligence regimes we care about, many energy sources remain locked away.
You also seem to gesture at resources being equivalent to the computation used (by the agent). Maybe we could understand this as a claim about the agent itself, rather than about its relationship to its surroundings (like the surrounding negentropy), such as “the agent is instantiated using at most this much memory”. But I’m not sure we can define this well in any way other than “it fits in this small physical box”. And then we get degenerate conclusions of the form “the most intelligent systems are black holes”, or other expanding low-level physical processes.