You’re asking the right questions.

The most important difference between this approach and the way most people think about abstraction is that, in this approach, most of the key ideas/results do not explicitly involve an observer. The “info-at-a-distance” is more a property of the universe than of the observer, in exactly the same way that e.g. energy conservation or the second law of thermodynamics are more properties of the universe than of the observer.
Now, it’s still true that we need an observer in order to recognize that energy is conserved or entropy increases or whatever. There’s still an implicit observer in there, writing down the equations and mapping them to physical reality. But that’s true mostly in a philosophical sense, which doesn’t really have much practical bearing on anything; even if some aliens came along with radically different ways of doing physics, we’d still expect energy conservation and entropy increase and whatnot to be embedded in their predictive processes (though possibly implicitly). We’d still expect their physics to either be equivalent to ours, or to make outright wrong predictions (other than at the very small/very big scales where ours is known to be incomplete). We’d even expect a lot of the internal structure to match, since they live in our universe and are therefore subject to similar computational constraints (specifically locality).
Abstraction, I claim, is like that.
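To make the info-at-a-distance idea concrete, here’s a minimal toy sketch (a binary Markov chain with an arbitrary 10% flip rate, chosen purely for illustration, not any particular physical system): how much information about a variable survives t steps away is a number fixed by the dynamics alone.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) of a two-variable joint distribution."""
    px = joint.sum(axis=1, keepdims=True)   # marginal of the near variable
    py = joint.sum(axis=0, keepdims=True)   # marginal of the far variable
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px @ py)[mask])))

# Toy "universe": a binary chain where each step copies the previous variable,
# flipping it with probability 0.1 (an arbitrary illustrative noise level).
T = np.array([[0.9, 0.1],
              [0.1, 0.9]])
p0 = np.array([0.5, 0.5])                   # uniform distribution over X_0

for t in [1, 2, 5, 10, 20]:
    Tt = np.linalg.matrix_power(T, t)
    joint = np.diag(p0) @ Tt                # joint[i, j] = P(X_0 = i, X_t = j)
    print(f"I(X_0; X_{t}) = {mutual_information(joint):.4f} bits")
```

The chain forgets almost everything about X_0 at a distance; the little that does survive is exactly the sort of thing an abstraction has to carry, and that number comes out the same no matter who computes it or how they choose to write it down.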
On a meta-note, regarding this specifically:
“You are using mathematics, a formalized system optimized to be used by humans. And you use math/your intuition to formalize ‘the perceiving’.”
I think there’s a mistake people sometimes make when thinking about how-models-work (which you may or may not be making) that goes something like “well, we humans are representing this chunk-of-the-world using these particular mathematical symbols, but that’s kind of an arbitrary choice, so it doesn’t necessarily tell us anything fundamental which would generalize beyond humans”.
The mistake here is: if we’re able to accurately predict things about the system, then those predictions remain just as true even if they’re represented some other way. In fact, those predictions remain just as true even if they’re not represented at all—i.e. even if there are no humans around to make them. For instance, energy is still conserved even in parts of the universe which humans have never seen and will never see, and that still constrains the viable architectures of agent-like systems in those parts of the universe.
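As a toy sketch of that representation point (using an arbitrary made-up joint distribution, purely for illustration): compute the same predictive quantity under two different labelings of the states. The symbols change, the number doesn’t.

```python
import numpy as np

def mutual_information(joint):
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px @ py)[mask])))

rng = np.random.default_rng(0)
joint = rng.random((4, 4))
joint /= joint.sum()                       # an arbitrary joint distribution P(X, Y)

perm = rng.permutation(4)                  # pick some other labels for the states
relabeled = joint[np.ix_(perm, perm)]      # same system, different symbols

print(mutual_information(joint))           # these two numbers are identical
print(mutual_information(relabeled))
```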