I have a question. This is beyond my area of competence (I understood few of the technical parts), so bear with me here :)
This is how I understand the parts relevant to my question:
You want to:
Simulate the chaotic environment to be “perceived”
Formalize the perceiving (“info-at-a-distance”)
Check if the “perceived info” contains patterns/”structural organizations” similar to those used by humans.
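Just to check that I’m picturing the right kind of thing, here is the naive toy version I have in my head, in Python. This is entirely my own guess at the setup (the coupled logistic maps, the histogram mutual-information estimate, and the “patch mean” summary are all my inventions), so the details are surely not what you actually do:

```python
# Naive toy version of the three steps (my own guess, not the actual setup):
#   1. "chaotic environment"  -> a ring of weakly coupled logistic maps
#   2. "perceiving"           -> info-at-a-distance, estimated as mutual information
#   3. "check the patterns"   -> is the far-away info carried by a simple summary?
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_sites=50, n_steps=200, n_runs=5000, r=3.9, eps=0.1):
    """Run many independent copies of a chaotic coupled-map lattice; return final states."""
    x = rng.uniform(0.05, 0.95, size=(n_runs, n_sites))
    for _ in range(n_steps):
        f = r * x * (1 - x)  # logistic map applied at every site
        x = (1 - eps) * f + 0.5 * eps * (np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1))
    return x

def mutual_info(a, b, bins=16):
    """Crude histogram estimate of mutual information (in nats) between two 1-D samples."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

x = simulate()
patch, far_site = x[:, :5], x[:, 25]  # a local neighbourhood vs. a site far away on the ring
print("MI(one patch site; far site):", mutual_info(patch[:, 0], far_site))
print("MI(patch mean;     far site):", mutual_info(patch.mean(axis=1), far_site))
```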
My question is about the fundamental assumptions of this work. Assuming my understanding is correct, I wonder:
Since you create both the simulated environment and the “perceiver”, wouldn’t that risk introducing bias? If the “perceiver” outputs human-like structures, is there any way to tell whether they originate from an “objective” process or from you?
When you model “chaos”, you are deciding on the representation. You are using mathematics, a formalized system optimized to be used by humans. And you use math/your intuition to formalize “the perceiving”.
----
Taking a step back, I realize that GAI “in the wild” will most likely be “subjected” to the same human influences. But I haven’t thought about it much, and I’m curious about your take on this.
You’re asking the right questions.
The most important difference between this approach and the way most people think about abstraction is that, in this approach, most of the key ideas/results do not explicitly involve an observer. The “info-at-a-distance” is more a property of the universe than of the observer, in exactly the same way that e.g. energy conservation or the second law of thermodynamics are more properties of the universe than of the observer.
Now, it’s still true that we need an observer in order to recognize that energy is conserved or entropy increases or whatever. There’s still an implicit observer in there, writing down the equations and mapping them to physical reality. But that’s true mostly in a philosophical sense, which doesn’t really have much practical bearing on anything; even if some aliens came along with radically different ways of doing physics, we’d still expect energy conservation and entropy increase and whatnot to be embedded in their predictive processes (though possibly implicitly). We’d still expect their physics to either be equivalent to ours, or to make outright wrong predictions (other than at the very small/very big scales where ours is known to be incomplete). We’d even expect a lot of the internal structure to match, since they live in our universe and are therefore subject to similar computational constraints (specifically locality).
Abstraction, I claim, is like that.
On a meta-note, regarding this specifically:
You are using mathematics, a formalized system optimized to be used by humans. And you use math/your intuition to formalize “the perceiving”.
I think there’s a mistake people sometimes make when thinking about how-models-work (which you may or may not be making) that goes something like “well, we humans are representing this chunk-of-the-world using these particular mathematical symbols, but that’s kind of an arbitrary choice, so it doesn’t necessarily tell us anything fundamental which would generalize beyond humans”.
The mistake here is: if we’re able to accurately predict things about the system, then those predictions remain just as true even if they’re represented some other way. In fact, those predictions remain just as true even if they’re not represented at all—i.e. even if there’s no humans around to make them. For instance, energy is still conserved even in parts of the universe which humans have never seen and will never see, and that still constrains the viable architectures of agent-like systems in those parts of the universe.
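To make that concrete with the kind of quantity “info-at-a-distance” is built from: information-theoretic quantities simply don’t care which symbols an observer uses to encode the variables. A minimal sketch (Python; my own toy example, not anything from the actual framework):

```python
# Toy illustration (my own numbers, not the actual framework): the mutual information
# between two variables does not change when an observer re-encodes them.
import numpy as np

rng = np.random.default_rng(1)

def mutual_info(a, b):
    """Mutual information (in nats) between two integer-valued sample arrays."""
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1)  # joint histogram of (a, b) pairs
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

# A noisy channel: y is x with roughly 30% of the symbols replaced at random.
x = rng.integers(0, 8, size=200_000)
y = np.where(rng.random(x.size) < 0.3, rng.integers(0, 8, size=x.size), x)

# Two "observers" encode the very same variables with different symbol systems.
perm_x, perm_y = rng.permutation(8), rng.permutation(8)
print(mutual_info(x, y))                  # observer A's representation
print(mutual_info(perm_x[x], perm_y[y]))  # observer B's relabelling -> same value (up to float rounding)
```

Any invertible re-encoding, i.e. a different “choice of mathematical symbols”, leaves the number unchanged (up to floating-point rounding in the estimate); the quantity is pinned down by the joint behavior of the system itself, not by the representation an observer happens to pick.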