There’s an alignment-related problem: the problem of defining real objects. Relevant topics: environmental goals; the task identification problem; “look where I’m pointing, not at my finger”; The Pointers Problem; Eliciting Latent Knowledge.

I think I realized how people go from caring about sensory data to caring about real objects. But I need help figuring out how to capitalize on the idea.
So… how do humans do it?
Humans create very small models for predicting very small/basic aspects of sensory input (mini-models).
Humans use mini-models as puzzle pieces for building models for predicting ALL of sensory input.
As a result, humans get models in which it’s easy to identify “real objects” corresponding to sensory input.
For example, imagine you’re just looking at ducks swimming in a lake. You notice that ducks don’t suddenly disappear from your vision (permanence), their movement is continuous (continuity), and they seem to move in a 3D space (3D space). All those patterns (“permanence”, “continuity”, and “3D space”) are useful for predicting aspects of immediate sensory input. But all those patterns are also useful for developing deeper theories of reality, such as the atomic theory of matter, because you can imagine that atoms are small things which move continuously in 3D space, similar to ducks. (This image works less well once you get to Quantum Mechanics, but then aspects of QM feel less “real” and less relevant for defining objects.) As a result, it’s easy to see how the deeper model relates to surface-level patterns.
In other words: reality contains “real objects” to the extent that deep models of reality are similar to (models of) basic patterns in our sensory input.
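A minimal sketch of how this could look computationally (the predicate names and the scoring rule here are illustrative assumptions of mine, not a worked-out proposal): mini-models are cheap tests over sensory tracks, and a deeper model looks “real” to the degree that the entities it posits pass the same tests.

```python
# Toy sketch: "mini-models" as reusable tests over sensory input,
# used both for predicting observations and for scoring how "real"
# the entities posited by a deeper model are. Illustrative only.

def permanence(track):
    """Objects don't suddenly vanish: every frame has a position."""
    return all(pos is not None for pos in track)

def continuity(track, max_step=1.0):
    """Movement is continuous: consecutive positions stay close."""
    return all(abs(b - a) <= max_step for a, b in zip(track, track[1:]))

MINI_MODELS = [permanence, continuity]

def realness_score(posited_tracks):
    """Count how many posited entities obey the same basic patterns
    that were learned from surface-level sensory input."""
    return sum(all(m(t) for m in MINI_MODELS) for t in posited_tracks)

# Duck-like atoms move continuously, so they score high; a
# gerrymandered entity that "teleports" scores low.
atoms = [[0.0, 0.3, 0.7, 1.0]]
teleporter = [[0.0, 5.0, 0.1, 9.0]]
assert realness_score(atoms) > realness_score(teleporter)
```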
Creating an inhumanly good model of a human is related to formulating their preferences. A model captures many possibilities and the way many hypothetical things are simulated in the training data. Thus it’s a step towards eliminating path-dependence of particular life stories (and the preferences they motivate), by considering these possibilities altogether. Even if some of the possible life stories interact with distortionary influences, others remain untouched, and so must continue deciding their own path, for there are no external influences there and they are the final authority for what counts as aiding them anyway.
> Creating an inhumanly good model of a human is related to formulating their preferences.
How does this relate to my idea? I’m not talking about figuring out human preferences.
> Thus it’s a step towards eliminating path-dependence of particular life stories
What is “path-dependence of particular life stories”?
> I think things (minds, physical objects, social phenomena) should be characterized by computations that they could simulate/incarnate.
Are there other ways to characterize objects? Feels like a very general (or even fully general) framework. I believe my idea can be framed like this, too.
Models or real objects or things capture something that is not literally present in the world. The world contains shadows of these things, and the most straightforward way of finding models is by looking at the shadows and learning from them. Hypotheses are another toy example.
One of the features of models/things seems to be how they capture the many possibilities of a system simultaneously, rather than isolated particular possibilities. So what I gestured at was that when considering models of humans, the real objects or models behind a human capture the many possibilities of the way that human could be, rather than only the actuality of how they happen to be. And this seems useful for figuring out their preferences.
Path-dependence is the way outcomes depend on the path taken to reach them. A path-independent outcome is convergent: it’s always the same destination regardless of the path taken. Human preferences seem to be path-dependent on human timescales: growing up in Egypt may lead to a persistently different mindset than the same human would develop growing up in Canada.
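A toy illustration of the distinction (the update rule below is made up purely for illustration): a running total is path-independent, while a state in which earlier events reshape the impact of later ones is path-dependent.

```python
from itertools import permutations

def total(experiences):
    """Path-independent: the destination ignores the order of the path."""
    return sum(experiences)

def formed_preference(experiences):
    """Path-dependent: earlier experiences decay, so different orderings
    of the same life events end in persistently different states."""
    state = 0.0
    for e in experiences:
        state = 0.5 * state + e
    return state

events = (1.0, 2.0, 3.0)
# Every ordering of the same events converges to the same total...
assert len({total(p) for p in permutations(events)}) == 1
# ...but produces different "preferences" depending on the path taken.
assert len({formed_preference(p) for p in permutations(events)}) > 1
```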
I see. But I’m not talking about figuring out human preferences; I’m talking about finding world-models in which real objects (such as “strawberries” or “chairs”) can be identified. Sorry if that wasn’t clear in my original message, since I mentioned “caring”.
> Models or real objects or things capture something that is not literally present in the world. The world contains shadows of these things, and the most straightforward way of finding models is by looking at the shadows and learning from them.
You might need to specify what you mean a little bit.
The most straightforward way of finding a world-model is just predicting your sensory input. But then you’re not guaranteed to get a model in which something corresponding to “real objects” can be easily identified. That’s one of the main reasons why ELK is hard, I believe: in an arbitrary world-model, “Human Simulator” can be much simpler than “Direct Translator”.
So how do humans get world-models in which something corresponding to “real objects” can be easily identified? My theory is in the original message. Note that the idea is not just “predict sensory input”; it has an additional twist.
> I’m talking about finding world-models in which real objects (such as “strawberries” or “chairs”) can be identified.
My point is that chairs and humans can be considered in a similar way.
> The most straightforward way of finding a world-model is just predicting your sensory input. But then you’re not guaranteed to get a model in which something corresponding to “real objects” can be easily identified.
There’s the world as a whole, which generates observations, and there are particular objects on their own. A model that cares about individual objects needs to consider them separately from the world. The same object in a different world/situation should still make sense, so there are many possibilities for the way an object can be when placed in some context and allowed to develop. This is useful for modularity, but also for formulating properties of particular objects in a way that doesn’t get distorted by the influence of the rest of the world. Human preferences are one such property.
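One way to read this, as a sketch (the split into “own dynamics” and “context” is my framing, not anything standard): model an object by dynamics of its own, and treat the rest of the world as an interchangeable context, so the same object can be developed in many hypothetical situations.

```python
from typing import Callable

# An object's own dynamics, separate from any particular world.
def object_step(state: float) -> float:
    return state - 0.1 * state  # intrinsic tendency of the object

# The rest of the world is an interchangeable context.
Context = Callable[[float], float]
calm_world: Context = lambda s: s
windy_world: Context = lambda s: s + 0.5

def develop(state: float, context: Context, steps: int) -> float:
    """Let the same object unfold inside a given context."""
    for _ in range(steps):
        state = context(object_step(state))
    return state

# The same object placed in different contexts yields different
# members of its "many possibilities"; whatever stays stable across
# contexts is a candidate property of the object itself.
print(develop(1.0, calm_world, 10))
print(develop(1.0, windy_world, 10))
```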
> My point is that chairs and humans can be considered in a similar way.
Please explain how your point connects to my original message: are you arguing with it, supporting it, or asking how my idea applies to something?