Thanks a lot! I think my misunderstanding came from conflating the computational-complexity issue of self-referential simulation (expanding the model costs too much, as you mention) with the purely mathematical issue of defining such a model. In the latter sense, you can definitely have a self-referential embedded model.
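To make that distinction concrete, here's a minimal sketch (purely illustrative, not from the original discussion): a self-referential model is easy to *define*, via an ordinary back-reference, and what blows up is only the attempt to fully *expand* that reference.

```python
# A world model that contains a model of "me", which in turn contains the
# world model itself. The self-reference is mathematically well-defined and
# cheap to construct; only unrolling it is costly.
world_model = {"objects": ["rock", "tree"], "me": None}
world_model["me"] = {"world_model": world_model}  # self-reference, finite object in memory

def expand(model, depth):
    """Unroll the self-reference `depth` levels; a full expansion would never terminate."""
    if depth == 0:
        return "..."  # truncate here
    return {
        "objects": model["objects"],
        "me": {"world_model": expand(model, depth - 1)},
    }

print(expand(world_model, 2))
```

Defining the model is one line; the cost only shows up in `expand`, which is the computational-complexity side of the issue.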
I’m embedded in the world, so my world model needs to contain a model of me, which means my world model needs to contain a copy of itself. That’s the sense in which my own world model is self-referential.
I’m not sure why the last “need” is true. Is it because we’re assuming my world model is good/useful? Because I can imagine a world model where I’m a black box, and so I don’t need to model my own world model.
In theory I could treat myself as a black box, though even then I’m going to need at least a functional self-model (i.e., a model of which outputs yield which inputs) in order to get predictions out of the model for anything in my future light cone.
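Here is a toy sketch of what a "functional self-model" could look like under that reading (all names and types are my own illustrative assumptions): the agent's internals stay a black box, but the agent still keeps a map from observations to actions, and composes it with its environment model to predict anything downstream of its own behavior.

```python
from typing import Callable

# Hypothetical toy setup: the self-model is purely functional
# (observation -> action), with no model of internal structure.
SelfModel = Callable[[str], str]   # observation -> action
EnvModel = Callable[[str], str]    # action -> next observation

def predict_future_inputs(self_model: SelfModel,
                          env_model: EnvModel,
                          obs: str,
                          horizon: int) -> list[str]:
    """Roll both models forward. Without the self-model, the environment model
    has no action to condition on, so nothing in the agent's future light cone
    can be predicted."""
    observations = []
    for _ in range(horizon):
        action = self_model(obs)   # black-box self, modeled only functionally
        obs = env_model(action)    # environment responds to the action
        observations.append(obs)
    return observations

# Example with stand-in models:
predict_future_inputs(lambda o: "act:" + o, lambda a: "obs-after-" + a, "start", 3)
```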
But usually I do assume that we want a “complete” world model, in the sense that we’re not ignoring any parts by fiat. We can be uncertain about what my internal structure looks like, but that still leaves us open to update if e.g. we see some fMRI data. What I don’t want is to see some fMRI data and then go “well, can’t do anything with that, because this here black box is off-limits”. When that data comes in, I want to be able to update on it somehow.