I basically agree with everything in section 2.2, but I reread section 1.4 (which I also agree with as written), and I wanted to point out that the kind of self-modelling described in section 2.2 is in some ways unusual: I think it happens because of smart/social-animal-specific brain programming, and not just from generic pressure on generative models to form self-models in order to make better predictions. (Though humans are probably smart enough that decent self-models would also have formed for the reasons you describe in section 1.4, I think humans have substantial pre-wired machinery for self-modelling.)
In most animals, I think, the cortex mostly just models the world; there are parts of the cortex that model other parts of the cortex, but in the end that just translates into making better predictions about the world. (This also happens in humans, to be clear.)
What’s described in section 2.2, though, is that there exists a “(this) mind” concept, and the cortex forms beliefs about it by creating abstract labels for thoughts and using those labels to form new thoughts like “abstract-label-of-thought-X is present in the mind”. (Non-human animals that do such self-modelling probably don’t model thoughts but rather intentions/desires or the like.)
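To make concrete what I have in mind, here’s a toy sketch in Python. This is purely my own illustration, not anything from the post; all the names (`Thought`, `MindModel`, `notice`, the “thought-X” label) are placeholders I made up:

```python
from dataclasses import dataclass, field

@dataclass
class Thought:
    content: str   # e.g. a first-order thought about the world
    label: str     # compressed/abstract handle assigned to that thought

@dataclass
class MindModel:
    """Toy stand-in for the "(this) mind" concept: a model whose objects are
    labels of thoughts, not objects in the world."""
    present_thoughts: list[str] = field(default_factory=list)  # abstract labels

    def notice(self, thought: Thought) -> Thought:
        # Record that the (labelled) thought occurred in "the mind"...
        self.present_thoughts.append(thought.label)
        # ...and form a new, higher-order thought that refers only to the label.
        return Thought(
            content=f"{thought.label} is present in the mind",
            label=f"meta({thought.label})",
        )

# A first-order thought about the world gets a label, and the mind-model
# forms a second-order thought referring to that label.
t1 = Thought(content="there is a berry behind that bush", label="thought-X")
mind = MindModel()
t2 = mind.notice(t1)
print(t2.content)  # -> "thought-X is present in the mind"
```

The point of the sketch is just that the second-order thought is about a label in the mind-model, not about anything out in the world.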
This kind of self-modelling is unusual because initially the model of the mind is separate from the model of the world: at first the model of the mind is its own magisterium, not immediately useful for making predictions about the world.
However, there are bridging laws between the model of the mind and the model of the physical world, e.g. the predictive binding between emotion and facial expression.
Evolutionarily, I think modelling the mind as a separate entity turns out to be useful mainly because the same machinery can be used to model and predict conspecifics. That is, such smart/social animals don’t just model the world, they model the world with minds (of conspecifics) embedded in it, and those minds are treated as a special case because better predictions about conspecifics can be made by generalizing from how one’s own mind (which can be observed in far more detail) operates.
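Here’s another toy sketch of how I picture the bridging laws making the mind-model pay rent for predicting conspecifics. Again, this is entirely made up by me; the emotion/expression mapping and all the names are placeholders, not anyone’s actual proposal:

```python
from dataclasses import dataclass

# Toy "bridging law": a predictive binding between a hidden mind-variable
# (emotion) and a world-observable (facial expression). Purely illustrative.
EMOTION_TO_EXPRESSION = {"angry": "scowl", "content": "relaxed face"}
EXPRESSION_TO_EMOTION = {v: k for k, v in EMOTION_TO_EXPRESSION.items()}

@dataclass
class MindModel:
    emotion: str = "unknown"

    def predict_expression(self) -> str:
        # Mind-model -> world prediction, via the bridging law.
        return EMOTION_TO_EXPRESSION.get(self.emotion, "unknown")

# The same machinery is instantiated twice:
own_mind = MindModel(emotion="content")  # observable in rich detail "from inside"
conspecific = MindModel()                # only observable via the world

# World -> mind inference for the conspecific, using the bridging law in
# reverse (the "generalizing from one's own mind" step).
observed = "scowl"
conspecific.emotion = EXPRESSION_TO_EMOTION.get(observed, "unknown")

# Mind -> world prediction: now the conspecific's behaviour can be anticipated.
print(conspecific.emotion)               # -> "angry"
print(conspecific.predict_expression())  # -> "scowl"
```

The design point I’m gesturing at is that one class of model serves both for the own mind (filled in from the inside) and for other minds (filled in through the bridging laws), which is why the otherwise-separate magisterium earns its keep.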
In humans this mind-modelling is especially sophisticated (e.g. humans seem to at least sometimes be aware of their own awareness), and there are many aspects I don’t understand yet. I think a major part of why the homunculus exists might be that it serves as the interface between the model of the mind and the model of the world.
But I’m still somewhat confused here, and I also don’t quite know what your position is.