I’m not sure that this “Cartesian vs Naturalistic” distinction that you are making is really that fundamental.
An intelligent agent tries to learn a model of its environment that allows it to explain its observations and predict how to fulfil its goals. If that entails including in the world model a submodel that represents the agent itself, then the agent will learn such a submodel, provided it is smart enough and the learning can be done safely (e.g. without accidentally dropping an anvil on its head).
After all, humans start with an intuitively dualistic worldview, and yet they are able to revise it to a naturalistic one after observing enough evidence. Even people who claim to believe in supernatural souls tend to rely on naturalistic beliefs when making actual decisions (e.g. they understand that drugs, trauma or illness that physically affect the brain can alter cognitive functions).