That’s true. I’m focusing on AIXI (/ AIXItl) in my next two posts because I want to see how much we can rely on indirect solutions along those lines to make a self-preserving, self-improving Cartesian agent. (Or an agent that starts off Cartesian but is easily self-modified, or humanly modified, to become naturalized.) AIXItl’s behavior is what ultimately matters, and if some crude hack can make its epistemic flaws irrelevant or effectively nonexistent, then we won’t need to abandon Solomonoff induction after all.
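(For concreteness, the Cartesian boundary is baked into AIXI’s definition. Roughly, in Hutter’s notation, at cycle $k$ with horizon $m$ the agent picks

$$\dot a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_{1:m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

where the environment is a program $q$ of length $\ell(q)$ on a universal machine $U$ that takes the agent’s actions as input and emits observations and rewards. The agent itself never appears inside $q$; that separation is the dualism at issue.)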
I’m not confident that’s possible because I’m not confident it’s a process we can automate or find a single magic bullet for, even if we come up with a clever band-aid here or there. Naturalistic reasoning isn’t just about knowing when you’ll die; it’s about knowing anything and everything useful about the physical conditions for your computations.
I’m not sure that this “Cartesian vs Naturalistic” distinction that you are making is really that fundamental.
An intelligent agent tries to learn a model of its environment that allows it to explain its observations and predict how to fulfil its goals. If that entails including in the world model a submodel that represents the agent itself, then the agent will learn such a submodel, provided it is smart enough and learning can be done safely (e.g. without accidentally dropping an anvil on its head).
After all, humans start with an intuitively dualistic worldview, and yet they are able to revise it to a naturalistic one, after observing enough evidence. Even people who claim to believe in supernatural souls tend to use naturalistic beliefs when making actual decisions (e.g. they understand that drugs, trauma or illness that physically affect the brain can alter cognitive functions).
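To make the point concrete, here’s a toy sketch (entirely my own illustration, with made-up numbers and hypothesis names, not an actual agent design): a Bayesian learner that starts out favouring a “Cartesian” hypothesis, under which events that physically perturb its own hardware are irrelevant to its percepts, and updates toward a “naturalistic” hypothesis once perturbations visibly degrade its percepts.

```python
import random

random.seed(0)

P_CORRUPT_IF_PERTURBED = 0.8  # made-up: perturbations usually garble percepts
P_CORRUPT_BASELINE = 0.1      # made-up: background rate of garbled percepts

def likelihood(corrupted, perturbed, hypothesis):
    """P(percept corrupted | whether the hardware was perturbed, hypothesis)."""
    if hypothesis == "naturalistic" and perturbed:
        p = P_CORRUPT_IF_PERTURBED
    else:
        # Under the Cartesian hypothesis, hardware perturbations are irrelevant.
        p = P_CORRUPT_BASELINE
    return p if corrupted else 1.0 - p

# Dualistic prior: heavily favour the Cartesian hypothesis.
posterior = {"cartesian": 0.9, "naturalistic": 0.1}

for step in range(200):
    perturbed = random.random() < 0.3  # the hardware gets poked now and then
    # The toy world is in fact naturalistic, so percepts follow that hypothesis.
    corrupted = random.random() < likelihood(True, perturbed, "naturalistic")
    # Ordinary Bayesian update over the two hypotheses.
    for h in posterior:
        posterior[h] *= likelihood(corrupted, perturbed, h)
    total = sum(posterior.values())
    for h in posterior:
        posterior[h] /= total

print(posterior)  # nearly all the mass ends up on "naturalistic"
```

Run it and the posterior lands almost entirely on the naturalistic hypothesis — provided, of course, that such a hypothesis is in the model class to begin with and the perturbations are survivable, which is just the “learning can be done safely” caveat above.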