Try to solve metaphilosophy
This is largely Wei Dai’s idea. We have (computationally unbounded) models like AIXI that can be argued to capture many key aspects of human reasoning and action. But one thing they don’t capture is philosophy: our inclination to ask questions such as “what is reasoning, really?” in the first place. We could try to come up with a model that could be argued to ‘do philosophy’ in addition to things like planning and reasoning. This seems promising because philosophical ability is a glaring gap in our current models; in particular, they can’t currently account for our desire to come up with good models at all!