I’m pretty sure that the “hard problem of correctly identifying causality” is a major focus of MIRI’s decision theory research.
In what sense is discovering causality NP-hard? There’s the trivial sense in which you can embed an NP-hard problem (or a task of even higher complexity) into the real world, and there’s the sense in which inference in Bayesian networks can embed NP-hard problems.
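To make that second sense concrete, here is a minimal sketch (my own illustration, not something from the discussion) of the standard reduction showing how a SAT instance can be encoded as exact marginal inference in a Bayesian network. The particular formula and variable names are made up for the example.

```python
# Illustrative sketch: encoding SAT as exact inference in a Bayesian network.
# Each propositional variable is an independent fair coin; each clause node is a
# deterministic OR of its literals; a final node is the AND of all clauses.
# Then P(all clauses satisfied) = (#satisfying assignments) / 2^n,
# so deciding whether this marginal is > 0 decides satisfiability.

from itertools import product

# A 3-CNF formula over variables 0..n-1; a literal (i, True) means x_i,
# (i, False) means NOT x_i.  This particular formula is just an example.
clauses = [
    [(0, True), (1, False), (2, True)],
    [(0, False), (1, True), (2, True)],
    [(1, False), (2, False), (0, True)],
]
n = 3

def clause_holds(clause, assignment):
    # A clause is satisfied if any of its literals matches the assignment.
    return any(assignment[i] == sign for i, sign in clause)

# "Exact inference" here is done by brute-force enumeration, which is the point:
# in the worst case, computing this marginal exactly is as hard as counting
# satisfying assignments, which is why exact inference is NP-hard in general.
marginal = sum(
    1.0 / 2 ** n
    for assignment in product([False, True], repeat=n)
    if all(clause_holds(c, assignment) for c in clauses)
)

print(f"P(all clauses satisfied) = {marginal}")  # > 0 iff the formula is satisfiable
```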
Can you elaborate on why AIXI/Solomonoff induction is an unsafe utility maximizer, even for Cartesian agents?
I will try to edit this to include a more comprehensive reply later, but since that will take me at least another week, I will point to one paper I am already familiar with on the hardness of decisions where causality is unclear: https://arxiv.org/pdf/1702.06385.pdf (Again, computational complexity is not my area of expertise, so I may be wrong.)
Re: the safety of Solomonoff/AIXI, I am again unsure, but I think we can posit a situation where, very early in the world-model-building process, the simplest models, which are weighted heavily precisely because of their simplicity, are incorrect in ways that make very dangerous information-gathering actions look attractive.
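As a toy sketch of what I mean (my own illustration, with made-up hypothesis names and numbers, not a claim about AIXI's actual behaviour): Solomonoff-style weighting gives each hypothesis prior mass 2^(-description length), so with only a little data a short-but-wrong model can still dominate the posterior, and an agent planning against the current mixture may then pick actions that look informative or high-value only under that wrong model.

```python
# Toy sketch: simplicity-weighted posterior after very little data.
# Hypothetical hypotheses: (description length in bits, probability each assigns
# to the observations seen so far).  The "true" model is longer, so it starts
# with far less prior weight.
hypotheses = {
    "short_wrong_model": {"length": 10, "likelihood_of_data": 0.40},
    "long_true_model":   {"length": 40, "likelihood_of_data": 0.95},
}

# Posterior weight proportional to the 2^(-length) prior times the likelihood.
unnormalised = {
    name: 2.0 ** (-h["length"]) * h["likelihood_of_data"]
    for name, h in hypotheses.items()
}
total = sum(unnormalised.values())
posterior = {name: w / total for name, w in unnormalised.items()}

for name, p in posterior.items():
    print(f"{name}: posterior weight {p:.10f}")

# After little data, the short wrong model carries almost all the weight,
# because its 2^(-10) prior dwarfs the 2^(-40) prior of the longer correct model.
```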
Apologies for not responding more fully; this is an area where I have only a non-technical understanding, but I came to tentative conclusions on these points and have had discussions with people more knowledgeable than myself who agreed.