I will try to edit this into a more comprehensive reply later, but since that will take me at least another week, for now I will point to one paper I am already familiar with on the hardness of decision-making when causality is unclear: https://arxiv.org/pdf/1702.06385.pdf. (Again, computational complexity is not my area of expertise, so I may be wrong.)
Re: safety of Solomonoff/AIXI, I am again unsure, but I think we can posit a situation where, very early in the world-model-building process, the simpler models (which the prior weights heavily precisely because of their simplicity) are wrong in ways that make very dangerous information-gathering actions look attractive.
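For concreteness, the weighting I have in mind is the standard universal prior (this formula is textbook material, not something from the discussion above): Solomonoff induction assigns each hypothesis, i.e. each program p for a universal prefix machine U, prior weight 2^{-ℓ(p)}, where ℓ(p) is the program's length in bits, so the predictive distribution is

    M(x) = \sum_{p : U(p) = x*} 2^{-ℓ(p)}

A short-but-wrong program can therefore dominate the mixture until enough evidence accumulates against it, and the worry is that the exploratory actions such an early, misweighted mixture recommends could themselves be unsafe.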
Apologies for not responding more fully; my understanding of this area is non-technical, but I came to tentative conclusions on these points and have discussed them with people more knowledgeable than myself, who agreed.