I have said something on this before; the short version is that I don’t really buy Christiano’s argument that the Solomonoff Prior is malign, because I think there’s an invalid step in it.
The invalid step is the assumption that we can gain information about other potential civilizations’ values solely from the fact that we are in a simulation. The key issue is that since the simulation/mathematical multiverse hypotheses predict everything, we can gain no new information from them in a Bayesian sense.
(This is in fact the general problem with the simulation/mathematical multiverse hypotheses: because they predict everything, they let you predict nothing specific, and so you need more specialized theories to explain any particular observation.)
The other problem is that the argument assumes computation is costly, but there is no cost to computation in the Solomonoff Prior:
https://www.lesswrong.com/posts/tDkYdyJSqe3DddtK4/alexander-gietelink-oldenziel-s-shortform#w2M3rjm6NdNY9WDez
The link below, on how the argument that the Solomonoff Prior is malign can be made simpler, was the inspiration for my counterargument:
https://www.lesswrong.com/posts/KSdqxrrEootGSpKKE/the-solomonoff-prior-is-malign-is-a-special-case-of-a
I’ve also considered that objection (that no specific value predictions can be made) and addressed it implicitly in my list of demands on Adversaria, particularly “coordination” with any other Adversaria-like universes. If there is only one Adversaria-like universe, then Solomonoff induction will predict its values, though in practice they may still be difficult to predict. Also, even if coordination fails, there may be regularities in the values of Adversaria-like universes that cause them to “push in a common direction.”