The boring answer to Solomonoff’s malignness is that the simulation hypothesis is true, but we can infer nothing about our universe through it, since the simulation hypothesis predicts everything, and thus is too general a theory.
“…Solomonoff’s malignness…”
I was friends with Ray Solomonoff; he was a lovely guy and definitely not malign.
Epistemic status: true but not useful.
I agree that the part where the Oracle infers from first principles that the aliens’ values are probably more common among potential simulators is also speculative. But I expect that superintelligent AIs with access to a lot of compute (enough to run simulations of their own) will in fact be able to infer non-zero information about the distribution of the simulators’ values, and that’s enough for the argument to go through.
I think this is in fact the crux: I don’t think they can do this in the general case, no matter how much compute is used. Even in more specific cases, I expect recovering the distribution to be extremely hard, verging on impossible, primarily because you get equal evidence for almost every value system, for the same reason that acquiring more compute is an instrumentally convergent goal. So you cannot infer anyone’s values solely from the fact that you live in a simulation.
In the general case, the distribution/probability isn’t even well defined at all.
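To make the “equal evidence” point concrete, here is a minimal Bayesian sketch (the uniform-likelihood assumption is my illustration of the argument above, not an established result): write $v$ for a candidate simulator value system and $\text{sim}$ for the observation that we live in a simulation. Then

$$P(v \mid \text{sim}) = \frac{P(\text{sim} \mid v)\,P(v)}{\sum_{v'} P(\text{sim} \mid v')\,P(v')}.$$

If running simulations is instrumentally convergent, then $P(\text{sim} \mid v) \approx c$ for almost every $v$; the constant cancels, leaving $P(v \mid \text{sim}) \approx P(v)$. The posterior just returns the prior, so the observation that you are simulated carries essentially no information about the simulators’ values.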