Even if the Matrix-claimant says that the 3^^^^3 minds created will be unlike you, with information that tells them they’re powerless, if you’re in a generalized scenario where anyone has and uses that kind of power, the vast majority of mind-instantiations are in leaves rather than roots.
You would have to abandon Solomonoff Induction (or modify it to account for these anthropic concerns) to make this work. Solomonoff Induction doesn’t let you consider just “generalized scenarios”; you have to calculate each one in turn, and eventually one of them is guaranteed to be nasty.
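As a rough sketch of what I mean (the notation here is mine, not anything from the original posts): under a Solomonoff-style prior, the expected utility of an action a is, up to normalization, a sum over every computable hypothesis h, each weighted by roughly two to the minus its description length,

$$
\mathbb{E}[U(a)] \;\propto\; \sum_{h} 2^{-\ell(h)}\, U(a \mid h),
$$

where ℓ(h) is (roughly) the length of the shortest program describing h. There is no single term for “the generalized scenario where anyone has and uses that kind of power”; every nasty hypothesis shows up with its own weight, and one with a short description and an astronomically large |U(a | h)| can dominate the whole sum by itself.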
To paraphrase Wei’s example: the mugger says, “Give me five dollars, or I’ll simulate and kill 3^^^^3 people, and I’ll make sure they’re aware that they’re at a leaf and not at the root.” Congratulations, you now have over 3^^^^3 bits of evidence (in fact, it’s a tautology with probability 1) that the following proposition is true: “if the mugger’s statement is correct, then I am the one person at the root and not one of the 3^^^^3 people at the leaves.” By Solomonoff Induction, the scenario in which his statement is literally true has probability greater than 1/2^(10^50), since it can be described in far fewer than 10^50 bits. Once you try to evaluate the utility differential of that scenario, boom, we’re right back where we started.
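To put a very rough number on that last step (my own back-of-the-envelope sketch, not part of Wei’s example): the prior penalty is at most a factor of 2^(10^50), the stake is 3^^^^3 lives, and the stake wins by an absurd margin. The quantities are far too large to compute directly, so the check below compares iterated base-2 logarithms:

```python
import math

# Back-of-the-envelope check; the actual numbers are too large to
# represent, so we compare iterated base-2 logarithms instead.
# (3^^n below means a power tower of n threes.)

# Probability penalty: the scenario gets prior > 2^-(10^50), so the
# penalty factor is at most 2^(10^50).
loglog_penalty = math.log2(10**50)        # log2(log2(2^(10^50))) ~= 166

# Weak lower bound on the stake: 3^^^^3 vastly exceeds 3^^5, a tower of
# five 3s.  log2(3^^4) = 3^27 * log2(3), and log2(log2(3^^5)) is
# approximately log2(3^^4).
loglog_stake = 3**27 * math.log2(3)       # ~= 1.2e13

print(f"log2(log2(penalty)) ~= {loglog_penalty:.0f}")
print(f"log2(log2(stake))   >= {loglog_stake:.2e}")

# 1.2e13 >> 166, so even the 3^^5 lower bound dwarfs 2^(10^50):
# dividing a 3^^^^3-sized disutility by 2^(10^50) leaves a number that
# is still unimaginably larger than anything five dollars can buy.
```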
On the other hand, you could modify Solomonoff Induction to reflect anthropic concerns, but I’m not sure it’s any better than just modifying the utility function to reflect anthropic concerns.
And, of course, there’s still the pig problem in either case.