Solomonoff priors work fine for adversarial interaction; after all, the prior distribution includes plenty of adversarial agents. The main problem is that all such priors are uncomputable even in principle, and so are equally useless for any interaction, adversarial or not.
If you happen to have a hypercomputer in your mind, then you can ignore that inconvenient fact.[1] You can use a Solomonoff prior to estimate the distribution of outcomes for each of your possible actions under every possible model of the world, weighted by complexity, and those models naturally include everything about the agent you’re interacting with.
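For concreteness, here is roughly what that means. The Solomonoff prior weights each program $p$ for a fixed universal prefix machine $U$ by $2^{-\ell(p)}$, where $\ell(p)$ is the length of $p$ in bits, so the prior probability that your observations begin with a string $x$ is

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},$$

the total weight of all programs whose output starts with $x$. The hypercomputer conditions this mixture on everything observed so far and picks the action with the highest expected utility, AIXI-style. Both steps require summing over infinitely many programs and deciding which of them halt, which is exactly the uncomputability noted above.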
Agents with bounded rationality can’t do this, and have to rely on much cruder heuristics and social structures. For example, people who say “I’ve got a device that will kill or maim a great many people unless you give me money” tend to be dealt with very harshly by society if their threat is credible, and still pretty harshly if their threat is not credible.
You also need to know that nothing else in your universe contains a hypercomputer of comparable strength, a belief that seems hard to justify. The nice thing about absurd hypotheticals is that we can just declare it to be true by fiat anyway.